Adverse Effects in Dual-Star Interferometry
NASA Technical Reports Server (NTRS)
Colavita, M. Mark
2008-01-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, the main contributors to the systematic error budget, and the coherence terms in the photometric error budget.
Integration deficiencies associated with continuous limb movement sequences in Parkinson's disease.
Park, Jin-Hoon; Stelmach, George E
2009-11-01
The present study examined the extent to which Parkinson's disease (PD) influences the integration of continuous limb movement sequences. Eight patients with idiopathic PD and 8 age-matched normal subjects were instructed to perform repetitive sequential aiming movements to specified targets under three accuracy constraints: 1) low accuracy (W = 7 cm) - minimal accuracy constraint, 2) high accuracy (W = 0.64 cm) - maximum accuracy constraint, and 3) mixed accuracy constraint - one target of high accuracy and another target of low accuracy. Sequential movements in the low accuracy condition were mostly cyclical, whereas in the high accuracy condition they were discrete, in both groups. When the accuracy constraint was mixed, the sequential movements were executed by assembling discrete and cyclical movements in both groups, suggesting that for PD patients the capability to combine discrete and cyclical movements to meet a task requirement appears to be intact. However, such functional linkage was not as pronounced as it was in normal subjects. Close examination of movement from the mixed accuracy condition revealed marked movement hesitations in the vicinity of the large target in PD patients, resulting in a bias toward discrete movement. These results suggest that PD patients may have deficits in ongoing planning and organizing processes during movement execution when the task requires assembling various accuracy requirements into more complex movement sequences.
Current Status of Astrometry Satellite missions in Japan: JASMINE project series
NASA Astrophysics Data System (ADS)
Yano, T.; Gouda, N.; Kobayashi, Y.; Tsujimoto, T.; Hatsutori, Y.; Murooka, J.; Niwa, Y.; Yamada, Y.
Astrometry satellites share common technological issues. (A) They must measure the positions of stars with high accuracy from the huge amount of data collected during the observational period. (B) High stabilization of the thermal environment in the telescope is required. (C) Attitude-pointing stability with sub-pixel accuracy is also required. Measuring stellar positions from a huge amount of data is the essence of astrometry, and systematic errors must be adequately excluded for each stellar image in order to obtain accurate positions. We have carried out a centroiding experiment for determining the positions of stars from about 10 000 images. Two points are important for the JASMINE mission system in order to achieve our aim. For Small-JASMINE, we require thermal stabilization of the telescope in order to obtain a high astrometric accuracy of about 10 micro-arcsec; to measure stellar positions with this accuracy, we must model the distortion of the image on the focal plane with an accuracy of better than 0.1 nm. We have verified numerically that this requirement is achieved if the thermal variation is within about 1 K / 0.75 h. We also require an attitude-pointing stability of about 200 mas / 7 s. A tip-tilt mirror will make such stable pointing achievable.
Adverse effects in dual-feed interferometry
NASA Astrophysics Data System (ADS)
Colavita, M. Mark
2009-11-01
Narrow-angle dual-star interferometric astrometry can provide very high accuracy in the presence of the Earth's turbulent atmosphere. However, to exploit the high atmospherically-limited accuracy requires control of systematic errors in measurement of the interferometer baseline, internal OPDs, and fringe phase. In addition, as high photometric SNR is required, care must be taken to maximize throughput and coherence to obtain high accuracy on faint stars. This article reviews the key aspects of the dual-star approach and implementation, the main contributors to the systematic error budget, and the coherence terms in the photometric error budget.
Optical registration of spaceborne low light remote sensing camera
NASA Astrophysics Data System (ADS)
Li, Chong-yang; Hao, Yan-hui; Xu, Peng-mei; Wang, Dong-jie; Ma, Li-na; Zhao, Ying-long
2018-02-01
To meet the high-precision optical registration requirements of a spaceborne low-light remote sensing camera, dual-channel optical registration of the CCD and EMCCD is achieved with a high-magnification optical registration system. This paper proposes a system-integration optical registration scheme, with an accuracy analysis, for a spaceborne low-light remote sensing camera with short focal depth and wide field of view, including analysis of the parallel misalignment of the CCD and of the registration accuracy. Actual registration results show that the imaging is clear and that the MTF and registration accuracy meet requirements, providing an important guarantee for obtaining high-quality image data in orbit.
Maneuver Recovery Analysis for the Magnetospheric Multiscale Mission
NASA Technical Reports Server (NTRS)
Gramling, Cheryl; Carpenter, Russell; Volle, Michael; Lee, Taesul; Long, Anne
2007-01-01
The use of spacecraft formations creates new and more demanding requirements for orbit determination accuracy. In addition to absolute navigation requirements, there are typically relative navigation requirements that are based on the size or shape of the formation. The difficulty in meeting these requirements is related to the relative dynamics of the spacecraft orbits and the frequency of the formation maintenance maneuvers. This paper examines the effects of bi-weekly formation maintenance maneuvers on the absolute and relative orbit determination accuracy for the four-spacecraft Magnetospheric Multiscale (MMS) formation. Results are presented from high fidelity simulations that include the effects of realistic orbit determination errors in the maneuver planning process. Solutions are determined using a high accuracy extended Kalman filter designed for onboard navigation. Three different solutions are examined, considering the effects of process noise and measurement rate on the solutions.
Milton, Martin J T; Wang, Jian
2003-01-01
A new isotope dilution mass spectrometry (IDMS) method for high-accuracy quantitative analysis of gases has been developed and validated by the analysis of standard mixtures of carbon dioxide in nitrogen. The method does not require certified isotopic reference materials and does not require direct measurements of the highly enriched spike. The relative uncertainty of the method is shown to be 0.2%. Reproduced with the permission of Her Majesty's Stationery Office. Crown copyright 2003.
Teaching High-Accuracy Global Positioning System to Undergraduates Using Online Processing Services
ERIC Educational Resources Information Center
Wang, Guoquan
2013-01-01
High-accuracy Global Positioning System (GPS) has become an important geoscientific tool used to measure ground motions associated with plate movements, glacial movements, volcanoes, active faults, landslides, subsidence, slow earthquake events, as well as large earthquakes. Complex calculations are required in order to achieve high-precision…
High accuracy wavelength calibration for a scanning visible spectrometer.
Scotti, Filippo; Bell, Ronald E
2010-10-01
Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ∼0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (∼0.005 Å) is possible, allowing absolute velocity measurements within ∼0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.
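As a rough illustration of the least-squares step in such a calibration (not the paper's multi-parameter fit over multiple spectra), the sketch below assumes a sine drive that makes wavelength linear in the motor counter, lam = p0 + p1*c, and recovers the two assumed constants from known lamp lines; all instrument numbers are synthetic.

```python
import numpy as np

# Assumed sine-drive model: the lead screw makes sin(theta) linear in the
# motor counter c, and the grating equation lam = (2*d/m)*sin(theta) then
# makes wavelength linear in c: lam = p0 + p1*c.
true_p0, true_p1 = 4000.0, 0.01  # hypothetical instrument constants
lines = np.array([4046.56, 4358.33, 5460.74, 6562.79])  # known lamp lines (Angstrom)
counts = (lines - true_p0) / true_p1  # simulated motor counter readings

# Recover the calibration constants from (counter, wavelength) pairs
p1, p0 = np.polyfit(counts, lines, 1)
print(p0, p1)
```

A real calibration would fit more parameters (grating constant, offsets, encoder terms) over many lines, but the least-squares structure is the same.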
NASA Astrophysics Data System (ADS)
Xiong, Ling; Luo, Xiao; Hu, Hai-xiang; Zhang, Zhi-yu; Zhang, Feng; Zheng, Li-gong; Zhang, Xue-jun
2017-08-01
A feasible way to improve the manufacturing efficiency of large reaction-bonded silicon carbide optics is to increase the processing accuracy in the grinding stage before polishing, which requires high-accuracy metrology. A swing arm profilometer (SAP) has been used to measure large optics during this stage. A method has been developed for improving the measurement accuracy of SAP by using a capacitive probe and implementing calibrations. The experimental result, compared with an interferometer test, shows an accuracy of 0.068 μm root-mean-square (RMS), and maps of 37 low-order Zernike terms show an accuracy of 0.048 μm RMS, demonstrating a powerful capability to provide a major input to high-precision grinding.
NASA Technical Reports Server (NTRS)
Bertiger, W.; Bar-Sever, Y.; Desai, S.; Duncan, C.; Haines, B.; Kuang, D.; Lough, M.; Reichert, A.; Romans, L.; Srinivasan, J.;
2000-01-01
The BlackJack family of GPS receivers has been developed at JPL to satisfy NASA's requirements for high-accuracy, dual-frequency, Y-codeless GPS receivers for NASA's Earth science missions. In this paper we will present the challenges that were overcome to meet this accuracy requirement. We will discuss the various reduced dynamic strategies, Space Shuttle dynamic models, and our tests for accuracy that included a military Y-code dual-frequency receiver (MAGR).
NASA Astrophysics Data System (ADS)
Liu, Dan; Fu, Xiu-hua; Jia, Zong-he; Wang, Zhe; Dong, Huan
2014-08-01
In high-energy laser test systems, higher requirements are placed on the surface profile and finish of optical elements. Taking a focusing aspherical Zerodur lens with a diameter of 100 mm as an example, the surface profile and surface quality of the lens were investigated using a combination of CNC and classical machining methods. Guided by profilometer and high-power microscope measurements, and by testing and simulation analysis, process parameters were continually improved during manufacturing. Mid- and high-frequency errors were trimmed and improved so that the surface form gradually converged to the required accuracy. The experimental results show that the final surface accuracy is less than 0.5 μm and the surface finish is □, which fulfils the accuracy requirement for the aspherical focusing lens in the optical system.
Du, Lei; Sun, Qiao; Cai, Changqing; Bai, Jie; Fan, Zhe; Zhang, Yue
2018-01-01
Traffic speed meters are important legal measuring instruments specially used for traffic speed enforcement and must be tested and verified in the field every year using a vehicular mobile standard speed-measuring instrument to ensure speed-measuring performance. The non-contact optical speed sensor and the GPS speed sensor are the two most common types of standard speed-measuring instruments. The non-contact optical speed sensor requires extremely high installation accuracy, and its speed-measuring error is nonlinear and uncorrectable. The speed-measuring accuracy of the GPS speed sensor is rapidly reduced if the number of received satellites is insufficient, which often occurs in urban high-rise regions, tunnels, and mountainous regions. In this paper, a new standard speed-measuring instrument using a dual-antenna Doppler radar sensor is proposed based on a tradeoff between the installation accuracy requirement and the usage region limitation; it has no specified requirements for its mounting distance, has no limitation on usage regions, and can automatically compensate for the effect of an inclined installation angle on its speed-measuring accuracy. Theoretical model analysis, simulated speed measurement results, and field experimental results compared with a high-accuracy GPS speed sensor showed that the dual-antenna Doppler radar sensor is effective and reliable as a new standard speed-measuring instrument. PMID:29621142
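The angle compensation can be illustrated with a toy model (a hedged sketch, not the instrument's actual signal processing): assuming two beams offset by a known half-angle about the mounting axis, the unknown installation angle cancels when the two radial speeds are combined.

```python
import math

def dual_beam_speed(v1, v2, half_angle):
    """Recover the true speed from two Doppler radial speeds.

    v1, v2: radial speeds seen by beams at -half_angle and +half_angle
    (radians) about the mounting axis.  Writing the unknown installation
    angle as theta, v1 = v*cos(theta - a) and v2 = v*cos(theta + a), so
    (v1+v2)/(2*cos(a)) = v*cos(theta) and (v1-v2)/(2*sin(a)) = v*sin(theta),
    and theta drops out of the quadrature sum.
    """
    s = (v1 + v2) / (2.0 * math.cos(half_angle))  # = v*cos(theta)
    d = (v1 - v2) / (2.0 * math.sin(half_angle))  # = v*sin(theta)
    return math.hypot(s, d)

# Simulated check: true speed 27.8 m/s, inclined installation angle 5 deg
v, theta, alpha = 27.8, math.radians(5.0), math.radians(20.0)
v1 = v * math.cos(theta - alpha)
v2 = v * math.cos(theta + alpha)
print(dual_beam_speed(v1, v2, alpha))  # recovers 27.8 for any theta
```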
NASA Technical Reports Server (NTRS)
Farmer, Jeffrey T.; Wahls, Deborah M.; Wright, Robert L.
1990-01-01
The global change technology initiative calls for a geostationary platform for Earth science monitoring. One of the major science instruments is the high frequency microwave sounder (HFMS), which uses a large-diameter, high-resolution, high-frequency microwave antenna. This antenna's size and required accuracy dictate the need for a segmented reflector. On-orbit disturbances may be a significant factor in its design. A study was performed to examine the effects of the geosynchronous thermal environment on the performance of the strongback structure for a proposed antenna concept for this application. The study included definition of the strongback and a corresponding numerical model to be used in the thermal and structural analyses, definition of the thermal environment, determination of structural element temperatures throughout potential orbits, estimation of resulting thermal distortions, and assessment of the structure's capability to meet surface accuracy requirements. Analyses show that shadows produced by the antenna reflector surface play a major role in increasing thermal distortions. Through customization of surface coating and element expansion characteristics, the segmented reflector concept can meet the tight surface accuracy requirements.
Calorimetric method for determination of ⁵¹Cr neutrino source activity
DOE Office of Scientific and Technical Information (OSTI.GOV)
Veretenkin, E. P., E-mail: veretenk@inr.ru; Gavrin, V. N.; Danshin, S. N.
Experimental study of nonstandard neutrino properties using high-intensity artificial neutrino sources requires the activity of the sources to be determined with high accuracy. In the BEST project, a calorimetric system for measurement of the activity of high-intensity (a few MCi) neutrino sources based on ⁵¹Cr with an accuracy of 0.5–1% has been created. In this paper, the main factors affecting the accuracy of determining the neutrino source activity are discussed. The calorimetric system design and the calibration results using a thermal simulator of the source are presented.
A high-accuracy optical linear algebra processor for finite element applications
NASA Technical Reports Server (NTRS)
Casasent, D.; Taylor, B. K.
1984-01-01
Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32-bit accuracy obtainable from digital machines. To obtain this required 32-bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
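The "multiplication by digital convolution" mentioned above can be sketched in software: the digit sequences of the two operands are convolved (the operation an optical system can perform in parallel), and the carries are resolved afterwards. A minimal base-10 illustration, not the processor's actual encoding:

```python
def convolve_digits(a, b):
    """Multiply two digit sequences (least-significant digit first) by
    discrete convolution; entries may exceed the base before carrying."""
    out = [0] * (len(a) + len(b) - 1)
    for i, da in enumerate(a):
        for j, db in enumerate(b):
            out[i + j] += da * db
    return out

def resolve_carries(digits, base=10):
    """Propagate carries to turn a raw convolution into base-10 digits."""
    carry, result = 0, []
    for d in digits:
        carry, r = divmod(d + carry, base)
        result.append(r)
    while carry:
        carry, r = divmod(carry, base)
        result.append(r)
    return result

# 123 * 45 = 5535; digits are least-significant first
raw = convolve_digits([3, 2, 1], [5, 4])   # [15, 22, 13, 4]
print(resolve_carries(raw))                # [5, 3, 5, 5], i.e. 5535
```

In the digitally encoded processor, each digit channel carries only a small dynamic range, which is the point: optical errors act on small digit values rather than on full 32-bit words.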
Atropos: specific, sensitive, and speedy trimming of sequencing reads.
Didion, John P; Martin, Marcel; Collins, Francis S
2017-01-01
A key step in the transformation of raw sequencing reads into biological insights is the trimming of adapter sequences and low-quality bases. Read trimming has been shown to increase the quality and reliability while decreasing the computational requirements of downstream analyses. Many read trimming software tools are available; however, no tool simultaneously provides the accuracy, computational efficiency, and feature set required to handle the types and volumes of data generated in modern sequencing-based experiments. Here we introduce Atropos and show that it trims reads with high sensitivity and specificity while maintaining leading-edge speed. Compared to other state-of-the-art read trimming tools, Atropos achieves significant increases in trimming accuracy while remaining competitive in execution times. Furthermore, Atropos maintains high accuracy even when trimming data with elevated rates of sequencing errors. The accuracy, high performance, and broad feature set offered by Atropos makes it an appropriate choice for the pre-processing of Illumina, ABI SOLiD, and other current-generation short-read sequencing datasets. Atropos is open source and free software written in Python (3.3+) and available at https://github.com/jdidion/atropos.
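As a minimal illustration of 3' adapter trimming (error-free matching only; Atropos itself uses error-tolerant alignment and a much richer feature set), one can scan for the leftmost read suffix that matches a prefix of the adapter:

```python
def trim_adapter(read, adapter, min_overlap=3):
    """Remove a 3' adapter from a read, allowing a partial adapter at
    the read end.  Scans left to right for the first position where the
    remaining read suffix exactly matches a prefix of the adapter, with
    at least min_overlap matching bases to avoid chance matches."""
    for i in range(len(read) - min_overlap + 1):
        suffix = read[i:]
        if adapter.startswith(suffix[:len(adapter)]):
            return read[:i]
    return read

# Read with a partial adapter at its 3' end (hypothetical sequences)
print(trim_adapter("ACGTACGTAGATCGGAA", "AGATCGGAAGAGC"))  # -> ACGTACGT
```

Real trimmers must also handle sequencing errors within the adapter, quality trimming, and paired-end constraints, which is where tools like Atropos earn their keep.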
Accuracy Assessment of Coastal Topography Derived from Uav Images
NASA Astrophysics Data System (ADS)
Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.
2016-06-01
To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with a similar accuracy. To evaluate the DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetry process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy is estimated by comparison with GNSS surveys. Two parameters are tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with a high vertical accuracy (< 10 cm). The georeferencing of the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank prevents the accuracy from improving when the spatial resolution of the images is decreased.
NASA Astrophysics Data System (ADS)
Dubroca, Guilhem; Richert, Michaël; Loiseaux, Didier; Caron, Jérôme; Bézy, Jean-Loup
2015-09-01
To increase the accuracy of earth-observation spectro-imagers, it is necessary to achieve high levels of depolarization of the incoming beam. The preferred device in space instruments is the so-called polarization scrambler, made of birefringent crystal wedges arranged in a single or dual Babinet. Today, with required radiometric accuracies of the order of 0.1%, it is necessary to develop tools that quickly find optimal, low-sensitivity solutions and to measure the performance with a high level of accuracy.
Region based Brain Computer Interface for a home control application.
Akman Aydin, Eda; Bay, Omer Faruk; Guler, Inan
2015-08-01
Environment control is one of the important challenges for disabled people who suffer from neuromuscular diseases. A Brain Computer Interface (BCI) provides a communication channel between the human brain and the environment without requiring any muscular activation. The most important expectations for a home control application are high accuracy and reliable control. The region-based paradigm is a stimulus paradigm based on the oddball principle and requires selection of a target at two levels. This paper presents an application of the region-based paradigm to smart home control for people with neuromuscular diseases. In this study, a region-based stimulus interface containing 49 commands was designed. Five non-disabled subjects participated in the experiments. Offline analysis of the experiments yielded 95% accuracy for five flashes. This result showed that the region-based paradigm can be used to select commands of a smart home control application successfully, with high accuracy at a low number of repetitions. Furthermore, a statistically significant difference was not observed between the level accuracies.
On a fast calculation of structure factors at a subatomic resolution.
Afonine, P V; Urzhumtsev, A
2004-01-01
In the last decade, the progress of protein crystallography allowed several protein structures to be solved at a resolution higher than 0.9 Å. Such studies provide researchers with important new information reflecting very fine structural details. The signal from these details is very weak with respect to that corresponding to the whole structure. Its analysis requires high-quality data, which previously were available only for crystals of small molecules, and a high accuracy of calculations. The calculation of structure factors using direct formulae, traditional for 'small-molecule' crystallography, allows a relatively simple accuracy control. For macromolecular crystals, diffraction data sets at a subatomic resolution contain hundreds of thousands of reflections, and the number of parameters used to describe the corresponding models may reach the same order. Therefore, the direct way of calculating structure factors becomes very time expensive when applied to large molecules. These problems of high accuracy and computational efficiency require a re-examination of computer tools and algorithms. The calculation of model structure factors through an intermediate generation of an electron density [Sayre (1951). Acta Cryst. 4, 362-367; Ten Eyck (1977). Acta Cryst. A33, 486-492] may be much more computationally efficient, but contains some parameters (grid step, 'effective' atom radii etc.) whose influence on the accuracy of the calculation is not straightforward. At the same time, the choice of parameters within safety margins that largely ensure a sufficient accuracy may result in a significant loss of CPU time, making it close to the time for the direct-formulae calculations. The impact of the different parameters on the computational efficiency of structure-factor calculation is studied.
It is shown that an appropriate choice of these parameters allows the structure factors to be obtained with a high accuracy and in a significantly shorter time than that required when using the direct formulae. Practical algorithms for the optimal choice of the parameters are suggested.
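The two routes compared above can be sketched in a 1D toy model: with point atoms placed exactly on grid nodes, the FFT of the sampled density reproduces the direct summation exactly, while real cases introduce the grid-step and atom-radius parameters the abstract discusses. All numbers below are synthetic.

```python
import numpy as np

# 1D toy model of F(h) = sum_j f_j * exp(2*pi*i*h*x_j), x_j fractional.
N = 64
positions = np.array([5, 17, 40])   # grid indices of point atoms
f = np.array([6.0, 8.0, 16.0])      # scattering factors (made up)

# Direct formula: O(atoms * reflections)
h = np.arange(N)
x = positions / N
F_direct = (f[None, :] * np.exp(2j * np.pi * h[:, None] * x[None, :])).sum(axis=1)

# FFT route: sample the density on the grid, then transform, O(N log N).
# numpy's ifft uses the exp(+2*pi*i*h*j/N) kernel, matching F(h) up to 1/N.
rho = np.zeros(N)
rho[positions] = f
F_fft = np.fft.ifft(rho) * N

print(np.allclose(F_direct, F_fft))  # exact agreement for on-grid atoms
```

Off-grid atoms with finite radii are where the accuracy/CPU-time tradeoffs studied in the paper appear: the density must then be sampled on a sufficiently fine grid over a sufficiently large atomic support.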
NASA Astrophysics Data System (ADS)
Haring, Martijn T.; Liv, Nalan; Zonnevylle, A. Christiaan; Narvaez, Angela C.; Voortman, Lenard M.; Kruit, Pieter; Hoogenboom, Jacob P.
2017-03-01
In the biological sciences, data from fluorescence and electron microscopy is correlated to allow fluorescence biomolecule identification within the cellular ultrastructure and/or ultrastructural analysis following live-cell imaging. High-accuracy (sub-100 nm) image overlay requires the addition of fiducial markers, which makes overlay accuracy dependent on the number of fiducials present in the region of interest. Here, we report an automated method for light-electron image overlay at high accuracy, i.e. below 5 nm. Our method relies on direct visualization of the electron beam position in the fluorescence detection channel using cathodoluminescence pointers. We show that image overlay using cathodoluminescence pointers corrects for image distortions, is independent of user interpretation, and does not require fiducials, allowing image correlation with molecular precision anywhere on a sample.
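Whether anchored on fiducials or on pointer spots, image overlay ultimately reduces to fitting a coordinate transform between matched point pairs. A hedged sketch of the least-squares affine fit on synthetic data (illustrative only, not the paper's distortion-correction pipeline):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2-D affine transform mapping src -> dst.

    src, dst: (N, 2) arrays of matched coordinates, e.g. commanded
    electron-beam positions and their detected spots in the fluorescence
    image (N >= 3).  Returns a 2x3 matrix A with dst ~ A @ [x, y, 1].
    """
    M = np.hstack([src, np.ones((len(src), 1))])   # (N, 3) design matrix
    X, *_ = np.linalg.lstsq(M, dst, rcond=None)    # (3, 2) solution
    return X.T                                     # (2, 3)

# Synthetic check: rotation + scale + shift is recovered exactly
rng = np.random.default_rng(0)
src = rng.uniform(0, 100, size=(10, 2))
th = np.radians(3.0)
R = 1.05 * np.array([[np.cos(th), -np.sin(th)],
                     [np.sin(th),  np.cos(th)]])
dst = src @ R.T + np.array([12.0, -7.5])

A = fit_affine(src, dst)
pred = np.hstack([src, np.ones((10, 1))]) @ A.T
print(np.allclose(pred, dst))
```

With cathodoluminescence pointers, the matched pairs come from the beam itself rather than from fiducial beads, so the fit (and higher-order distortion models) can be built anywhere on the sample.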
Technique for Very High Order Nonlinear Simulation and Validation
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.
2001-01-01
Finding the sources of sound in large nonlinear fields via direct simulation currently requires excessive computational cost. This paper describes a simple technique for efficiently solving the multidimensional nonlinear Euler equations that significantly reduces this cost, and demonstrates a useful approach for validating high-order nonlinear methods. Methods of up to 15th-order accuracy in space and time were compared, and it is shown that an algorithm with a fixed design accuracy approaches its maximal utility and then its usefulness exponentially decays unless higher accuracy is used. It is concluded that at least a 7th-order method is required to efficiently propagate a harmonic wave using the nonlinear Euler equations to a distance of 5 wavelengths while maintaining an overall error tolerance low enough to capture both the mean flow and the acoustics.
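The payoff of higher order shows up even in a first-derivative test: on the same grid, a 6th-order central stencil is orders of magnitude more accurate than a 2nd-order one on a smooth wave. A small sketch using standard finite-difference coefficients (not the paper's scheme):

```python
import numpy as np

def deriv(u, dx, order):
    """First derivative of periodic samples u by a central stencil."""
    if order == 2:
        c, offs = [-0.5, 0.5], [-1, 1]
    else:  # standard 6th-order central coefficients
        c = [-1/60, 3/20, -3/4, 3/4, -3/20, 1/60]
        offs = [-3, -2, -1, 1, 2, 3]
    # np.roll(u, -o)[i] == u[i + o] with periodic wrap-around
    return sum(w * np.roll(u, -o) for w, o in zip(c, offs)) / dx

n = 64
x = np.linspace(0, 2 * np.pi, n, endpoint=False)
u = np.sin(x)
dx = x[1] - x[0]

err2 = np.abs(deriv(u, dx, 2) - np.cos(x)).max()
err6 = np.abs(deriv(u, dx, 6) - np.cos(x)).max()
print(err2, err6)  # the 6th-order error is orders of magnitude smaller
```

The same effect compounds over long propagation distances, which is why the paper finds that wave propagation over many wavelengths demands at least 7th-order accuracy.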
10 CFR 72.11 - Completeness and accuracy of information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 72.11 Section 72.11 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF SPENT NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS C...
Sampling error of cruises in the California pine region
A.A. Hasel
1942-01-01
To organize cruises so as to steer a desirable middle course between high accuracy at too much cost and little accuracy at low cost is a problem many foresters have to contend with. It can only be done when the cruiser in charge has a real knowledge of the required standard of accuracy and of the variability existing in the timber stand. The study reported in this...
Collection and processing of data from a phase-coherent meteor radar
NASA Technical Reports Server (NTRS)
Backof, C. A., Jr.; Bowhill, S. A.
1974-01-01
An analysis of the measurement accuracy requirement of a high resolution meteor radar for observing short period, atmospheric waves is presented, and a system which satisfies the requirements is described. A medium scale, real time computer is programmed to perform all echo recognition and coordinate measurement functions. The measurement algorithms are exercised on noisy data generated by a program which simulates the hardware system, in order to find the effects of noise on the measurement accuracies.
Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.
Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang
2016-06-22
An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future, with a predicted accuracy of 5 × 10⁻⁶ °/h or better. However, existing calibration methods and devices cannot satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation run can be improved by about 8% with the proposed calibration method. The accuracy can be improved by at least 20% when the position accuracy of the atomic gyro INS reaches a level of 0.1 nautical miles/5 d. Compared with existing calibration methods, the proposed method, which calibrates more error sources and high-order small error parameters for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has great application potential in future atomic gyro INSs.
Establishment of a high accuracy geoid correction model and geodata edge match
NASA Astrophysics Data System (ADS)
Xi, Ruifeng
This research has developed a theoretical and practical methodology for efficiently and accurately determining sub-decimeter level regional geoids and centimeter level local geoids to meet regional surveying and local engineering requirements. It also provides a highly accurate static DGPS network data pre-processing, post-processing and adjustment method, together with a procedure for large GPS networks such as the state-level HRAN project, and an efficient and accurate methodology for joining soil coverages in GIS ARC/INFO. A total of 181 GPS stations have been pre-processed and post-processed to obtain an absolute accuracy better than 1.5 cm at 95% of the stations, with all stations achieving a 0.5 ppm average relative accuracy. A total of 167 GPS stations in and around Iowa have been included in the adjustment. After evaluation of GEOID96 and GEOID99, a more accurate and suitable geoid model has been established for Iowa. This new Iowa regional geoid model improved the accuracy from the sub-decimeter level (10-20 cm) to 5-10 cm. The local kinematic geoid model, developed using Kalman filtering, gives results better than the third-order leveling accuracy requirement, with a 1.5 cm standard deviation.
Parallelism measurement for base plate of standard artifact with multiple tactile approaches
NASA Astrophysics Data System (ADS)
Ye, Xiuling; Zhao, Yan; Wang, Yiwen; Wang, Zhong; Fu, Luhua; Liu, Changjie
2018-01-01
Nowadays, as workpieces become more precise and more specialized, artifacts acquire more sophisticated structures and higher accuracy, placing higher demands on measuring accuracy and measuring methods. As an important means of obtaining workpiece dimensions, the coordinate measuring machine (CMM) has been widely used in many industries. During the calibration of a self-developed CMM with a self-made high-precision standard artifact, the parallelism of the base plate used for fixing the artifact was found to be an important factor affecting measurement accuracy. To measure this parallelism, three tactile measurement methods were employed using an existing high-precision CMM, gauge blocks, a dial gauge and a marble platform, and their results were compared. The experiments show that all three methods reach micron-level final accuracy and meet the measurement requirements. The three approaches suit different measurement conditions, providing a basis for rapid, high-precision measurement under different equipment conditions.
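As a minimal sketch of the tactile approach, the parallelism of a plate face can be evaluated as the spread of dial-gauge height readings taken against a reference flat such as a marble platform; the readings below are hypothetical.

```python
# Hypothetical dial-gauge readings (mm) taken at several points across the
# base plate, each measured relative to the marble reference platform.
readings_mm = [0.012, 0.015, 0.011, 0.018, 0.014]

# A common parallelism evaluation: the spread of the indicated heights
# over the measured surface.
parallelism_mm = max(readings_mm) - min(readings_mm)
print(f"parallelism: {parallelism_mm * 1000:.1f} um")  # micron level, as in the abstract
```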
NASA Astrophysics Data System (ADS)
Wang, Jin; Li, Haoxu; Zhang, Xiaofeng; Wu, Rangzhong
2017-05-01
Indoor positioning using visible light communication has become a topic of intensive research in recent years. Because the normal of the receiver always deviates from that of the transmitter in application, the positioning systems which require that the normal of the receiver be aligned with that of the transmitter have large positioning errors. Some algorithms take the angular vibrations into account; nevertheless, these positioning algorithms cannot meet the requirement of high accuracy or low complexity. A visible light positioning algorithm combined with angular vibration compensation is proposed. The angle information from the accelerometer or other angle acquisition devices is used to calculate the angle of incidence even when the receiver is not horizontal. Meanwhile, a received signal strength technique with high accuracy is employed to determine the location. Moreover, an eight-light-emitting-diode (LED) system model is provided to improve the accuracy. The simulation results show that the proposed system can achieve a low positioning error with low complexity, and the eight-LED system exhibits improved performance. Furthermore, trust region-based positioning is proposed to determine three-dimensional locations and achieves high accuracy in both the horizontal and the vertical components.
An implicit spatial and high-order temporal finite difference scheme for 2D acoustic modelling
NASA Astrophysics Data System (ADS)
Wang, Enjiang; Liu, Yang
2018-01-01
The finite difference (FD) method exhibits great superiority over other numerical methods due to its easy implementation and small computational requirement. We propose an effective FD method, characterised by implicit spatial and high-order temporal schemes, to reduce both the temporal and spatial dispersions simultaneously. For the temporal derivative, apart from the conventional second-order FD approximation, a special rhombus FD scheme is included to reach high-order accuracy in time. Compared with the Lax-Wendroff FD scheme, this scheme can achieve nearly the same temporal accuracy but requires fewer floating-point operations, and thus less computational cost, when the same operator length is adopted. For the spatial derivatives, we adopt the implicit FD scheme to improve the spatial accuracy. Apart from the existing Taylor series expansion-based FD coefficients, we derive least-squares optimisation-based implicit spatial FD coefficients. Dispersion analysis and modelling examples demonstrate that our proposed method can effectively decrease both the temporal and spatial dispersions and thus provide more accurate wavefields.
Rational calculation accuracy in acousto-optical matrix-vector processor
NASA Astrophysics Data System (ADS)
Oparin, V. V.; Tigin, Dmitry V.
1994-01-01
The high speed of parallel computations for a comparatively small-size processor and acceptable power consumption makes the usage of acousto-optic matrix-vector multiplier (AOMVM) attractive for processing of large amounts of information in real time. The limited accuracy of computations is an essential disadvantage of such a processor. The reduced accuracy requirements allow for considerable simplification of the AOMVM architecture and the reduction of the demands on its components.
NASA Astrophysics Data System (ADS)
Shea, Y.; Wielicki, B. A.; Sun-Mack, S.; Minnis, P.; Zelinka, M. D.
2016-12-01
Detecting trends in climate variables on global, decadal scales requires highly accurate, stable measurements and retrieval algorithms. Trend uncertainty depends on its magnitude, natural variability, and instrument and retrieval algorithm accuracy and stability. We applied a climate accuracy framework to quantify the impact of absolute calibration on cloud property trend uncertainty. The cloud properties studied were cloud fraction, effective temperature, optical thickness, and effective radius retrieved using the Clouds and the Earth's Radiant Energy System (CERES) Cloud Property Retrieval System, which uses Moderate-resolution Imaging Spectroradiometer (MODIS) measurements. Modeling experiments from the fifth phase of the Coupled Model Intercomparison Project (CMIP5) agree that net cloud feedback is likely positive but disagree regarding its magnitude, mainly due to uncertainty in shortwave cloud feedback. With the climate accuracy framework we determined the time to detect trends for instruments with various calibration accuracies. We estimated a relationship between cloud property trend uncertainty, cloud feedback, and Equilibrium Climate Sensitivity, and also between effective radius trend uncertainty and aerosol indirect effect trends. The direct relationship between instrument accuracy requirements and climate model output provides the level of instrument absolute accuracy needed to reduce climate model projection uncertainty. Different cloud types have varied radiative impacts on the climate system depending on several attributes, such as their thermodynamic phase, altitude, and optical thickness. Therefore, we also conducted these studies by cloud type for a clearer understanding of the instrument accuracy requirements needed to detect changes in their cloud properties.
Combining this information with the radiative impact of different cloud types helps to prioritize among requirements for future satellite sensors and understanding the climate detection capabilities of existing sensors.
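A sketch of one common ingredient of such climate accuracy frameworks is the Weatherhead-style estimate of the number of years needed to detect a linear trend, given the natural variability and lag-1 autocorrelation of the record. The formula and all numerical inputs below are illustrative assumptions, not CERES/MODIS values from this study.

```python
def years_to_detect(trend, sigma_n, phi):
    """Approximate years needed to detect a linear trend at roughly 90%
    confidence and 90% power (Weatherhead-style estimate; all inputs are
    assumptions: trend and sigma_n in the same units per year, phi is the
    lag-1 autocorrelation of the natural variability)."""
    return (3.3 * (sigma_n / abs(trend)) * ((1 + phi) / (1 - phi)) ** 0.5) ** (2.0 / 3.0)

# Illustrative values only: a 0.02/yr trend against 0.1 annual-mean
# variability with moderate autocorrelation.
print(round(years_to_detect(trend=0.02, sigma_n=0.1, phi=0.5), 1))  # ~9.3 years
```

The key qualitative point, used throughout such frameworks, is that larger variability or stronger autocorrelation lengthens detection time, while instrument calibration error acts as an additional noise term that lengthens it further.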
Design and Error Analysis of a Vehicular AR System with Auto-Harmonization.
Foxlin, Eric; Calloway, Thomas; Zhang, Hongsheng
2015-12-01
This paper describes the design, development and testing of an AR system that was developed for aerospace and ground vehicles to meet stringent accuracy and robustness requirements. The system uses an optical see-through HMD, and thus requires extremely low latency, high tracking accuracy and precision alignment and calibration of all subsystems in order to avoid mis-registration and "swim". The paper focuses on the optical/inertial hybrid tracking system and describes novel solutions to the challenges with the optics, algorithms, synchronization, and alignment with the vehicle and HMD systems. Tracker accuracy is presented with simulation results to predict the registration accuracy. A car test is used to create a through-the-eyepiece video demonstrating well-registered augmentations of the road and nearby structures while driving. Finally, a detailed covariance analysis of AR registration error is derived.
Ka-Band Radar Terminal Descent Sensor
NASA Technical Reports Server (NTRS)
Pollard, Brian; Berkun, Andrew; Tope, Michael; Andricos, Constantine; Okonek, Joseph; Lou, Yunling
2007-01-01
The terminal descent sensor (TDS) is a radar altimeter/velocimeter that improves the accuracy of velocity sensing by more than an order of magnitude when compared to existing sensors. The TDS is designed for the safe planetary landing of payloads, and may be used in helicopters and fixed-wing aircraft requiring high-accuracy velocity sensing.
Evaluation of registration accuracy between Sentinel-2 and Landsat 8
NASA Astrophysics Data System (ADS)
Barazzetti, Luigi; Cuca, Branka; Previtali, Mattia
2016-08-01
Starting from June 2015, Sentinel-2A is delivering high resolution optical images (ground resolution up to 10 meters) to provide a global coverage of the Earth's land surface every 10 days. The planned launch of Sentinel-2B, along with the integration of Landsat images, will provide time series with an unprecedented revisit time, indispensable for numerous monitoring applications in which high resolution multi-temporal information is required. They include agriculture, water bodies, and natural hazards, to name a few. However, the combined use of multi-temporal images requires an accurate geometric registration, i.e. pixel-to-pixel correspondence for terrain-corrected products. This paper presents an analysis of spatial co-registration accuracy for several datasets of Sentinel-2 and Landsat 8 images distributed all around the world. Images were compared with digital correlation techniques for image matching, obtaining an evaluation of registration accuracy with an affine transformation as the geometrical model. Results demonstrate that sub-pixel accuracy was achieved between the 10 m resolution Sentinel-2 band (band 3) and the 15 m resolution panchromatic Landsat band (band 8).
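The registration evaluation described, fitting an affine model between matched image locations, can be sketched as a linear least-squares problem over tie points; the tie points and transform below are synthetic, not drawn from the study's datasets.

```python
import numpy as np

# Synthetic matched tie points between two images, in pixel coordinates.
src = np.array([[10.0, 12.0], [200.0, 15.0], [190.0, 180.0],
                [20.0, 175.0], [100.0, 90.0]])
# Apply a known small affine distortion plus shift to create the "other" image.
dst = src @ np.array([[1.0002, 0.0001],
                      [-0.0001, 0.9998]]) + np.array([0.3, -0.2])

# Fit an affine model dst ~ src @ A + t by linear least squares.
X = np.hstack([src, np.ones((len(src), 1))])      # columns [x, y, 1]
params, *_ = np.linalg.lstsq(X, dst, rcond=None)  # 3x2: rows of A, then t
residuals = dst - X @ params
rms = float(np.sqrt(np.mean(np.sum(residuals**2, axis=1))))
print(f"registration RMS: {rms:.6f} px")
```

With real correlation-matched points the residual RMS, expressed in pixels of the coarser band, is the sub-pixel figure of merit the abstract reports.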
Figure correction of a metallic ellipsoidal neutron focusing mirror
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guo, Jiang, E-mail: jiang.guo@riken.jp; Yamagata, Yutaka; Morita, Shin-ya
2015-06-15
An increasing number of neutron focusing mirrors is being adopted in neutron scattering experiments in order to provide high fluxes at sample positions, reduce measurement time, and/or increase statistical reliability. To realize a small focusing spot and high beam intensity, mirrors with both high form accuracy and low surface roughness are required. To achieve this, we propose a new figure correction technique to fabricate a two-dimensional neutron focusing mirror made with electroless nickel-phosphorus (NiP) by effectively combining ultraprecision shaper cutting and fine polishing. An arc envelope shaper cutting method is introduced to generate high form accuracy, while a fine polishing method, in which the material is removed effectively without losing profile accuracy, is developed to reduce the surface roughness of the mirror. High form accuracy in the minor-axis and the major-axis is obtained through tool profile error compensation and corrective polishing, respectively, and low surface roughness is acquired under a low polishing load. As a result, an ellipsoidal neutron focusing mirror is successfully fabricated with high form accuracy of 0.5 μm peak-to-valley and low surface roughness of 0.2 nm root-mean-square.
NASA Technical Reports Server (NTRS)
Freedman, Adam; Hensley, Scott; Chapin, Elaine; Kroger, Peter; Hussain, Mushtaq; Allred, Bruce
1999-01-01
GeoSAR is an airborne, interferometric Synthetic Aperture Radar (IFSAR) system for terrain mapping, currently under development by a consortium including NASA's Jet Propulsion Laboratory (JPL), Calgis, Inc., a California mapping sciences company, and the California Department of Conservation (CalDOC), with funding provided by the U.S. Army Corps of Engineers Topographic Engineering Center (TEC) and the U.S. Defense Advanced Research Projects Agency (DARPA). IFSAR data processing requires high-accuracy platform position and attitude knowledge. On GeoSAR, these are provided by one or two Honeywell Embedded GPS Inertial Navigation Units (EGI) and an Ashtech Z12 GPS receiver. The EGIs provide real-time high-accuracy attitude and moderate-accuracy position data, while the Ashtech data, post-processed differentially with data from a nearby ground station using Ashtech PNAV software, provide high-accuracy differential GPS positions. These data are optimally combined using a Kalman filter within the GeoSAR motion measurement software, and the resultant position and orientation information are used to process the dual frequency (X-band and P-band) radar data to generate high-accuracy, high-resolution terrain imagery and digital elevation models (DEMs). GeoSAR requirements specify sub-meter level planimetric and vertical accuracies for the resultant DEMs. To achieve this, platform positioning errors well below one meter are needed. The goal of GeoSAR is to obtain 25 cm or better 3-D positions from the GPS systems on board the aircraft. By imaging a set of known point target corner-cube reflectors, the GeoSAR system can be calibrated. This calibration process yields the true position of the aircraft with an uncertainty of 20-50 cm, and thus allows an independent assessment of the accuracy of our GPS-based positioning systems.
We will present an overview of the GeoSAR motion measurement system, focusing on the use of GPS and the blending of position data from the various systems. We will present the results of our calibration studies that relate to the accuracy of the GPS positioning. We will discuss the effects these positioning errors have on the resultant DEM products and imagery.
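The blending of moderate-accuracy EGI positions with high-accuracy differential-GPS fixes can be illustrated with a single scalar Kalman measurement update; the variances used below are illustrative, not GeoSAR values.

```python
def kalman_update(x, var, z, var_z):
    """One scalar Kalman measurement update: blend a predicted position x
    (variance var) with a measurement z (variance var_z)."""
    k = var / (var + var_z)      # Kalman gain: weight toward the more certain source
    x_new = x + k * (z - x)
    var_new = (1 - k) * var
    return x_new, var_new

# Illustrative numbers only: an INS-predicted position with 1 m^2 variance,
# and a differential-GPS fix with (0.25 m)^2 variance.
x, var = 100.40, 1.0
z, var_z = 100.10, 0.0625
x, var = kalman_update(x, var, z, var_z)
print(round(x, 3), round(var, 4))  # blended estimate sits close to the GPS fix
```

The blended variance is always smaller than either input variance, which is why fusing the two sensors can reach the sub-meter positioning the requirements demand.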
Effects of accuracy constraints on reach-to-grasp movements in cerebellar patients.
Rand, M K; Shimansky, Y; Stelmach, G E; Bracha, V; Bloedel, J R
2000-11-01
Reach-to-grasp movements of patients with pathology restricted to the cerebellum were compared with those of normal controls. Two types of paradigms with different accuracy constraints were used to examine whether cerebellar impairment disrupts the stereotypic relationship between arm transport and grip aperture and whether the variability of this relationship is altered when greater accuracy is required. The movements were made to either a vertical dowel or to a cross bar of a small cross. All subjects were asked to reach for either target at a fast but comfortable speed, grasp the object between the index finger and thumb, and lift it a short distance off the table. In terms of the relationship between arm transport and grip aperture, the control subjects showed a high consistency in grip aperture and wrist velocity profiles from trial to trial for movements to both the dowel and the cross. The relationship between the maximum velocity of the wrist and the time at which grip aperture was maximal during the reach was highly consistent throughout the experiment. In contrast, the time of maximum grip aperture and maximum wrist velocity of the cerebellar patients was quite variable from trial to trial, and the relationship of these measurements also varied considerably. These abnormalities were present regardless of the accuracy requirement. In addition, the cerebellar patients required a significantly longer time to grasp and lift the objects than the control subjects. Furthermore, the patients exhibited a greater grip aperture during reach than the controls. These data indicate that the cerebellum contributes substantially to the coordination of movements required to perform reach-to-grasp movements. Specifically, the cerebellum is critical for executing this behavior with a consistent, well-timed relationship between the transport and grasp components. This contribution is apparent even when accuracy demands are minimal.
Jiang, Jie; Yu, Wenbo; Zhang, Guangjun
2017-01-01
Navigation accuracy is one of the key performance indicators of an inertial navigation system (INS). Requirements for an accuracy assessment of an INS in a real work environment are exceedingly urgent because of enormous differences between real work and laboratory test environments. An attitude accuracy assessment of an INS based on the intensified high dynamic star tracker (IHDST) is particularly suitable for a real complex dynamic environment. However, the coupled systematic coordinate errors of an INS and the IHDST severely decrease the attitude assessment accuracy of an INS. Given that, a high-accuracy decoupling estimation method of the above systematic coordinate errors based on the constrained least squares (CLS) method is proposed in this paper. The reference frame of the IHDST is firstly converted to be consistent with that of the INS because their reference frames are completely different. Thereafter, the decoupling estimation model of the systematic coordinate errors is established and the CLS-based optimization method is utilized to estimate errors accurately. After compensating for error, the attitude accuracy of an INS can be assessed based on IHDST accurately. Both simulated experiments and real flight experiments of aircraft are conducted, and the experimental results demonstrate that the proposed method is effective and shows excellent performance for the attitude accuracy assessment of an INS in a real work environment. PMID:28991179
Development of CFRP mirrors for space telescopes
NASA Astrophysics Data System (ADS)
Utsunomiya, Shin; Kamiya, Tomohiro; Shimizu, Ryuzo
2013-09-01
CFRP (carbon fiber reinforced plastics) have superior properties of high specific elasticity and low thermal expansion for satellite telescope structures. However, difficulties in achieving the required surface accuracy and in ensuring stability in orbit have discouraged CFRP application as main mirrors. We have developed ultra-lightweight and high precision CFRP mirrors of sandwich structures composed of CFRP skins and CFRP cores using a replica technique. Shape accuracy of the demonstrated mirrors of 150 mm in diameter was 0.8 μm RMS (Root Mean Square) and surface roughness was 5 nm RMS as fabricated. Further optimization of the fabrication process conditions to improve surface accuracy was studied using flat sandwich panels. The surface accuracy of the flat CFRP sandwich panels of 150 mm square was thereby improved to a flatness of 0.2 μm RMS with a surface roughness of 6 nm RMS. The surface accuracy versus size of the trial models indicated a high likelihood of fabricating mirrors over 1 m in size with a surface accuracy of 1 μm. Feasibility of CFRP mirrors for low temperature applications was examined for the JASMINE project as an example. Stability of the surface accuracy of CFRP mirrors against temperature and moisture was discussed.
Achieving Climate Change Absolute Accuracy in Orbit
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.;
2013-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système International (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.
An angle encoder for super-high resolution and super-high accuracy using SelfA
NASA Astrophysics Data System (ADS)
Watanabe, Tsukasa; Kon, Masahito; Nabeshima, Nobuo; Taniguchi, Kayoko
2014-06-01
Angular measurement technology at high resolution for applications such as in hard disk drive manufacturing machines, precision measurement equipment and aspherical process machines requires a rotary encoder with high accuracy, high resolution and high response speed. However, a rotary encoder has angular deviation factors during operation due to scale error or installation error. It has been assumed to be impossible to achieve accuracy below 0.1″ in angular measurement or control after installation onto the rotating axis. Self-calibration (Lu and Trumper 2007 CIRP Ann. 56 499; Kim et al 2011 Proc. MacroScale; Probst 2008 Meas. Sci. Technol. 19 015101; Probst et al 1998 Meas. Sci. Technol. 9 1059; Tadashi and Makoto 1993 J. Robot. Mechatronics 5 448; Ralf et al 2006 Meas. Sci. Technol. 17 2811) and cross-calibration (Probst et al 1998 Meas. Sci. Technol. 9 1059; Just et al 2009 Precis. Eng. 33 530; Burnashev 2013 Quantum Electron. 43 130) technologies for a rotary encoder have been actively discussed on the basis of the principle of circular closure. This discussion prompted the development of rotary tables which achieve reliable and high-accuracy angular verification. We apply these technologies to the development of a rotary encoder to meet the requirements of not only super-high accuracy but also super-high resolution. This paper presents the development of an encoder with 2^21 = 2,097,152 resolutions per rotation (360°), corresponding to a 0.62″ signal period, achieved by combining a laser rotary encoder supplied by Magnescale Co., Ltd and a self-calibratable encoder (SelfA) supplied by the National Institute of Advanced Industrial Science & Technology (AIST).
In addition, this paper introduces the development of a rotary encoder guaranteeing ±0.03″ accuracy at any point of the interpolated signal, with respect to the encoder at the minimum resolution of 2^33, corresponding to a 0.0015″ signal period after interpolation of 2^12 (= 4096) divisions through the interpolator.
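The quoted signal period follows directly from the count of resolutions per revolution; a quick check of the arithmetic:

```python
# One signal period for an encoder with 2**21 counts per revolution, in arcseconds.
counts = 2**21                   # 2,097,152 resolutions per 360 degrees
arcsec_per_rev = 360 * 3600      # 1,296,000 arcseconds in a full rotation
period_arcsec = arcsec_per_rev / counts
print(round(period_arcsec, 2))   # ~0.62 arcseconds, matching the abstract
```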
Image Stability Requirements For a Geostationary Imaging Fourier Transform Spectrometer (GIFTS)
NASA Technical Reports Server (NTRS)
Bingham, G. E.; Cantwell, G.; Robinson, R. C.; Revercomb, H. E.; Smith, W. L.
2001-01-01
A Geostationary Imaging Fourier Transform Spectrometer (GIFTS) has been selected for the NASA New Millennium Program (NMP) Earth Observing-3 (EO-3) mission. Our paper will discuss one of the key GIFTS measurement requirements, Field of View (FOV) stability, and its impact on required system performance. The GIFTS NMP mission is designed to demonstrate new and emerging sensor and data processing technologies with the goal of making revolutionary improvements in meteorological observational capability and forecasting accuracy. The GIFTS payload is a versatile imaging FTS with programmable spectral resolution and spatial scene selection that allows radiometric accuracy and atmospheric sounding precision to be traded in near real time for area coverage. The GIFTS sensor combines high sensitivity with a massively parallel spatial data collection scheme to allow high spatial resolution measurement of the Earth's atmosphere and rapid broad-area coverage. An objective of the GIFTS mission is to demonstrate the advantages of high spatial resolution (4 km ground sample distance, gsd) for temperature and water vapor retrieval by allowing sampling in broken cloud regions. This small gsd, combined with the relatively long scan time (approximately 10 s) required to collect high resolution spectra from geostationary (GEO) orbit, may require extremely good pointing control. This paper discusses the analysis of this requirement.
High-order cyclo-difference techniques: An alternative to finite differences
NASA Technical Reports Server (NTRS)
Carpenter, Mark H.; Otto, John C.
1993-01-01
The summation-by-parts energy norm is used to establish a new class of high-order finite-difference techniques referred to here as 'cyclo-difference' techniques. These techniques are constructed cyclically from stable subelements, and require no numerical boundary conditions; when coupled with the simultaneous approximation term (SAT) boundary treatment, they are time asymptotically stable for an arbitrary hyperbolic system. These techniques are similar to spectral element techniques and are ideally suited for parallel implementation, but do not require special collocation points or orthogonal basis functions. The principal focus is on methods of sixth-order formal accuracy or less; however, these methods could be extended in principle to any arbitrary order of accuracy.
Adaptive optics using a MEMS deformable mirror for a segmented mirror telescope
NASA Astrophysics Data System (ADS)
Miyamura, Norihide
2017-09-01
For small satellite remote sensing missions, a large-aperture telescope of more than 400 mm is required to realize observations with less than 1 m GSD. However, it is difficult or expensive to realize such a large aperture using a monolithic primary mirror with high surface accuracy, so a segmented mirror telescope should be studied, especially for small satellite missions. Generally, large aperture telescopes require not only high accuracy of the optical surfaces but also high accuracy of the optical alignment; for segmented mirror telescopes, the alignment is both more difficult and more important. In conventional systems, the optical alignment is adjusted before launch to achieve the desired imaging performance. However, it is difficult to adjust the alignment of large optics to high accuracy, and the thermal environment in orbit and vibration in the launch vehicle cause misalignments of the optics. We are developing an adaptive optics system using a MEMS deformable mirror for an Earth observing remote sensing sensor. An image-based adaptive optics system compensates the misalignments and wavefront aberrations of the optical elements using the deformable mirror, with feedback from observed images. We propose a control algorithm for the deformable mirror of a segmented mirror telescope based on the observed image. Numerical simulation results and experimental results show that misalignment and wavefront aberration of the segmented mirror telescope are corrected and image quality is improved.
NASA Technical Reports Server (NTRS)
Jensen, J. R.; Tinney, L. R.; Estes, J. E.
1975-01-01
Cropland inventories utilizing high-altitude and Landsat imagery were conducted in Kern County, California. In terms of overall mean relative and absolute inventory accuracies, a Landsat multidate analysis yielded the best results, i.e., 98% accuracy. The 1:125,000 CIR high-altitude inventory is a serious alternative that can be very accurate (97% or more) if imagery is available for a specific study area. The operational remote sensing cropland inventories documented in this study are considered cost-effective: compared with conventional survey costs of $62-66 per 10,000 acres, the Landsat and high-altitude inventories required only 3-5% of this amount, i.e., $1.97-2.98.
An accuracy measurement method for star trackers based on direct astronomic observation
Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping
2016-01-01
The star tracker is one of the most promising optical attitude measurement devices and is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy remains a crucial, unsolved issue, and the authenticity of a star tracker's accuracy measurement method ultimately determines satellite performance. A new and robust accuracy measurement method for star trackers based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method uses real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are carried out, taking account of the precise motions of the Earth, and error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed that directly determines the pointing and rolling accuracy of a star tracker. Experimental measurements confirm that this method is effective and convenient to implement. The measurement environment is close to in-orbit conditions and can satisfy the stringent requirements of high-accuracy star trackers. PMID:26948412
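The elementary error measure behind such an evaluation is the angular separation between a measured star direction vector and its catalogue reference once both are expressed in a common frame. A minimal sketch (Earth-motion corrections such as precession and nutation are omitted; the 5-arcsecond perturbation is illustrative):

```python
import numpy as np

def rot_z(theta):
    """Rotation about the z axis by theta radians."""
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def angular_error_arcsec(v_meas, v_ref):
    """Angle between two unit direction vectors, in arcseconds."""
    cosang = np.clip(np.dot(v_meas, v_ref), -1.0, 1.0)
    return np.degrees(np.arccos(cosang)) * 3600.0

# Catalogue star direction (unit vector) and a measurement perturbed by a
# small rotation about an axis transverse to it
v_ref = np.array([1.0, 0.0, 0.0])
delta = np.radians(5.0 / 3600.0)            # 5 arcsec pointing error
v_meas = rot_z(delta) @ v_ref

print(angular_error_arcsec(v_meas, v_ref))  # ~5.0
```

Separating errors about the two cross-boresight axes from rotation about the boresight itself is what distinguishes the pointing accuracy from the (typically poorer) rolling accuracy.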
Testing of the high accuracy inertial navigation system in the Shuttle Avionics Integration Lab
NASA Technical Reports Server (NTRS)
Strachan, Russell L.; Evans, James M.
1991-01-01
The description, results, and interpretation of comparison testing between the High Accuracy Inertial Navigation System (HAINS) and the KT-70 Inertial Measurement Unit (IMU) are presented. The objective was to show that the HAINS can replace the KT-70 IMU in the Space Shuttle Orbiter, both individually and as a complete replacement. The testing was performed in the Guidance, Navigation, and Control Test Station (GTS) of the Shuttle Avionics Integration Lab (SAIL). A variety of differences between the two instruments are explained. Four 5-day test sessions were conducted, varying the number and slot position of the HAINS and KT-70 IMUs. The various steps in the calibration and alignment procedure are explained, and results and their interpretation are presented. The HAINS displayed a level of performance accuracy previously unseen with the KT-70 IMU. The most significant improvement came in the Tuned Inertial/Extended Launch Hold tests, where the HAINS exceeded the 4-hour specification requirement. The results obtained from the SAIL tests were generally well beyond the requirements of the procurement specification.
NASA Technical Reports Server (NTRS)
Hoffbeck, Joseph P.; Landgrebe, David A.
1994-01-01
Many analysis algorithms for high-dimensional remote sensing data require that the remotely sensed radiance spectra be transformed to approximate reflectance to allow comparison with a library of laboratory reflectance spectra. In maximum likelihood classification, however, the remotely sensed spectra are compared to training samples, thus a transformation to reflectance may or may not be helpful. The effect of several radiance-to-reflectance transformations on maximum likelihood classification accuracy is investigated in this paper. We show that the empirical line approach, LOWTRAN7, flat-field correction, single spectrum method, and internal average reflectance are all non-singular affine transformations, and that non-singular affine transformations have no effect on discriminant analysis feature extraction and maximum likelihood classification accuracy. (An affine transformation is a linear transformation with an optional offset.) Since the Atmosphere Removal Program (ATREM) and the log residue method are not affine transformations, experiments with Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) data were conducted to determine the effect of these transformations on maximum likelihood classification accuracy. The average classification accuracy of the data transformed by ATREM and the log residue method was slightly less than the accuracy of the original radiance data. Since the radiance-to-reflectance transformations allow direct comparison of remotely sensed spectra with laboratory reflectance spectra, they can be quite useful in labeling the training samples required by maximum likelihood classification, but these transformations have only a slight effect or no effect at all on discriminant analysis and maximum likelihood classification accuracy.
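The affine-invariance claim can be checked numerically: under y = Ax + b with A non-singular, the class means become Aμ + b and the covariances AΣAᵀ, so the Gaussian maximum likelihood discriminant shifts by a class-independent constant (log|det A|) and the decisions are unchanged. A small sketch on synthetic two-class "spectra" (the data and transform are illustrative, not AVIRIS):

```python
import numpy as np

rng = np.random.default_rng(1)

def ml_classify(X, means, covs):
    """Gaussian maximum likelihood: pick the class maximizing the log density."""
    scores = []
    for mu, S in zip(means, covs):
        d = X - mu
        Sinv = np.linalg.inv(S)
        maha = np.einsum('ij,jk,ik->i', d, Sinv, d)   # per-sample Mahalanobis distance
        scores.append(-0.5 * maha - 0.5 * np.linalg.slogdet(S)[1])
    return np.argmax(np.array(scores), axis=0)

def fit(X, y, k):
    means = [X[y == c].mean(axis=0) for c in range(k)]
    covs = [np.cov(X[y == c].T) for c in range(k)]
    return means, covs

# Two synthetic 4-band classes
X = np.vstack([rng.normal(0.0, 1.0, (200, 4)), rng.normal(1.5, 1.2, (200, 4))])
y = np.repeat([0, 1], 200)

A = rng.normal(size=(4, 4)) + 4.0 * np.eye(4)   # non-singular linear part
b = rng.normal(size=4)
Xt = X @ A.T + b                                # affine "radiance-to-reflectance" stand-in

labels = ml_classify(X, *fit(X, y, 2))
labels_t = ml_classify(Xt, *fit(Xt, y, 2))
print(np.mean(labels == labels_t))              # expected: 1.0 (identical decisions)
```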
NASA Astrophysics Data System (ADS)
Jiménez, A.; Morante, E.; Viera, T.; Núñez, M.; Reyes, M.
2010-07-01
The European Extremely Large Telescope (E-ELT) primary mirror is composed of 984 segments; to achieve the required optical performance, each segment must be positioned relative to its neighbours with nanometer-level accuracy. CESA designed the M1 Position Actuators (PACT) to comply with the demanding performance requirements of the E-ELT. Three PACT units are located under each segment, controlling the three out-of-plane degrees of freedom (tip, tilt, piston). To achieve high linear accuracy over long operational displacements, PACT uses two stages in series: a first stage based on a voice coil actuator (VCA) achieves high accuracy over very short travel ranges, while a second stage based on a brushless DC (BLDC) motor provides a large stroke and positions the first stage close to the demanded position. The BLDC motor provides continuous, smooth movement compared with the sudden jumps of a stepper. A gearbox attached to the motor greatly reduces power consumption but poses a significant sizing challenge. The PACT space envelope was reduced by means of two flat springs fixed to the VCA, whose main characteristic is a low linear axial stiffness. To achieve the best performance, sensors are included in both stages: a rotary encoder in the BLDC stage closes the position/velocity control loop, and an incremental optical encoder measures the PACT travel range with nanometer-level accuracy and closes the position loop over the whole actuator movement. For this purpose, four optical sensors with different gratings will be evaluated. The control strategy comprises several internal closed loops that work together to achieve the required performance.
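The two-stage idea can be sketched in a few lines: a coarse long-stroke stage (here quantized steps, standing in for the BLDC motor plus gearbox) brings the output near the target, and a short-stroke fine stage (standing in for the voice coil) removes the residual. The resolutions below are illustrative, not PACT specifications:

```python
# Illustrative resolutions (not PACT specifications)
COARSE_STEP = 5e-6      # 5 um per coarse-stage increment
FINE_RANGE = 10e-6      # +/- 10 um fine-stage (voice coil) stroke
FINE_RES = 2e-9         # 2 nm fine-stage resolution

def two_stage_position(target):
    """Coarse stage moves in quantized steps; fine stage corrects the residual."""
    coarse = round(target / COARSE_STEP) * COARSE_STEP
    residual = target - coarse
    assert abs(residual) <= FINE_RANGE, "residual must fall inside the fine stroke"
    fine = round(residual / FINE_RES) * FINE_RES
    return coarse + fine

target = 123.4567e-6    # commanded position: 123.4567 um
achieved = two_stage_position(target)
print(abs(achieved - target))   # below the 2 nm fine resolution
```

The design requirement this illustrates is that the coarse-stage step must be smaller than the fine-stage stroke, so the fine stage can always absorb the quantization residual.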
76 FR 23713 - Wireless E911 Location Accuracy Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-28
... Location Accuracy Requirements AGENCY: Federal Communications Commission. ACTION: Final rule; announcement... contained in regulations concerning wireless E911 location accuracy requirements. The information collection... standards for wireless Enhanced 911 (E911) Phase II location accuracy and reliability to satisfy these...
Research on Modeling of Propeller in a Turboprop Engine
NASA Astrophysics Data System (ADS)
Huang, Jiaqin; Huang, Xianghua; Zhang, Tianhong
2015-05-01
In the simulation of an engine-propeller integrated control system for a turboprop aircraft, a real-time, high-accuracy propeller model is required. A study is conducted to compare the real-time performance and precision of propeller models based on strip theory and on lifting-surface theory. The modeling by strip theory focuses on three points: first, FLUENT is adopted to calculate the lift and drag coefficients of the propeller; next, a method is presented to calculate the induced velocity that occurs in the ground rig test; finally, an approximate method is proposed to obtain the downwash angle of the propeller when the conventional algorithm has no solution. An advanced approximation of the velocities induced by helical horseshoe vortices is applied in the lifting-surface model; this approximation reduces computing time while retaining good accuracy. Comparison of the two modeling techniques shows that the strip-theory model, which has the advantage in both real-time performance and accuracy, can meet the requirement.
Recent advances in laser triangulation-based measurement of airfoil surfaces
NASA Astrophysics Data System (ADS)
Hageniers, Omer L.
1995-01-01
The measurement of aircraft jet-engine turbine and compressor blades requires a high degree of accuracy. This paper addresses the development and performance attributes of a noncontact electro-optical gaging system specifically designed to meet the airfoil dimensional measurement requirements inherent in turbine and compressor blade manufacture and repair. The system described consists of the following key components: a high-accuracy, dual-channel, laser-based optical sensor; a four-degree-of-freedom mechanical manipulator system; and a computer-based operator interface. Measurement modes of the system include point-by-point data gathering at rates up to 3 points per second and an 'on-the-fly' mode in which points can be gathered at rates up to 20 points per second at surface scanning speeds of up to 1 inch per second. Overall system accuracy is +/- 0.0005 inches in a configuration that is usable in the blade manufacturing area. The system's ability to input design data from CAD databases and output measurement data in a CAD-compatible format is discussed.
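The accuracy of any laser-triangulation sensor ultimately traces back to the classic range relation z = f·b/d (focal length f, baseline b, spot disparity d) and its first-order error propagation dz = z²/(f·b)·dd. A minimal sketch with illustrative numbers (not the proprietary sensor described in the paper):

```python
def range_from_disparity(f_mm, baseline_mm, disparity_mm):
    """Classic triangulation relation: range z = f * b / d."""
    return f_mm * baseline_mm / disparity_mm

def range_error(f_mm, baseline_mm, z_mm, disparity_err_mm):
    """First-order range error: dz = z^2 / (f * b) * dd."""
    return z_mm ** 2 / (f_mm * baseline_mm) * disparity_err_mm

f, b = 25.0, 50.0                    # illustrative focal length and baseline (mm)
z = range_from_disparity(f, b, 12.5)
print(z)                             # 100.0 mm
# A 1 um spot-location error at this range:
print(range_error(f, b, z, 0.001))   # 0.008 mm
```

The quadratic growth of dz with z is why short standoff distances and long baselines are favoured when sub-thousandth-of-an-inch accuracy is required.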
Discontinuous Spectral Difference Method for Conservation Laws on Unstructured Grids
NASA Technical Reports Server (NTRS)
Liu, Yen; Vinokur, Marcel
2004-01-01
A new, high-order, conservative, and efficient discontinuous spectral finite difference (SD) method for conservation laws on unstructured grids is developed. The concept of discontinuous and high-order local representations to achieve conservation and high accuracy is utilized in a manner similar to the Discontinuous Galerkin (DG) and the Spectral Volume (SV) methods, but while these methods are based on the integrated forms of the equations, the new method is based on the differential form to attain a simpler formulation and higher efficiency. Conventional unstructured finite-difference and finite-volume methods require data reconstruction based on a least-squares formulation using neighboring point or cell data. Since each unknown employs a different stencil, one must either repeat the least-squares inversion for every point or cell at each time step, or store the inversion coefficients. In a high-order, three-dimensional computation, the former would involve impractically large CPU time, while for the latter the memory requirement becomes prohibitive. In addition, the finite-difference method does not satisfy the integral conservation in general. By contrast, the DG and SV methods employ a local, universal reconstruction of a given order of accuracy in each cell in terms of internally defined conservative unknowns. Since the solution is discontinuous across cell boundaries, a Riemann solver is necessary to evaluate boundary flux terms and maintain conservation. In the DG method, a Galerkin finite-element method is employed to update the nodal unknowns within each cell. This requires the inversion of a mass matrix, and the use of quadratures of twice the order of accuracy of the reconstruction to evaluate the surface integrals and additional volume integrals for nonlinear flux functions. In the SV method, the integral conservation law is used to update volume averages over subcells defined by a geometrically similar partition of each grid cell.
As the order of accuracy increases, the 3D partitioning requires the introduction of a large number of parameters, whose optimization to achieve convergence becomes increasingly difficult. Also, the number of interior facets required to subdivide non-planar faces, together with the additional quadrature points needed for each facet, greatly increases the computational cost.
Brushless tachometer gives speed and direction
NASA Technical Reports Server (NTRS)
Nola, F. J.
1977-01-01
Brushless electronic tachometer measures rotational speed and rotational direction, maintaining accuracy at high or low speeds. Unit is particularly useful in vacuum environments requiring low friction.
High Accuracy Transistor Compact Model Calibrations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hembree, Charles E.; Mar, Alan; Robertson, Perry J.
2015-09-01
Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic, such as current capacity. Correspondingly, when using this approach, high degrees of accuracy in the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. Models of this type describe expected performance at the extremes of process or transistor deviations. In contrast, circuits with very stringent accuracy requirements demand modeling techniques with higher accuracy. Since these accurate models have low error in their transistor descriptions, they can be used to describe part-to-part variations as well as to accurately describe a single circuit instance. Models that meet these stipulations also enable the quantification of margins with respect to a functional threshold, and of the uncertainties in those margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.
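A sketch of a calibration step in this spirit: recovering the saturation current and ideality factor of the exponential collector-current law from measured (V, I) pairs by a log-linear least-squares fit. The device values and the single-parameter law are synthetic illustrations, not the report's calibration procedure:

```python
import numpy as np

VT = 0.02585                      # thermal voltage at ~300 K (volts)

def calibrate(v, i):
    """Fit I = Is * exp(V / (n * VT)) by least squares on log(I)."""
    slope, intercept = np.polyfit(v, np.log(i), 1)
    n = 1.0 / (slope * VT)        # ideality factor from the slope
    i_s = np.exp(intercept)       # saturation current from the intercept
    return i_s, n

# Synthetic "measured" characteristic for a device with Is = 1e-14 A, n = 1.05
v = np.linspace(0.55, 0.75, 40)
i = 1e-14 * np.exp(v / (1.05 * VT))

i_s, n = calibrate(v, i)
print(i_s, n)   # ~1e-14, ~1.05
```

Calibrating per-device parameters like these, rather than bracketing them with nominal/range corner models, is what enables single-instance margin quantification.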
Performance of the NASA Digitizing Core-Loss Instrumentation
NASA Technical Reports Server (NTRS)
Schwarze, Gene E. (Technical Monitor); Niedra, Janis M.
2003-01-01
The standard method of magnetic core loss measurement was implemented on a high-frequency digitizing oscilloscope in order to explore the limits to accuracy when characterizing high-Q cores at frequencies up to 1 MHz. This method computes core loss from the cycle mean of the product of the exciting current in a primary winding and the induced voltage in a separate flux-sensing winding. It is pointed out that even 20 percent accuracy for a core material with a Q of 100 requires a phase angle accuracy of 0.1 degree between the voltage and current measurements. Experiment shows that at 1 MHz, even high-quality, high-frequency current sensing transformers can introduce phase errors of a degree or more. Because the Q of some quasilinear core materials can exceed 300 at frequencies below 100 kHz, phase angle errors can be a problem even at 50 kHz. Hence great care is necessary with current sensing and ground loops when measuring high-Q cores. The best high-frequency current sensing accuracy was obtained from a fabricated 0.1-ohm coaxial resistor, differentially sensed. Sample high-frequency core loss data taken with the setup for a permeability-14 MPP core is presented.
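The 0.1-degree figure follows from first-order error propagation: the measured loss is P = V·I·cos θ with tan θ = Q, so cos θ ≈ 1/Q, and a phase error δ produces a relative loss error of roughly Q·δ (δ in radians). A worked sketch of that arithmetic:

```python
import math

def loss_error_vs_phase(Q, delta_deg):
    """Relative core-loss error from a phase measurement error delta (degrees).

    P_meas / P_true = cos(theta + delta) / cos(theta), with tan(theta) = Q.
    """
    theta = math.atan(Q)
    delta = math.radians(delta_deg)
    return abs(math.cos(theta + delta) / math.cos(theta) - 1.0)

Q = 100.0
# Phase error giving ~20% loss error: delta ~ 0.2 / Q rad ~ 0.11 deg
delta_deg = math.degrees(0.2 / Q)
print(delta_deg)                          # ~0.115
print(loss_error_vs_phase(Q, delta_deg))  # ~0.2
```

At Q = 300 the same 20% target tightens the phase budget to under 0.04 degrees, which is why the problem persists even at 50 kHz.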
Theoferometer for High Accuracy Optical Alignment and Metrology
NASA Technical Reports Server (NTRS)
Toland, Ronald; Leviton, Doug; Koterba, Seth
2004-01-01
The accurate measurement of the orientation of optical parts and systems is a pressing problem for upcoming space missions, such as stellar interferometers, requiring the knowledge and maintenance of positions to the sub-arcsecond level. Theodolites, the devices commonly used to make these measurements, cannot provide the needed level of accuracy. This paper describes the design, construction, and testing of an interferometer system to fill the widening gap between future requirements and current capabilities. A Twyman-Green interferometer mounted on a 2 degree of freedom rotation stage is able to obtain sub-arcsecond, gravity-referenced tilt measurements of a sample alignment cube. Dubbed a 'theoferometer,' this device offers greater ease-of-use, accuracy, and repeatability than conventional methods, making it a suitable 21st-century replacement for the theodolite.
Study design requirements for RNA sequencing-based breast cancer diagnostics.
Mer, Arvind Singh; Klevebring, Daniel; Grönberg, Henrik; Rantalainen, Mattias
2016-02-01
Sequencing-based molecular characterization of tumors provides information required for individualized cancer treatment. There are well-defined molecular subtypes of breast cancer that provide improved prognostication compared to routine biomarkers. However, molecular subtyping is not yet implemented in routine breast cancer care. Clinical translation is dependent on subtype prediction models providing high sensitivity and specificity. In this study we evaluate sample size and RNA-sequencing read requirements for breast cancer subtyping to facilitate rational design of translational studies. We applied subsampling to ascertain the effect of training sample size and the number of RNA sequencing reads on classification accuracy of molecular subtype and routine biomarker prediction models (unsupervised and supervised). Subtype classification accuracy improved with increasing sample size up to N = 750 (accuracy = 0.93), although with a modest improvement beyond N = 350 (accuracy = 0.92). Prediction of routine biomarkers achieved accuracy of 0.94 (ER) and 0.92 (Her2) at N = 200. Subtype classification improved with RNA-sequencing library size up to 5 million reads. Development of molecular subtyping models for cancer diagnostics requires well-designed studies. Sample size and the number of RNA sequencing reads directly influence accuracy of molecular subtyping. Results in this study provide key information for rational design of translational studies aiming to bring sequencing-based diagnostics to the clinic.
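The subsampling protocol can be sketched on synthetic data: train a classifier on progressively larger subsamples and watch the accuracy curve rise and then saturate. The nearest-centroid classifier and Gaussian "expression" matrix below are stand-ins for illustration, not the study's RNA-seq data or subtype models:

```python
import numpy as np

rng = np.random.default_rng(7)

def nearest_centroid(train_X, train_y, test_X):
    """Assign each test sample to the class with the nearest training centroid."""
    cents = np.array([train_X[train_y == c].mean(axis=0) for c in (0, 1)])
    d = ((test_X[:, None, :] - cents[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)

def make_data(n):
    """Synthetic two-subtype 'expression' matrix: 50 genes, small mean shift."""
    X = np.vstack([rng.normal(0.0, 1.0, (n, 50)), rng.normal(0.4, 1.0, (n, 50))])
    y = np.repeat([0, 1], n)
    return X, y

test_X, test_y = make_data(500)
accs = {}
for n_train in (10, 50, 400):       # subsampled training-set sizes per class
    X, y = make_data(n_train)
    accs[n_train] = (nearest_centroid(X, y, test_X) == test_y).mean()
print(accs)   # accuracy grows with training-set size, then saturates
```

Plotting such a curve against both sample size and (for sequencing data) read depth is the basis for the study-design recommendations in the abstract.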
A Review of Heating and Temperature Control in Microfluidic Systems: Techniques and Applications
Miralles, Vincent; Huerre, Axel; Malloggi, Florent; Jullien, Marie-Caroline
2013-01-01
This review presents an overview of the different techniques developed over the last decade to regulate the temperature within microfluidic systems. A variety of different approaches has been adopted, from external heating sources to Joule heating, microwaves or the use of lasers to cite just a few examples. The scope of the technical solutions developed to date is impressive and encompasses for instance temperature ramp rates ranging from 0.1 to 2,000 °C/s leading to homogeneous temperatures from −3 °C to 120 °C, and constant gradients from 6 to 40 °C/mm with a fair degree of accuracy. We also examine some recent strategies developed for applications such as digital microfluidics, where integration of a heating source to generate a temperature gradient offers control of a key parameter, without necessarily requiring great accuracy. Conversely, Temperature Gradient Focusing requires high accuracy in order to control both the concentration and separation of charged species. In addition, the Polymerase Chain Reaction requires both accuracy (homogeneous temperature) and integration to carry out demanding heating cycles. The spectrum of applications requiring temperature regulation is growing rapidly with increasingly important implications for the physical, chemical and biotechnological sectors, depending on the relevant heating technique. PMID:26835667
A gimbaled low noise momentum wheel
NASA Technical Reports Server (NTRS)
Bichler, U.; Eckardt, T.
1993-01-01
The bus actuators are the heart and, at the same time, the Achilles' heel of accurate spacecraft stabilization systems, because both their performance and their perturbations can have a deciding influence on the achievable pointing accuracy of the mission. The main task of the attitude actuators, which are mostly wheels, is the generation of useful torques with sufficiently high bandwidth, resolution and accuracy, because the bandwidth of the whole attitude control loop and its disturbance rejection capability depend upon these factors. These useful torques should be provided, as far as possible, without parasitic noise such as unbalance forces, torques and harmonics, because such variable-frequency perturbations excite structural resonances which in turn disturb the operation of sensors and scientific instruments. High-accuracy spacecraft will further require bus actuators for the three linear degrees of freedom (DOF) to damp structural oscillations excited by various sources; these actuators have to cover the dynamic range of the disturbances. Another interesting feature, not necessarily related to low-noise performance, is a gimballing capability, which enables three-axis attitude control with only one wheel over a certain angular range. The Teldix MWX presented here, a five-degree-of-freedom magnetic bearing momentum wheel, incorporates all of the features required above. It is ideally suited to support, as a gyroscopic actuator in the attitude control system, all high-pointing-accuracy and vibration-sensitive space missions.
New high order schemes in BATS-R-US
NASA Astrophysics Data System (ADS)
Toth, G.; van der Holst, B.; Daldorff, L.; Chen, Y.; Gombosi, T. I.
2013-12-01
The University of Michigan global magnetohydrodynamics code BATS-R-US has long relied on block-adaptive mesh refinement (AMR) to increase accuracy in regions of interest, using a second-order accurate TVD scheme. While AMR can in principle produce arbitrarily accurate results, there are still practical limitations due to computational resources. To further improve the accuracy of the BATS-R-US code, we have recently implemented a 4th-order accurate finite volume scheme (McCorquodale and Colella, 2011), the 5th-order accurate Monotonicity Preserving scheme (MP5; Suresh and Huynh, 1997), and the 5th-order accurate CWENO5 scheme (Capdeville, 2008). In the first implementation, high-order accuracy is achieved in the uniform parts of the Cartesian grids, and we still use the second-order TVD scheme at resolution changes. For spherical grids the new schemes are only second-order accurate so far, but still much less diffusive than the TVD scheme. We show a few verification tests that demonstrate the order of accuracy, as well as challenging space physics applications. The high-order schemes are less robust than the TVD scheme, and it requires some tricks and effort to make the code work. When the high-order scheme works, however, we find that in most cases it can obtain similar or better results than the TVD scheme on twice-finer grids. For three-dimensional time-dependent simulations this means that the high-order scheme is almost 10 times faster and requires 8 times less storage than the second-order method.
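The accuracy gap the authors report can be illustrated on a smooth profile by comparing a second-order central difference against a fifth-order upwind-biased difference for ∂u/∂x. This is a hedged sketch of the order-of-accuracy idea only, not the MP5/CWENO5 limiters implemented in BATS-R-US:

```python
import numpy as np

def d1_order2(u, h):
    """Second-order central difference on a periodic grid."""
    return (np.roll(u, -1) - np.roll(u, 1)) / (2.0 * h)

def d1_order5(u, h):
    """Fifth-order upwind-biased difference (3 left / 2 right stencil)."""
    return (-2.0 * np.roll(u, 3) + 15.0 * np.roll(u, 2) - 60.0 * np.roll(u, 1)
            + 20.0 * u + 30.0 * np.roll(u, -1) - 3.0 * np.roll(u, -2)) / (60.0 * h)

def max_error(scheme, n):
    """Max derivative error for sin(x) on an n-point periodic grid."""
    h = 2.0 * np.pi / n
    x = h * np.arange(n)
    return np.max(np.abs(scheme(np.sin(x), h) - np.cos(x)))

# Observed order from grid doubling: ~2 for central, ~5 for upwind-biased
for scheme in (d1_order2, d1_order5):
    e1, e2 = max_error(scheme, 32), max_error(scheme, 64)
    print(np.log2(e1 / e2))
```

Halving the grid spacing cuts the fifth-order error by roughly 32x versus 4x for the second-order scheme, which is the arithmetic behind matching a TVD result on a twice-finer grid.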
NASA Astrophysics Data System (ADS)
Wang, Tao; Wang, Guilin; Zhu, Dengchao; Li, Shengyi
2015-02-01
To meet aerodynamic requirements, infrared domes and windows with conformal, thin-wall structures are the development trend for future high-speed aircraft. These parts usually have low stiffness, however, so the cutting force changes with axial position and it is very difficult to meet the shape-accuracy requirement in a single machining pass. Therefore, on-machine measurement and compensating turning are used to control the shape errors caused by the fluctuation of the cutting force and the change of stiffness. In this paper, on the basis of an ultra-precision diamond lathe, a contact measuring system with five degrees of freedom is developed to achieve high-accuracy on-machine measurement of conformal thin-wall parts. For high-gradient surfaces, an optimizing algorithm for the distribution of measuring points is designed using a data-screening method. The influence of sampling frequency on measuring error is analyzed and the best sampling frequency is identified with a planning algorithm; the effects of environmental factors and fitting errors are kept within a low range, and the measuring accuracy of the conformal dome is greatly improved during on-machine measurement. For an MgF2 conformal dome with a high gradient, compensating turning is implemented using the designed on-machine measuring algorithm. The shape error is less than 0.8 μm PV, greatly superior to the 3 μm PV before compensating turning, which verifies the correctness of the measuring algorithm.
Development of Accurate Structure for Mounting and Aligning Thin-Foil X-Ray Mirrors
NASA Technical Reports Server (NTRS)
Heilmann, Ralf K.
2001-01-01
The goal of this work was to improve the assembly accuracy for foil x-ray optics as produced by the high-energy astrophysics group at the NASA Goddard Space Flight Center. Two main design choices lead to an alignment concept that was shown to improve accuracy well within the requirements currently pursued by the Constellation-X Spectroscopy X-Ray Telescope (SXT).
Improving the Accuracy of the Chebyshev Rational Approximation Method Using Substeps
Isotalo, Aarno; Pusa, Maria
2016-05-01
The Chebyshev Rational Approximation Method (CRAM) for solving the decay and depletion of nuclides is shown to have a remarkable decrease in error when advancing the system with the same time step and microscopic reaction rates as the previous step. This property is exploited here to achieve high accuracy in any end-of-step solution by dividing a step into equidistant substeps. The computational cost of identical substeps can be reduced significantly below that of an equal number of regular steps, as the LU decompositions for the linear solves required in CRAM need to be formed only on the first substep. The improved accuracy provided by substeps is most relevant in decay calculations, where there have previously been concerns about the accuracy and generality of CRAM. With substeps, CRAM can solve any decay or depletion problem with constant microscopic reaction rates to extremely high accuracy for all nuclides with concentrations above an arbitrary limit.
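The implementation point — identical substeps let one matrix factorization be reused for every substep — can be illustrated with a simple implicit solver on a two-nuclide decay chain. Note this sketch uses implicit Euler (and a plain inverse standing in for the LU factors), not CRAM's rational approximation with complex poles; the decay constants are illustrative:

```python
import numpy as np

# Decay chain A -> B with decay constants la, lb (illustrative values)
la, lb = 0.7, 0.05
A = np.array([[-la, 0.0], [la, -lb]])

def substep_solve(n0, t, substeps):
    """Implicit Euler over identical substeps: the matrix (I - dt*A) is
    factorized (here: inverted) once and reused for every substep."""
    dt = t / substeps
    M = np.linalg.inv(np.eye(2) - dt * A)   # one factorization, many reuses
    n = n0.copy()
    for _ in range(substeps):
        n = M @ n
    return n

def bateman(n0, t):
    """Analytic solution for the A -> B chain (pure-A initial inventory)."""
    na = n0[0] * np.exp(-la * t)
    nb = n0[0] * la / (lb - la) * (np.exp(-la * t) - np.exp(-lb * t))
    return np.array([na, nb])

n0 = np.array([1.0, 0.0])
for substeps in (1, 10, 100):
    err = np.max(np.abs(substep_solve(n0, 5.0, substeps) - bateman(n0, 5.0)))
    print(substeps, err)   # error shrinks as substeps increase
```

Because the step is identical, the per-substep cost collapses to one matrix-vector solve, which is the economy the abstract exploits for CRAM's complex-pole linear systems.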
High accuracy in short ISS missions
NASA Astrophysics Data System (ADS)
Rüeger, J. M.
1986-06-01
Traditionally, inertial surveying systems (ISS) are used for missions of 30 km to 100 km length. Today, a new type of ISS application is emerging from an increased need for survey-control densification in urban areas, often in connection with land information systems or cadastral surveys. The accuracy requirements of urban surveys are usually high. The loss in accuracy caused by the coordinate transfer between the IMU and ground marks is investigated, and an offsetting system based on electronic tacheometers is proposed. An offsetting system based on a Hewlett-Packard HP 3820A electronic tacheometer was tested in Sydney (Australia) in connection with a vehicle-mounted LITTON Auto-Surveyor System II. On missions over 750 m (8 stations, 25 minutes duration, 3.5-minute ZUPT intervals, mean offset distance 9 metres), accuracies of 37 mm (one sigma) in position and 8 mm in elevation were achieved. Some improvements to the LITTON Auto-Surveyor System II are suggested which would improve the accuracies even further.
An Application of the Quadrature-Free Discontinuous Galerkin Method
NASA Technical Reports Server (NTRS)
Lockard, David P.; Atkins, Harold L.
2000-01-01
The process of generating a block-structured mesh with the smoothness required for high-accuracy schemes is still a time-consuming process often measured in weeks or months. Unstructured grids about complex geometries are more easily generated, and for this reason, methods using unstructured grids have gained favor for aerodynamic analyses. The discontinuous Galerkin (DG) method is a compact finite-element projection method that provides a practical framework for the development of a high-order method using unstructured grids. Higher-order accuracy is obtained by representing the solution as a high-degree polynomial whose time evolution is governed by a local Galerkin projection. The traditional implementation of the discontinuous Galerkin method uses quadrature for the evaluation of the integral projections and is prohibitively expensive. Atkins and Shu introduced the quadrature-free formulation, in which the integrals are evaluated a priori and exactly for a similarity element. The approach has been demonstrated to possess the accuracy required for acoustics, even in cases where the grid is not smooth. Other issues, such as boundary conditions and the treatment of non-linear fluxes, have also been studied in earlier work. This paper describes the application of the quadrature-free discontinuous Galerkin method to a two-dimensional shear layer problem. First, a brief description of the method is given. Next, the problem is described and the solution is presented. Finally, the resources required to perform the calculations are given.
NASA Astrophysics Data System (ADS)
Bramhe, V. S.; Ghosh, S. K.; Garg, P. K.
2018-04-01
With rapid globalization, the extent of built-up areas is continuously increasing. Extraction of more robust and abstract features for classifying built-up areas has been a leading research topic for many years. Various studies have utilized spatial information along with spectral features to enhance classification accuracy, but these feature-extraction techniques require a large number of user-specified parameters and are generally application-specific. Recently introduced Deep Learning (DL) techniques, on the other hand, require fewer parameters to represent more abstract aspects of the data without any manual effort. Since it is difficult to acquire high-resolution datasets for applications that require large-scale monitoring, a Sentinel-2 image has been used in this study for built-up area extraction. Pre-trained Convolutional Neural Networks (ConvNets), namely Inception-v3 and VGGNet, are employed for transfer learning. Because these networks are trained on the generic images of the ImageNet dataset, whose characteristics differ greatly from satellite images, the network weights are fine-tuned using data derived from Sentinel-2 images. To compare the accuracies with existing shallow networks, two state-of-the-art classifiers, a Gaussian Support Vector Machine (SVM) and a Back-Propagation Neural Network (BP-NN), are also implemented. SVM and BP-NN give 84.31% and 82.86% overall accuracy, respectively, against 89.43% for fine-tuned VGGNet and 92.10% for fine-tuned Inception-v3. The results indicate the high accuracy of the proposed fine-tuned ConvNets on a 4-channel Sentinel-2 dataset for built-up area extraction.
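The transfer-learning idea can be reduced to its simplest form: features from a frozen "pre-trained" layer feed a small trainable classification head. The sketch below uses a fixed random projection plus ReLU as a stand-in for Inception-v3/VGGNet features and synthetic two-class data in place of Sentinel-2 patches, so everything here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(3)

# Frozen "pre-trained" feature extractor: a fixed random projection + ReLU
W_frozen = rng.normal(size=(16, 64))
def features(x):
    return np.maximum(x @ W_frozen, 0.0)

# Synthetic two-class patches (built-up vs not), 16 input values each
X = np.vstack([rng.normal(-0.5, 1.0, (300, 16)), rng.normal(0.5, 1.0, (300, 16))])
y = np.repeat([0.0, 1.0], 300)

F = features(X)
F = (F - F.mean(axis=0)) / (F.std(axis=0) + 1e-9)   # standardize the frozen features

# Trainable head: logistic regression on the frozen features
w, b = np.zeros(64), 0.0
for _ in range(500):                        # plain gradient descent
    p = 1.0 / (1.0 + np.exp(-(F @ w + b)))  # sigmoid
    grad_w, grad_b = F.T @ (p - y) / len(y), np.mean(p - y)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

acc = np.mean(((1.0 / (1.0 + np.exp(-(F @ w + b)))) > 0.5) == (y == 1.0))
print(acc)   # the trained head separates the classes well
```

Fine-tuning, as in the study, goes one step further by also updating the extractor's own weights with a small learning rate rather than keeping them frozen.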
Cheng, Qi; Xue, Dabin; Wang, Guanyu; Ochieng, Washington Yotto
2017-01-01
The increasing number of vehicles in modern cities brings the problem of increasing crashes. One of the applications or services of Intelligent Transportation Systems (ITS) conceived to improve safety and reduce congestion is collision avoidance. This safety-critical application requires sub-meter-level vehicle state estimation accuracy with very high integrity, continuity and availability, to detect an impending collision and issue a warning or intervene in the case that the warning is not heeded. Because of the challenging city environment, to date there is no approved method capable of delivering this high level of performance in vehicle state estimation. In particular, the current Global Navigation Satellite System (GNSS) based collision avoidance systems have the major limitation that the real-time accuracy of dynamic state estimation deteriorates during abrupt acceleration and deceleration situations, compromising the integrity of collision avoidance. Therefore, to provide the Required Navigation Performance (RNP) for collision avoidance, this paper proposes a novel Particle Filter (PF) based model for the integration or fusion of real-time kinematic (RTK) GNSS position solutions with electronic compass and road segment data used in conjunction with an Autoregressive (AR) motion model. The real-time vehicle state estimates are used together with distance based collision avoidance algorithms to predict potential collisions. The algorithms are tested by simulation and in the field in a representative low-density urban environment. The results show that the proposed algorithm meets the horizontal positioning accuracy requirement for collision avoidance and is superior in positioning accuracy to GNSS-only solutions and to traditional Constant Velocity (CV) and Constant Acceleration (CA) based motion models, with a significant improvement in the prediction accuracy of potential collisions. PMID:29186851
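A minimal sketch of the filtering idea, not the paper's full RTK/compass/road-segment fusion: a one-dimensional particle filter whose particles are propagated by an AR(1) velocity model and weighted by noisy position fixes. All parameter values are illustrative.

```python
import math, random

def pf_track(fixes, n=500, ar=0.9, q=0.5, r=1.0):
    """1-D particle filter sketch: AR(1) velocity motion model, Gaussian
    position-fix likelihood, multinomial resampling each epoch."""
    pos = [random.gauss(fixes[0], r) for _ in range(n)]
    vel = [0.0] * n
    estimates = []
    for z in fixes:
        vel = [ar*v + random.gauss(0.0, q) for v in vel]      # propagate
        pos = [p + v for p, v in zip(pos, vel)]
        w = [math.exp(-0.5*((z - p)/r)**2) for p in pos]      # weight
        s = sum(w) or 1e-300
        w = [wi/s for wi in w]
        estimates.append(sum(p*wi for p, wi in zip(pos, w)))  # MMSE estimate
        idx = random.choices(range(n), weights=w, k=n)        # resample
        pos = [pos[i] for i in idx]
        vel = [vel[i] for i in idx]
    return estimates

random.seed(7)
truth = [1.0*t for t in range(40)]                 # 1 m/s vehicle, 1 Hz epochs
fixes = [x + random.gauss(0.0, 1.0) for x in truth]
est = pf_track(fixes)
rmse = math.sqrt(sum((e - t)**2 for e, t in zip(est, truth)) / len(truth))
```

The motion model smooths the noisy fixes, which is the mechanism the paper relies on to keep accuracy during dynamics; the real system additionally constrains particles with heading and road-segment information.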
Sun, Rui; Cheng, Qi; Xue, Dabin; Wang, Guanyu; Ochieng, Washington Yotto
2017-11-25
The increasing number of vehicles in modern cities brings the problem of increasing crashes. One of the applications or services of Intelligent Transportation Systems (ITS) conceived to improve safety and reduce congestion is collision avoidance. This safety-critical application requires sub-meter-level vehicle state estimation accuracy with very high integrity, continuity and availability, to detect an impending collision and issue a warning or intervene in the case that the warning is not heeded. Because of the challenging city environment, to date there is no approved method capable of delivering this high level of performance in vehicle state estimation. In particular, the current Global Navigation Satellite System (GNSS) based collision avoidance systems have the major limitation that the real-time accuracy of dynamic state estimation deteriorates during abrupt acceleration and deceleration situations, compromising the integrity of collision avoidance. Therefore, to provide the Required Navigation Performance (RNP) for collision avoidance, this paper proposes a novel Particle Filter (PF) based model for the integration or fusion of real-time kinematic (RTK) GNSS position solutions with electronic compass and road segment data used in conjunction with an Autoregressive (AR) motion model. The real-time vehicle state estimates are used together with distance based collision avoidance algorithms to predict potential collisions. The algorithms are tested by simulation and in the field in a representative low-density urban environment. The results show that the proposed algorithm meets the horizontal positioning accuracy requirement for collision avoidance and is superior in positioning accuracy to GNSS-only solutions and to traditional Constant Velocity (CV) and Constant Acceleration (CA) based motion models, with a significant improvement in the prediction accuracy of potential collisions.
Recent developments in heterodyne laser interferometry at Harbin Institute of Technology
NASA Astrophysics Data System (ADS)
Hu, P. C.; Tan, J. B. B.; Yang, H. X. X.; Fu, H. J. J.; Wang, Q.
2013-01-01
In order to fulfill the requirements for high-resolution and high-precision heterodyne interferometric technologies and instruments, the laser interferometry group of HIT has developed several novel techniques for high-resolution and high-precision heterodyne interferometers, such as high-accuracy laser frequency stabilization, dynamic sub-nanometer phase interpolation and dynamic nonlinearity measurement. Based on a novel lock-point correction method and an asymmetric thermal structure, the frequency-stabilized laser achieves a long-term stability of 1.2×10-8, and it remains stably locked even in air flowing at up to 1 m/s. In order to achieve dynamic sub-nanometer resolution in laser heterodyne interferometers, a novel phase interpolation method based on a digital delay line is proposed. Experimental results show that the proposed phase interpolator, built with a 64-multiple PLL and an 8-tap digital delay line, achieves a static accuracy better than 0.31 nm and a dynamic accuracy better than 0.62 nm over velocities ranging from -2 m/s to 2 m/s. Meanwhile, an accurate beam polarization measuring setup is proposed to check and ensure the polarization state of the dual-frequency laser head, and a dynamic optical nonlinearity measuring setup is built to measure the optical nonlinearity of the heterodyne system accurately and quickly. Analysis and experimental results show that the beam polarization measuring setup can achieve an accuracy of 0.03° in the ellipticity angle and 0.04° in the non-orthogonality angle, and the optical nonlinearity measuring setup can achieve an accuracy of 0.13°.
NASA Technical Reports Server (NTRS)
Patt, Frederick S.; Hoisington, Charles M.; Gregg, Watson W.; Coronado, Patrick L.; Hooker, Stanford B. (Editor); Firestone, Elaine R. (Editor); Indest, A. W. (Editor)
1993-01-01
An analysis of orbit propagation models was performed by the Mission Operations element of the Sea-viewing Wide Field-of-View Sensor (SeaWiFS) Project, which has overall responsibility for the instrument scheduling. The orbit propagators selected for this analysis are widely available general perturbations models. The analysis includes both absolute accuracy determination and comparisons of different versions of the models. The results show that all of the models tested meet accuracy requirements for scheduling and data acquisition purposes. For internal Project use the SGP4 propagator, developed by the North American Air Defense (NORAD) Command, has been selected. This model includes atmospheric drag effects and, therefore, provides better accuracy. For High Resolution Picture Transmission (HRPT) ground stations, which have less stringent accuracy requirements, the publicly available Brouwer-Lyddane models are recommended. The SeaWiFS Project will make available portable source code for a version of this model developed by the Data Capture Facility (DCF).
Micro-assembly of three-dimensional rotary MEMS mirrors
NASA Astrophysics Data System (ADS)
Wang, Lidai; Mills, James K.; Cleghorn, William L.
2009-02-01
We present a novel approach to construct three-dimensional rotary micro-mirrors, which are fundamental components for building 1×N or N×M optical switching systems. A rotary micro-mirror consists of two microparts: a rotary micro-motor and a micro-mirror. Both microparts are fabricated with PolyMUMPs, a surface micromachining process. A sequential robotic microassembly process is developed to join the two microparts together to construct a three-dimensional device. In order to achieve high positioning accuracy and a strong mechanical connection, the micro-mirror is joined to the micro-motor using an adhesive mechanical fastener. The mechanical fastener has self-alignment ability and provides a temporary joint between the two microparts. The adhesive bonding creates a strong permanent connection, which does not require extra supporting plates for the micro-mirror. A hybrid manipulation strategy, which includes pick-and-place and pushing-based manipulations, is utilized to manipulate the micro-mirror. The pick-and-place manipulation has the ability to globally position the micro-mirror in six degrees of freedom. The pushing-based manipulation can achieve high positioning accuracy. This microassembly approach has great flexibility and high accuracy; furthermore, it does not require extra supporting plates, which greatly simplifies the assembly process.
NASA Astrophysics Data System (ADS)
Craven, S. M.; Hoenigman, J. R.; Moddeman, W. E.
1981-11-01
The potential use of secondary ion mass spectroscopy (SIMS) to analyze biological samples for calcium isotopes is discussed. Comparison of UTI and Extranuclear based quadrupole systems is made on the basis of the analysis of CaO and calcium metal. The Extranuclear quadrupole based system is superior in resolution and sensitivity to the UTI system and is recommended. For determination of calcium isotopes to within an accuracy of a few percent a high resolution quadrupole, such as the Extranuclear, and signal averaging capability are required. Charge neutralization will be mandated for calcium oxide, calcium nitrate, or calcium oxalate. SIMS is not capable of the high precision and high accuracy results possible by thermal ionization methods, but where faster analysis is desirable with an accuracy of a few percent, SIMS is a viable alternative.
Comparison of spike-sorting algorithms for future hardware implementation.
Gibson, Sarah; Judy, Jack W; Markovic, Dejan
2008-01-01
Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being most robust over noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
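The nonlinear energy operator chosen for detection is itself a one-line computation; a hypothetical sketch with synthetic data (spike location, noise level, and the threshold multiplier are illustrative):

```python
import random

def neo(x):
    """Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1]*x[n+1].
    Large only when the signal is simultaneously high-amplitude and
    high-frequency, which is what makes it a cheap spike detector."""
    return [x[n]*x[n] - x[n-1]*x[n+1] for n in range(1, len(x) - 1)]

random.seed(0)
sig = [random.gauss(0.0, 0.1) for _ in range(200)]   # background noise
spike = [0.5, 1.5, 3.0, 1.5, 0.5]                    # injected fast transient
for i, a in enumerate(spike):
    sig[98 + i] += a
psi = neo(sig)
thresh = 8.0 * sum(psi) / len(psi)   # a common choice: C times the mean of psi
peak = 1 + max(range(len(psi)), key=psi.__getitem__)  # index back into sig
```

The peak of psi lands on the injected transient, while the low-amplitude noise stays well below the adaptive threshold.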
NASA Astrophysics Data System (ADS)
Zhao, Qian; Wang, Lei; Wang, Jazer; Wang, ChangAn; Shi, Hong-Fei; Guerrero, James; Feng, Mu; Zhang, Qiang; Liang, Jiao; Guo, Yunbo; Zhang, Chen; Wallow, Tom; Rio, David; Wang, Lester; Wang, Alvin; Wang, Jen-Shiang; Gronlund, Keith; Lang, Jun; Koh, Kar Kit; Zhang, Dong Qing; Zhang, Hongxin; Krishnamurthy, Subramanian; Fei, Ray; Lin, Chiawen; Fang, Wei; Wang, Fei
2018-03-01
Classical SEM metrology, CD-SEM, uses low data rate and extensive frame-averaging technique to achieve high-quality SEM imaging for high-precision metrology. The drawbacks include prolonged data collection time and larger photoresist shrinkage due to excess electron dosage. This paper will introduce a novel e-beam metrology system based on a high data rate, large probe current, and ultra-low noise electron optics design. At the same level of metrology precision, this high speed e-beam metrology system could significantly shorten data collection time and reduce electron dosage. In this work, the data collection speed is higher than 7,000 images per hr. Moreover, a novel large field of view (LFOV) capability at high resolution was enabled by an advanced electron deflection system design. The area coverage by LFOV is >100x larger than classical SEM. Superior metrology precision throughout the whole image has been achieved, and high quality metrology data could be extracted from full field. This new capability on metrology will further improve metrology data collection speed to support the need for large volume of metrology data from OPC model calibration of next generation technology. The shrinking EPE (Edge Placement Error) budget places more stringent requirement on OPC model accuracy, which is increasingly limited by metrology errors. In the current practice of metrology data collection and data processing to model calibration flow, CD-SEM throughput becomes a bottleneck that limits the amount of metrology measurements available for OPC model calibration, impacting pattern coverage and model accuracy especially for 2D pattern prediction. To address the trade-off in metrology sampling and model accuracy constrained by the cycle time requirement, this paper employs the high speed e-beam metrology system and a new computational software solution to take full advantage of the large volume data and significantly reduce both systematic and random metrology errors. 
The new computational software enables users to generate large quantity of highly accurate EP (Edge Placement) gauges and significantly improve design pattern coverage with up to 5X gain in model prediction accuracy on complex 2D patterns. Overall, this work showed >2x improvement in OPC model accuracy at a faster model turn-around time.
Autonomous satellite navigation using starlight refraction angle measurements
NASA Astrophysics Data System (ADS)
Ning, Xiaolin; Wang, Longhua; Bai, Xinbei; Fang, Jiancheng
2013-05-01
An on-board autonomous navigation capability is required to reduce the operation costs and enhance the navigation performance of future satellites. Autonomous navigation by stellar refraction is a type of autonomous celestial navigation method that uses high-accuracy star sensors instead of Earth sensors to provide information regarding Earth's horizon. In previous studies, the refraction apparent height has typically been used for such navigation. However, the apparent height cannot be measured directly by a star sensor and can only be calculated by the refraction angle and an atmospheric refraction model. Therefore, additional errors are introduced by the uncertainty and nonlinearity of atmospheric refraction models, which result in reduced navigation accuracy and reliability. A new navigation method based on the direct measurement of the refraction angle is proposed to solve this problem. Techniques for the determination of the refraction angle are introduced, and a measurement model for the refraction angle is established. The method is tested and validated by simulations. When the starlight refraction height ranges from 20 to 50 km, a positioning accuracy of better than 100 m can be achieved for a low-Earth-orbit (LEO) satellite using the refraction angle, while the positioning accuracy of the traditional method using the apparent height is worse than 500 m under the same conditions. Furthermore, an analysis of the factors that affect navigation accuracy, including the measurement accuracy of the refraction angle, the number of visible refracted stars per orbit and the installation azimuth of star sensor, is presented. This method is highly recommended for small satellites in particular, as no additional hardware besides two star sensors is required.
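The measured quantity is simply the angle between the refracted line of sight and the catalog direction of the star; a small hedged sketch (the sensor frame and the 30-arcsecond deflection are illustrative):

```python
import math

def refraction_angle(v_obs, v_cat):
    """Refraction angle between the measured (refracted) star vector and
    the catalog (unrefracted) direction, both unit vectors in the same
    sensor frame."""
    dot = sum(a*b for a, b in zip(v_obs, v_cat))
    dot = max(-1.0, min(1.0, dot))   # guard acos against rounding
    return math.acos(dot)

# Example: catalog star along +z, observed direction deflected by 30 arcsec.
d = math.radians(30.0/3600.0)
v_cat = (0.0, 0.0, 1.0)
v_obs = (math.sin(d), 0.0, math.cos(d))
angle_arcsec = math.degrees(refraction_angle(v_obs, v_cat)) * 3600.0
```

Using this angle directly, rather than converting it to an apparent height, is what lets the method avoid the atmospheric-model nonlinearity described above.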
A design of optical modulation system with pixel-level modulation accuracy
NASA Astrophysics Data System (ADS)
Zheng, Shiwei; Qu, Xinghua; Feng, Wei; Liang, Baoqiu
2018-01-01
Vision measurement has been widely used in the field of dimensional measurement and surface metrology. However, traditional methods of vision measurement have many limits, such as low dynamic range and poor reconfigurability. Optical modulation before image formation has the advantages of high dynamic range, high accuracy and greater flexibility, and the modulation accuracy is the key parameter that determines the accuracy and effectiveness of an optical modulation system. In this paper, an optical modulation system with pixel-level accuracy is designed and built based on multi-point reflective imaging theory and a digital micromirror device (DMD). The system consists of the digital micromirror device, a CCD camera and a lens. First, we achieved accurate pixel-to-pixel correspondence between the DMD mirrors and the CCD pixels by means of moiré fringes and image processing with sampling and interpolation. Then we built three coordinate systems and calculated the mathematical relationship between the coordinates of the digital micro-mirrors and the CCD pixels using a checkerboard pattern. A verification experiment proves that the correspondence error is less than 0.5 pixel. The results show that the modulation accuracy of the system meets the requirements of modulation. Furthermore, the highly reflective edge of a metal circular piece can be detected using the system, which proves the effectiveness of the optical modulation system.
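The DMD-to-CCD relationship can be sketched as a least-squares fit over matched checkerboard corners. This hypothetical version fits an affine map (the paper's actual mathematical model may differ); the synthetic corner coordinates are illustrative.

```python
def fit_affine(dmd, ccd):
    """Least-squares affine map DMD->CCD: u = a*x + b*y + c (and likewise
    for v), fit from matched checkerboard corners via 3x3 normal equations."""
    rows = [[x, y, 1.0] for x, y in dmd]
    A = [[sum(r[i]*r[j] for r in rows) for j in range(3)] for i in range(3)]
    out = []
    for k in (0, 1):
        b = [sum(r[i]*p[k] for r, p in zip(rows, ccd)) for i in range(3)]
        M = [A[i][:] + [b[i]] for i in range(3)]
        for i in range(3):                      # 3x3 Gaussian elimination
            piv = max(range(i, 3), key=lambda r: abs(M[r][i]))
            M[i], M[piv] = M[piv], M[i]
            for r in range(3):
                if r != i:
                    fac = M[r][i] / M[i][i]
                    M[r] = [u - fac*v for u, v in zip(M[r], M[i])]
        out.append([M[i][3] / M[i][i] for i in range(3)])
    return out   # [[a, b, c] for u, [a, b, c] for v]

# Synthetic corners: a slight rotation/scale plus an offset between frames.
dmd = [(x, y) for x in range(0, 100, 10) for y in range(0, 100, 10)]
ccd = [(1.01*x + 0.02*y + 5.0, -0.01*x + 0.99*y - 3.0) for x, y in dmd]
au, av = fit_affine(dmd, ccd)
pred_u = au[0]*30 + au[1]*40 + au[2]   # predicted CCD u for DMD pixel (30, 40)
```

On exact affine data the fit recovers the map to machine precision; with real moiré-derived correspondences the residual would bound the sub-pixel mapping error.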
Zhao, Yinzhi; Zhang, Peng; Guo, Jiming; Li, Xin; Wang, Jinling; Yang, Fei; Wang, Xinzhe
2018-06-20
Due to the great influence of multipath effects, noise, and clock error on pseudorange measurements, the carrier phase double difference equation is widely used in high-precision indoor pseudolite positioning. The initial position is mostly determined by the known point initialization (KPI) method, and the ambiguities can then be fixed with the LAMBDA method. In this paper, a new method to achieve high-precision indoor pseudolite positioning without KPI is proposed, in which the initial coordinates can be obtained quickly enough to meet the accuracy requirement of the indoor LAMBDA method. The detailed process of the method is as follows: for the low-cost single-frequency pseudolite system, the static differential pseudolite system (DPL) method is used to obtain low-accuracy positioning coordinates of the rover station quickly. Then, the ambiguity function method (AFM) is used to search for the coordinates in the corresponding epoch. The coordinates obtained by the AFM can meet the initial accuracy requirement of the LAMBDA method, so that the double difference carrier phase ambiguities can be correctly fixed. Following the above steps, high-precision indoor pseudolite positioning can be realized. Several experiments, including static and dynamic tests, are conducted to verify the feasibility of the new method. According to the results of the experiments, initial coordinates with decimeter-level accuracy can be obtained through the DPL. For the AFM part, a one-meter search scope with two-centimeter or four-centimeter search steps is used to ensure centimeter-level precision and high search efficiency. After dealing with the problem of multiple peaks caused by the cosine form of the ambiguity function, the coordinate information of the maximum ambiguity function value (AFV) is taken as the initial value for the LAMBDA method, and the ambiguities can be fixed quickly.
The new method provides accuracies at the centimeter level for dynamic experiments and at the millimeter level for static ones.
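The ambiguity-function idea can be sketched in a few lines: phase residuals enter through a cosine, so the unknown integer cycles drop out, and the grid point that maximizes the summed cosines becomes the initial coordinate. Everything below (anchor layout, wavelength, scope, steps) is illustrative, not the paper's configuration.

```python
import math

LAM = 0.19   # illustrative carrier wavelength, m

def afv(px, py, anchors, phases):
    """Ambiguity function value at a candidate 2-D position: sum of
    cos(2*pi*(observed fractional phase - predicted range in cycles));
    integer ambiguities cancel inside the cosine."""
    s = 0.0
    for (ax, ay), ph in zip(anchors, phases):
        pred = math.hypot(px - ax, py - ay) / LAM
        s += math.cos(2*math.pi*(ph - pred))
    return s

anchors = [(0.0, 0.0), (8.0, 0.5), (7.5, 9.0), (-0.5, 8.5)]
truth = (3.317, 4.829)
phases = [(math.hypot(truth[0]-ax, truth[1]-ay)/LAM) % 1.0 for ax, ay in anchors]

# One-meter search scope around a coarse (e.g. DPL) position with 2 cm steps,
# then a 1 mm refinement around the best coarse cell.
cx, cy = 3.0, 5.0
coarse = [(cx - 0.5 + i*0.02, cy - 0.5 + j*0.02)
          for i in range(51) for j in range(51)]
bx, by = max(coarse, key=lambda p: afv(p[0], p[1], anchors, phases))
fine = [(bx - 0.02 + i*0.001, by - 0.02 + j*0.001)
        for i in range(41) for j in range(41)]
best = max(fine, key=lambda p: afv(p[0], p[1], anchors, phases))
```

At the true position every residual is an integer number of cycles, so the AFV reaches its maximum (the number of transmitters); the multiple-peak problem mentioned above corresponds to other grid points where the cosines nearly align.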
High-order flux correction/finite difference schemes for strand grids
NASA Astrophysics Data System (ADS)
Katz, Aaron; Work, Dalon
2015-02-01
A novel high-order method combining unstructured flux correction along body surfaces and high-order finite differences normal to surfaces is formulated for unsteady viscous flows on strand grids. The flux correction algorithm is applied in each unstructured layer of the strand grid, and the layers are then coupled together via a source term containing derivatives in the strand direction. Strand-direction derivatives are approximated to high-order via summation-by-parts operators for first derivatives and second derivatives with variable coefficients. We show how this procedure allows for the proper truncation error canceling properties required for the flux correction scheme. The resulting scheme possesses third-order design accuracy, but often exhibits fourth-order accuracy when higher-order derivatives are employed in the strand direction, especially for highly viscous flows. We prove discrete conservation for the new scheme and time stability in the absence of the flux correction terms. Results in two dimensions are presented that demonstrate improvements in accuracy with minimal computational and algorithmic overhead over traditional second-order algorithms.
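A minimal illustration of a summation-by-parts first-derivative operator (second order here for brevity; the paper uses higher-order variants): one-sided closures at the boundary points, central differences in the interior, with the norm H = h·diag(1/2, 1, …, 1, 1/2) making the discrete integration-by-parts identity exact.

```python
def sbp_d1(u, h):
    """Second-order SBP first derivative: with Q = H*D, the operator
    satisfies Q + Q^T = diag(-1, 0, ..., 0, 1), i.e. exact discrete
    summation by parts."""
    n = len(u)
    interior = [(u[i+1] - u[i-1]) / (2*h) for i in range(1, n - 1)]
    return [(u[1] - u[0]) / h] + interior + [(u[n-1] - u[n-2]) / h]

n, h = 20, 0.05
x = [i*h for i in range(n)]
u = [2.0*xi + 1.0 for xi in x]
du = sbp_d1(u, h)                     # exactly 2 everywhere for linear data

# Discrete integration by parts: (u, Dv)_H + (Du, v)_H = u_N v_N - u_0 v_0
v = [xi**2 for xi in x]
dv = sbp_d1(v, h)
w = [h/2] + [h]*(n - 2) + [h/2]       # SBP norm (quadrature) weights
lhs = (sum(w[i]*u[i]*dv[i] for i in range(n))
       + sum(w[i]*du[i]*v[i] for i in range(n)))
rhs = u[-1]*v[-1] - u[0]*v[0]
```

This mimetic property is what underlies the time-stability proof cited in the abstract: boundary terms are the only thing left after summation, exactly as in the continuous energy estimate.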
FPGA-Based Smart Sensor for Online Displacement Measurements Using a Heterodyne Interferometer
Vera-Salas, Luis Alberto; Moreno-Tapia, Sandra Veronica; Garcia-Perez, Arturo; de Jesus Romero-Troncoso, Rene; Osornio-Rios, Roque Alfredo; Serroukh, Ibrahim; Cabal-Yepez, Eduardo
2011-01-01
The measurement of small displacements on the nanometric scale demands metrological systems of high accuracy and precision. In this context, interferometer-based displacement measurements have become the main tools used for traceable dimensional metrology. The different industrial applications in which small displacement measurements are employed require online measurements, high-speed processes, open-architecture control systems, and good adaptability to specific process conditions. The main contribution of this work is the development of a smart sensor for large displacement measurement based on phase measurement, which achieves high accuracy and resolution and is designed to be used with a commercial heterodyne interferometer. The system is based on a low-cost Field Programmable Gate Array (FPGA), allowing the integration of several functions in a single portable device. This system is optimal for high-speed applications where online measurement is needed, and the reconfigurability feature allows the addition of different modules for error compensation, as might be required by a specific application. PMID:22164040
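The core conversion such a phase-measuring sensor performs is from accumulated interferometric phase to displacement; a hedged sketch (the HeNe wavelength and the fold factor of 2 are assumptions — the fold depends on the interferometer optics):

```python
import math

def phase_to_displacement(phase_rad, wavelength=632.8e-9, fold=2):
    """Accumulated heterodyne phase -> displacement. One full fringe
    (2*pi rad) corresponds to wavelength/fold of target motion; fold=2
    for a plain linear interferometer, where the round trip doubles the
    optical path change."""
    return phase_rad / (2*math.pi) * wavelength / fold

d = phase_to_displacement(2*math.pi)   # one fringe: 316.4 nm of motion
```

Sub-nanometer resolution then hinges on how finely the FPGA can interpolate the phase within one fringe.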
Achieving accuracy in first-principles calculations for EOS: basis completeness at high temperatures
NASA Astrophysics Data System (ADS)
Wills, John; Mattsson, Ann
2013-06-01
First-principles electronic structure calculations can provide EOS data in regimes of pressure and temperature where accurate experimental data are difficult or impossible to obtain. This lack, however, also precludes validation of calculations in those regimes. Factors that influence the accuracy of first-principles data include (1) theoretical approximations and (2) computational approximations used in implementing and solving the underlying equations. In the first category are the approximate exchange/correlation functionals and the approximate wave equations approximating the Dirac equation; in the second are basis completeness, series convergence, and truncation errors. We are using two rather different electronic structure methods (VASP and RSPt) to establish definitively the accuracy requirements of the second type, common to both. In this talk, we discuss requirements for converged calculations at high temperature and moderate pressure. At convergence we show that both methods give identical results. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.
Calibration Assessment of Uncooled Thermal Cameras for Deployment on UAV platforms
NASA Astrophysics Data System (ADS)
Aragon, B.; Parkes, S. D.; Lucieer, A.; Turner, D.; McCabe, M.
2017-12-01
In recent years an array of miniaturized sensors has been developed and deployed on Unmanned Aerial Vehicles (UAVs). Prior to gaining useful data from these integrations, it is vitally important to quantify sensor accuracy, precision and the cross-sensitivity of retrieved measurements to environmental variables. Small uncooled thermal frame cameras provide a novel solution for monitoring surface temperatures from UAVs with very high spatial resolution, with retrievals being used to investigate heat stress or evapotranspiration. For these studies, accuracies of a few degrees are generally required. Although radiometrically calibrated thermal cameras have recently become commercially available, confirmation of the accuracy of these sensors is required. Here we detail a system for investigating accuracy and precision, start-up stabilisation time, the dependence of retrieved temperatures on ambient temperature, and image vignetting. The calibration system uses a relatively inexpensive blackbody source deployed with the sensor inside an environmental chamber to maintain and control the ambient temperature. Calibration of a number of different thermal sensors commonly used for UAV deployment was investigated. Vignetting was shown to be a major limitation on sensor accuracy, requiring characterization through measuring a spatially uniform temperature target such as the blackbody. Our results also showed that a stabilization period is required after powering on the sensors and before conducting an aerial survey. Through use of the environmental chamber it was shown that the ambient temperature influenced the temperatures retrieved by the different sensors. This study illustrates the importance of determining the calibration and cross-sensitivities of thermal sensors to obtain accurate thermal maps that can be used to study crop ecosystems.
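Characterising vignetting against a uniform blackbody target reduces, in the simplest case, to building a per-pixel offset map; a hypothetical sketch (the 4×4 "sensor" and the synthetic vignetting profile are illustrative):

```python
def vignette_offsets(frames):
    """Per-pixel offset map from repeated frames of a spatially uniform
    blackbody: each pixel's mean reading minus the frame-wide mean.
    Subtracting the map from later frames removes the static vignetting
    pattern (absolute calibration still needs the blackbody reference)."""
    n, rows, cols = len(frames), len(frames[0]), len(frames[0][0])
    mean = [[sum(f[r][c] for f in frames) / n for c in range(cols)]
            for r in range(rows)]
    flat = sum(map(sum, mean)) / (rows * cols)
    return [[m - flat for m in row] for row in mean]

# Synthetic 4x4 sensor viewing a 30.0 C blackbody, cooler toward the corners.
vig = [[-0.5*((r - 1.5)**2 + (c - 1.5)**2)/4.5 for c in range(4)]
       for r in range(4)]
frames = [[[30.0 + vig[r][c] for c in range(4)] for r in range(4)]
          for _ in range(10)]
off = vignette_offsets(frames)
raw = [[30.0 + vig[r][c] for c in range(4)] for r in range(4)]
corrected = [[raw[r][c] - off[r][c] for c in range(4)] for r in range(4)]
```

After correction the synthetic frame is spatially flat, which is the behaviour a real flat-field map should approximate once noise is averaged down.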
NASA Technical Reports Server (NTRS)
Radomski, M. S.; Doll, C. E.
1991-01-01
This investigation concerns the effects on Ocean Topography Experiment (TOPEX) spacecraft operational orbit determination of ionospheric refraction error affecting tracking measurements from the Tracking and Data Relay Satellite System (TDRSS). Although tracking error from this source is mitigated by the high frequencies (K-band) used for the space-to-ground links and by the high altitudes for the space-to-space links, these effects are of concern for the relatively high-altitude (1334 kilometers) TOPEX mission. This concern is due to the accuracy required for operational orbit-determination by the Goddard Space Flight Center (GSFC) and to the expectation that solar activity will still be relatively high at TOPEX launch in mid-1992. The ionospheric refraction error on S-band space-to-space links was calculated by a prototype observation-correction algorithm using the Bent model of ionosphere electron densities implemented in the context of the Goddard Trajectory Determination System (GTDS). Orbit determination error was evaluated by comparing parallel TOPEX orbit solutions, applying and omitting the correction, using the same simulated TDRSS tracking observations. The tracking scenarios simulated those planned for the observation phase of the TOPEX mission, with a preponderance of one-way return-link Doppler measurements. The results of the analysis showed most TOPEX operational accuracy requirements to be little affected by space-to-space ionospheric error. The determination of along-track velocity changes after ground-track adjustment maneuvers, however, is significantly affected when compared with the stringent 0.1-millimeter-per-second accuracy requirements, assuming uncoupled premaneuver and postmaneuver orbit determination. Space-to-space ionospheric refraction on the 24-hour postmaneuver arc alone causes 0.2 millimeter-per-second errors in along-track delta-v determination using uncoupled solutions. 
Coupling the premaneuver and postmaneuver solutions, however, appears likely to reduce this figure substantially. Plans and recommendations for response to these findings are presented.
Comparison of Classifier Architectures for Online Neural Spike Sorting.
Saeed, Maryam; Khan, Amir Ali; Kamboh, Awais Mehmood
2017-04-01
High-density, intracranial recordings from micro-electrode arrays need to undergo Spike Sorting in order to associate the recorded neuronal spikes to particular neurons. This involves spike detection, feature extraction, and classification. To reduce the data transmission and power requirements, on-chip real-time processing is becoming very popular. However, high computational resources are required for classifiers in on-chip spike-sorters, making scalability a great challenge. In this review paper, we analyze several popular classifiers to propose five new hardware architectures using the off-chip training with on-chip classification approach. These include support vector classification, fuzzy C-means classification, self-organizing maps classification, moving-centroid K-means classification, and Cosine distance classification. The performance of these architectures is analyzed in terms of accuracy and resource requirement. We establish that the neural networks based Self-Organizing Maps classifier offers the most viable solution. A spike sorter based on the Self-Organizing Maps classifier, requires only 7.83% of computational resources of the best-reported spike sorter, hierarchical adaptive means, while offering a 3% better accuracy at 7 dB SNR.
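Of the five architectures, the cosine-distance classifier is the simplest to sketch; in the off-chip training / on-chip classification scheme, the centroids would be computed off-chip and only the distance comparison runs on the implant. The centroids and feature vectors below are illustrative.

```python
import math

def cosine_classify(x, centroids):
    """Assign feature vector x to the centroid with the smallest cosine
    distance 1 - <x,c>/(|x||c|); scale-invariant, and cheap in hardware
    because it needs only multiplies and one norm per class."""
    def cosd(a, b):
        num = sum(p*q for p, q in zip(a, b))
        den = math.sqrt(sum(p*p for p in a) * sum(q*q for q in b))
        return 1.0 - num / den
    return min(range(len(centroids)), key=lambda k: cosd(x, centroids[k]))

centroids = [(1.0, 0.1, 0.0), (0.0, 1.0, 0.8)]        # two illustrative units
label = cosine_classify((2.1, 0.3, 0.1), centroids)   # near a scaled unit 0
```

Scale invariance helps when spike amplitude drifts over a session, though the paper's comparison found the self-organizing-map classifier the better overall accuracy/resource trade-off.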
High-Reproducibility and High-Accuracy Method for Automated Topic Classification
NASA Astrophysics Data System (ADS)
Lancichinetti, Andrea; Sirer, M. Irmak; Wang, Jane X.; Acuna, Daniel; Körding, Konrad; Amaral, Luís A. Nunes
2015-01-01
Much of human knowledge sits in large databases of unstructured text. Leveraging this knowledge requires algorithms that extract and record metadata on unstructured text documents. Assigning topics to documents will enable intelligent searching, statistical characterization, and meaningful classification. Latent Dirichlet allocation (LDA) is the state of the art in topic modeling. Here, we perform a systematic theoretical and numerical analysis that demonstrates that current optimization techniques for LDA often yield results that are not accurate in inferring the most suitable model parameters. Adapting approaches from community detection in networks, we propose a new algorithm that displays high reproducibility and high accuracy and also has high computational efficiency. We apply it to a large set of documents in the English Wikipedia and reveal its hierarchical structure.
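Reproducibility of a topic model can be quantified by comparing the topic assignments from two independent runs; one common score is normalized mutual information, sketched here (the paper's exact reproducibility metric may differ):

```python
import math
from collections import Counter

def nmi(a, b):
    """Normalized mutual information between two label assignments:
    1.0 means the runs agree up to a relabeling of topics, 0 means the
    assignments are independent."""
    n = len(a)
    ca, cb, cab = Counter(a), Counter(b), Counter(zip(a, b))
    ent = lambda c: -sum((v/n)*math.log(v/n) for v in c.values())
    mi = sum((v/n)*math.log(n*v/(ca[x]*cb[y])) for (x, y), v in cab.items())
    ha, hb = ent(ca), ent(cb)
    return mi / math.sqrt(ha*hb) if ha > 0 and hb > 0 else 1.0

run1 = [0, 0, 1, 1, 2, 2]   # topic labels from one run
run2 = [2, 2, 0, 0, 1, 1]   # the same partition, topics relabeled
score = nmi(run1, run2)     # 1.0: perfectly reproducible
```

A low score between runs with different random seeds is exactly the failure mode of standard LDA optimization that the abstract describes.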
Ripamonti, Giancarlo; Abba, Andrea; Geraci, Angelo
2010-05-01
A method for measuring time intervals accurate to the picosecond range is based on phase measurements of oscillating waveforms synchronous with their beginning and/or end. The oscillation is generated by triggering an LC resonant circuit, whose capacitance is precharged. By using high Q resonators and a final active quenching of the oscillation, it is possible to conjugate high time resolution and a small measurement time, which allows a high measurement rate. Methods for fast analysis of the data are considered and discussed with reference to computing resource requirements, speed, and accuracy. Experimental tests show the feasibility of the method and a time accuracy better than 4 ps rms. Methods aimed at further reducing hardware resources are finally discussed.
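The phase of the triggered oscillation can be recovered from the digitized waveform by a linear least-squares fit at the known resonant frequency; a hedged sketch with illustrative numbers (a 125 MHz tone sampled at 1 GS/s over an integer number of periods):

```python
import math

def fit_start_time(samples, fs, f0):
    """Least-squares fit of x[n] = cos(2*pi*f0*(t_n - t0)): project onto
    cos/sin at f0 (orthogonal over an integer number of periods), then
    recover t0 = atan2(b, a) / (2*pi*f0)."""
    n = len(samples)
    a = sum(x*math.cos(2*math.pi*f0*k/fs) for k, x in enumerate(samples)) * 2/n
    b = sum(x*math.sin(2*math.pi*f0*k/fs) for k, x in enumerate(samples)) * 2/n
    return math.atan2(b, a) / (2*math.pi*f0)

fs, f0, t0 = 1e9, 125e6, 13.7e-12          # true offset: 13.7 ps
wave = [math.cos(2*math.pi*f0*(k/fs - t0)) for k in range(64)]
t0_est = fit_start_time(wave, fs, f0)
```

On a noiseless waveform the fit recovers the picosecond-scale offset far below the sample spacing; with real ADC noise, the achievable resolution scales with SNR and the number of fitted cycles, consistent with the sub-4-ps result reported above.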
DOE Office of Scientific and Technical Information (OSTI.GOV)
Craven, S.M.; Hoenigman, J.R.; Moddeman, W.E.
1981-11-20
The potential use of secondary ion mass spectroscopy (SIMS) to analyze biological samples for calcium isotopes is discussed. Comparison of UTI and Extranuclear based quadrupole systems is made on the basis of the analysis of CaO and calcium metal. The Extranuclear quadrupole based system is superior in resolution and sensitivity to the UTI system and is recommended. For determination of calcium isotopes to within an accuracy of a few percent a high resolution quadrupole, such as the Extranuclear, and signal averaging capability are required. Charge neutralization will be mandated for calcium oxide, calcium nitrate, or calcium oxalate. SIMS is not capable of the high precision and high accuracy results possible by thermal ionization methods, but where faster analysis is desirable with an accuracy of a few percent, SIMS is a viable alternative.
High Accuracy Evaluation of the Finite Fourier Transform Using Sampled Data
NASA Technical Reports Server (NTRS)
Morelli, Eugene A.
1997-01-01
Many system identification and signal processing procedures can be done advantageously in the frequency domain. A required preliminary step for this approach is the transformation of sampled time domain data into the frequency domain. The analytical tool used for this transformation is the finite Fourier transform. Inaccuracy in the transformation can degrade system identification and signal processing results. This work presents a method for evaluating the finite Fourier transform using cubic interpolation of sampled time domain data for high accuracy, and the chirp Zeta-transform for arbitrary frequency resolution. The accuracy of the technique is demonstrated in example cases where the transformation can be evaluated analytically. Arbitrary frequency resolution is shown to be important for capturing details of the data in the frequency domain. The technique is demonstrated using flight test data from a longitudinal maneuver of the F-18 High Alpha Research Vehicle.
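The arbitrary-frequency idea can be sketched directly: evaluate the finite Fourier integral of sampled data at any frequency, not just the DFT bins. In this simplified sketch Simpson weighting stands in for the paper's cubic interpolation, and each frequency is evaluated directly rather than batched through the chirp z-transform.

```python
import cmath, math

def finite_ft(x, dt, f):
    """Finite Fourier transform of uniformly sampled data at an arbitrary
    frequency f, using composite Simpson weights (a stand-in for the
    paper's cubic interpolation). Requires an odd number of samples."""
    n = len(x)
    w = [1.0] + [4.0 if k % 2 else 2.0 for k in range(1, n - 1)] + [1.0]
    return sum(wk * xk * cmath.exp(-2j*math.pi*f*k*dt)
               for k, (wk, xk) in enumerate(zip(w, x))) * dt / 3.0

dt = 0.005
x = [math.cos(6*math.pi*k*dt) for k in range(201)]  # 3 Hz cosine on [0, 1]
X_on = finite_ft(x, dt, 3.0)    # analytic value is exactly 0.5
X_off = finite_ft(x, dt, 2.7)   # off-bin frequency: arbitrary resolution
```

The higher-order weighting recovers the on-frequency value far more accurately than a rectangle-rule DFT would, which is the error mechanism the abstract warns can degrade identification results.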
A noncontact laser technique for circular contouring accuracy measurement
NASA Astrophysics Data System (ADS)
Wang, Charles; Griffin, Bob
2001-02-01
The worldwide competition in manufacturing frequently requires high-speed machine tools to deliver contouring accuracy on the order of a few micrometers while moving at relatively high feed rates. Traditional test equipment is rather limited in its capability to measure contours of small radius at high speed. Described here is a new noncontact laser measurement technique for the testing of circular contouring accuracy. This technique is based on a single-aperture laser Doppler displacement meter with a flat mirror as the target. It is of a noncontact type with the ability to vary the circular path radius continuously at data rates of up to 1000 Hz. Using this instrument, the actual radius, feed rate, velocity, and acceleration profiles can also be determined. The basic theory of operation, the hardware setup, the data collection, the data processing, and the error budget are discussed.
Coincidence-anticipation timing requirements are different in racket sports.
Akpinar, Selçuk; Devrilmez, Erhan; Kirazci, Sadettin
2012-10-01
The aim of this study was to compare the coincidence-anticipation timing accuracy of athletes of different racket sports with various stimulus velocity requirements. Ninety players (15 girls, 15 boys for each sport) from tennis (M age = 12.4 yr., SD = 1.4), badminton (M age = 12.5 yr., SD = 1.4), and table tennis (M age = 12.4 yr., SD = 1.2) participated in this study. Three different stimulus velocities, low, moderate, and high, were used to simulate the velocity requirements of these racket sports. Tennis players had higher accuracy when they performed under the low stimulus velocity compared to badminton and table tennis players. Badminton players performed better under the moderate speed compared to tennis and table tennis players. Table tennis players had better performance than tennis and badminton players under the high stimulus velocity. Therefore, the visual and motor systems of players from different racket sports may adapt to a stimulus velocity in coincidence-anticipation timing, which is specific to each type of racket sport.
Tian, Bian; Zhao, Yulong; Jiang, Zhuangde; Zhang, Ling; Liao, Nansheng; Liu, Yuanhao; Meng, Chao
2009-01-01
In this paper we describe the design and testing of a micro piezoresistive pressure sensor for a Tire Pressure Measurement System (TPMS), which has the advantages of a minimized structure, high sensitivity, linearity and accuracy. Through analysis of the stress distribution of the diaphragm using the ANSYS software, a model of the structure was established. The fabrication on a single silicon substrate utilizes the technologies of anisotropic chemical etching and packaging through glass anodic bonding. The performance of this type of piezoresistive sensor, including size, sensitivity, and long-term stability, was investigated. The results indicate that the accuracy is 0.5% FS; therefore this design meets the requirements for a TPMS, and not only has a smaller size and simplicity of preparation, but also has high sensitivity and accuracy.
Gauge Blocks – A Zombie Technology
Doiron, Ted
2008-01-01
Gauge blocks have been the primary method for disseminating length traceability for over 100 years. Their longevity was based on two things: the relatively low cost of delivering very high accuracy to users, and the technical limitation that the range of high precision gauging systems was very small. While the first reason is still true, the second factor is being displaced by changes in measurement technology since the 1980s. New long range sensors do not require master gauges that are nearly the same length as the part being inspected, and thus one of the primary attributes of gauge blocks, wringing stacks to match the part, is no longer needed. Relaxing the requirement that gauges wring presents an opportunity to develop new types of end standards that would increase the accuracy and usefulness of gauging systems. PMID:27096119
NASA Technical Reports Server (NTRS)
Hodges, Richard E.; Sands, O. Scott; Huang, John; Bassily, Samir
2006-01-01
Improved surface accuracy for deployable reflectors has brought with it the possibility of Ka-band reflector antennas with extents on the order of 1000 wavelengths. Such antennas are being considered for high-rate data delivery from planetary distances. Maintaining losses at reasonable levels requires a sufficiently capable Attitude Determination and Control System (ADCS) onboard the spacecraft. This paper provides an assessment of currently available ADCS strategies and performance levels. In addition to other issues, specific factors considered include: (1) use of "beaconless" or open-loop tracking versus use of a beacon on the Earth side of the link, and (2) selection of fine pointing strategy (body-fixed/spacecraft pointing, reflector pointing, or various forms of electronic beam steering). Capabilities of recent spacecraft are discussed.
Reliability Assessment for COTS Components in Space Flight Applications
NASA Technical Reports Server (NTRS)
Krishnan, G. S.; Mazzuchi, Thomas A.
2001-01-01
Systems built for space flight applications usually demand a very high degree of performance and a very high level of accuracy. Hence, design engineers are often prone to selecting state-of-the-art technologies for inclusion in their system designs. Shrinking budgets also necessitate the use of COTS (Commercial Off-The-Shelf) components, which are construed as being less expensive. The performance and accuracy requirements for space flight applications are much more stringent than those for commercial applications, and the quantity of systems designed and developed for space applications is much lower than that produced for commercial applications. With a given set of requirements, are these COTS components reliable? This paper presents a model for assessing the reliability of COTS components in space applications and the associated effect on system reliability. We illustrate the method with a real application.
Grazing Incidence Optics for X-ray Interferometry
NASA Technical Reports Server (NTRS)
Shipley, Ann; Zissa, David; Cash, Webster; Joy, Marshall
1999-01-01
Grazing incidence mirror parameters and constraints for x-ray interferometry are described. We present interferometer system tolerances and ray trace results used to define mirror surface accuracy requirements. Mirror material, surface figure, roughness, and geometry are evaluated based on analysis results. We also discuss mirror mount design constraints, finite element analysis, environmental issues, and solutions. Challenges associated with quantifying high accuracy mirror surface quality are addressed and test results are compared with theoretical predictions.
Liew, Jeffrey; Chen, Qi; Hughes, Jan N.
2009-01-01
The joint contributions of child effortful control (using inhibitory control and task accuracy as behavioral indices) and positive teacher-student relationships at first grade on reading and mathematics achievement at second grade were examined in 761 children who were predominantly from low-income and ethnic minority backgrounds and assessed to be academically at-risk at entry to first grade. Analyses accounted for clustering effects, covariates, baselines of effortful control measures, and prior levels of achievement. Even with such conservative statistical controls, interactive effects were found for task accuracy and positive teacher-student relationships on future achievement. Results suggest that task accuracy served as a protective factor so that children with high task accuracy performed well academically despite not having positive teacher-student relationships. Further, positive teacher-student relationships served as a compensatory factor so that children with low task accuracy performed just as well as those with high task accuracy if they were paired with a positive and supportive teacher. Importantly, results indicate that the influence of positive teacher-student relationships on future achievement was most pronounced for students with low effortful control on tasks that require fine motor skills, accuracy, and attention-related skills. Study results have implications for narrowing achievement disparities for academically at-risk children. PMID:20161421
Science Requirements Document for OMI-EOS. 2
NASA Technical Reports Server (NTRS)
Bhartia, P. K.; Chance, K.; Isaksen, I.; Levelt, P. F.; Boersma, F.; Brinksma, E.; Carpay, J.; vanderA, R.; deHaan, J.; Hilsenrath, E.
2000-01-01
A Dutch-Finnish scientific and industrial consortium is supplying the Ozone Monitoring Instrument (OMI) for Earth Observing System-Aura (EOS-Aura). EOS-Aura is the next NASA mission to study the Earth's atmosphere extensively, and the successor to the highly successful UARS (Upper Atmospheric Research Satellite) mission. The 'Science Requirements Document for OMI-EOS' presents an overview of the Aura and OMI mission objectives. It describes how OMI fits into the Aura mission and reviews the synergy with the other instruments onboard Aura. This leads to the Scientific Requirements for OMI (Chapter 3), which state which trace gases have to be measured, and with what accuracy, in order for OMI to meet Aura's objectives. The most important OMI data product, the ozone vertical column density, shall have better accuracy and global coverage than the predecessor instruments TOMS (Total Ozone Monitoring Spectrometer) and GOME (Global Ozone Monitoring Experiment), achieved by, among other things, a better signal-to-noise ratio, improved calibration, and a wide field of view. Moreover, to fulfill its role on Aura, OMI shall measure trace gases such as NO2, OClO, BrO, HCHO and SO2, as well as aerosols, cloud top height, and cloud coverage. Improved accuracy, better coverage, and a finer ground grid than in the past are goals for OMI. After the scientific requirements are defined, three sets of subordinate requirements are derived: the algorithm requirements, i.e., what the algorithms need in order to meet the scientific requirements; the instrument and calibration requirements, i.e., what has to be measured and how accurately in order to provide the quality of data necessary for deriving the data products; and the validation requirements, i.e., a strategy for how the OMI program will assure that its data products are valid in the atmosphere, at least to the required accuracy.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-15
...] Clinical Accuracy Requirements for Point of Care Blood Glucose Meters; Public Meeting; Request for Comments... Requirements for Point of Care Blood Glucose Meters. The purpose of the public meeting is to discuss the clinical accuracy requirements of blood glucose meters and other topics related to their use in point of...
An oscillation-free flow solver based on flux reconstruction
NASA Astrophysics Data System (ADS)
Aguerre, Horacio J.; Pairetti, Cesar I.; Venier, Cesar M.; Márquez Damián, Santiago; Nigro, Norberto M.
2018-07-01
In this paper, a segregated algorithm is proposed to suppress high-frequency oscillations in the velocity field for incompressible flows. In this context, a new velocity formula based on a reconstruction of face fluxes is defined, eliminating high-frequency errors. In analogy to the Rhie-Chow interpolation, this approach is equivalent to including a flux-based pressure gradient with a velocity diffusion in the momentum equation. In order to guarantee second-order accuracy of the numerical solver, a set of conditions is defined for the reconstruction operator. To arrive at the final formulation, a review of state-of-the-art velocity reconstruction procedures is presented, comparing them through an error analysis. A new operator is then obtained by means of a flux difference minimization satisfying the required spatial accuracy. The accuracy of the new algorithm is analyzed by performing mesh convergence studies for unsteady Navier-Stokes problems with analytical solutions. The stabilization properties of the solver are then tested in a problem where spurious numerical oscillations arise in the velocity field. The results show a remarkable performance of the proposed technique, eliminating high-frequency errors without losing accuracy.
On the Accuracy of Double Scattering Approximation for Atmospheric Polarization Computations
NASA Technical Reports Server (NTRS)
Korkin, Sergey V.; Lyapustin, Alexei I.; Marshak, Alexander L.
2011-01-01
Interpretation of multi-angle spectro-polarimetric data in remote sensing of atmospheric aerosols requires fast and accurate methods of solving the vector radiative transfer equation (VRTE). The single and double scattering approximations could provide an analytical framework for the inversion algorithms and are relatively fast; however, accuracy assessments of these approximations for aerosol atmospheres in the atmospheric window channels have been missing. This paper provides such an analysis for a vertically homogeneous aerosol atmosphere with weak and strong asymmetry of scattering. In both cases, the double scattering approximation gives a high-accuracy result (relative error approximately 0.2%) only for a low optical path of about 10^-2. As the error rapidly grows with optical thickness, a full VRTE solution is required for practical remote sensing analysis. It is shown that the scattering anisotropy is not important at low optical thicknesses for either the reflected or the transmitted polarization components of radiation.
Schueler, Sabine; Walther, Stefan; Schuetz, Georg M; Schlattmann, Peter; Dewey, Marc
2013-06-01
To evaluate the methodological quality of diagnostic accuracy studies on coronary computed tomography (CT) angiography using the QUADAS (Quality Assessment of Diagnostic Accuracy Studies included in systematic reviews) tool. Each QUADAS item was individually defined to adapt it to the special requirements of studies on coronary CT angiography. Two independent investigators analysed 118 studies using 12 QUADAS items. Meta-regression and pooled analyses were performed to identify possible effects of methodological quality items on estimates of diagnostic accuracy. The overall methodological quality of coronary CT studies was merely moderate. They fulfilled a median of 7.5 out of 12 items. Only 9 of the 118 studies fulfilled more than 75 % of possible QUADAS items. One QUADAS item ("Uninterpretable Results") showed a significant influence (P = 0.02) on estimates of diagnostic accuracy with "no fulfilment" increasing specificity from 86 to 90 %. Furthermore, pooled analysis revealed that each QUADAS item that is not fulfilled has the potential to change estimates of diagnostic accuracy. The methodological quality of studies investigating the diagnostic accuracy of non-invasive coronary CT is only moderate and was found to affect the sensitivity and specificity. An improvement is highly desirable because good methodology is crucial for adequately assessing imaging technologies. • Good methodological quality is a basic requirement in diagnostic accuracy studies. • Most coronary CT angiography studies have only been of moderate design quality. • Weak methodological quality will affect the sensitivity and specificity. • No improvement in methodological quality was observed over time. • Authors should consider the QUADAS checklist when undertaking accuracy studies.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallstrom, Jason O.; Ni, Zheng Richard
This STTR Phase I project assessed the feasibility of a new CO2 sensing system optimized for low-cost, high-accuracy, whole-building monitoring for use in demand control ventilation. The focus was on the development of a wireless networking platform and associated firmware to provide signal conditioning and conversion, fault- and disruption-tolerant networking, and multi-hop routing at building scales to avoid wiring costs. A bridge (or "gateway") to direct digital control services was also explored. Results of the project contributed to an improved understanding of a new electrochemical sensor for monitoring indoor CO2 concentrations, as well as the electronics and networking infrastructure required to deploy those sensors at building scales. New knowledge was acquired concerning the sensor's accuracy, environmental response, and failure modes, and the acquisition electronics required to achieve accuracy over a wide range of CO2 concentrations. The project demonstrated that the new sensor offers repeatable correspondence with commercial optical sensors, with supporting electronics that offer gain accuracy within 0.5%, and acquisition accuracy within 1.5% across three orders of magnitude of variation in generated current. Considering production, installation, and maintenance costs, the technology presents a foundation for achieving whole-building CO2 sensing at a price point below $0.066 / sq-ft, meeting economic feasibility criteria established by the Department of Energy. The technology developed under this award addresses obstacles on the critical path to enabling whole-building CO2 sensing and demand control ventilation in commercial retrofits, small commercial buildings, residential complexes, and other high-potential structures that have been slow to adopt these technologies. It presents an opportunity to significantly reduce energy use throughout the United States.
Emergency positioning system accuracy with infrared LEDs in high-security facilities
NASA Astrophysics Data System (ADS)
Knoch, Sierra N.; Nelson, Charles; Walker, Owens
2017-05-01
Instantaneous personnel location presents a challenge in Department of Defense applications where high levels of security restrict real-time tracking of crew members. During emergency situations, command and control requires immediate accountability of all personnel. Current radio frequency (RF) based indoor positioning systems can be unsuitable due to RF leakage and electromagnetic interference with sensitively calibrated machinery on variable platforms like ships, submarines, and high-security facilities. Infrared light provides a possible solution to this problem. This paper proposes and evaluates an indoor line-of-sight positioning system comprising IR LEDs and high-sensitivity CMOS camera receivers. In this system the movement of the LEDs is captured by the camera, uploaded, and analyzed; the highest point of power is located and plotted to create a blueprint of crewmember location. The results evaluate accuracy as a function of both wavelength and environmental conditions. Further research will evaluate the accuracy of the LED transmitter and CMOS camera receiver system. Transmissions at both 780 and 850 nm are analyzed.
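The "highest point of power" step described above (locating the brightest pixel of the LED in each camera frame) reduces to an argmax over the frame; a minimal numpy sketch, with the frame format and function name assumed for illustration:

```python
import numpy as np

def locate_led(frame):
    """Return the (row, col) of the brightest pixel in a grayscale camera
    frame, i.e. the 'highest point of power' used to plot crew position."""
    r, c = np.unravel_index(np.argmax(frame), frame.shape)
    return int(r), int(c)
```

Collecting these peak coordinates over successive frames then yields the blueprint of crewmember movement the abstract describes.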
NASA Astrophysics Data System (ADS)
Sun, Li-wei; Ye, Xin; Fang, Wei; He, Zhen-lei; Yi, Xiao-long; Wang, Yu-peng
2017-11-01
A hyperspectral imaging spectrometer has high spatial and spectral resolution. Its radiometric calibration requires knowledge of the sources used, with high spectral resolution. To satisfy this source requirement, an on-orbit radiometric calibration method is designed in this paper. The calibration chain is based on the spectral inversion accuracy of the calibration light source. We use a genetic algorithm to optimize the channel design of the transfer radiometer and consider the degradation of the halogen lamp, thus realizing high-accuracy inversion of the spectral curve over the whole working time. The experimental results show that the average root mean squared error is 0.396%, the maximum root mean squared error is 0.448%, and the relative errors at all wavelengths are within 1% in the spectral range from 500 nm to 900 nm during 100 h of operating time. The design lays a foundation for the high-accuracy calibration of imaging spectrometers.
USDA-ARS?s Scientific Manuscript database
Many societal applications of soil moisture data products require high spatial resolution and numerical accuracy. Current thermal geostationary satellite sensors (GOES Imager and GOES-R ABI) could produce 2-16km resolution soil moisture proxy data. Passive microwave satellite radiometers (e.g. AMSR...
40 CFR 75.4 - Compliance dates.
Code of Federal Regulations, 2014 CFR
2014-07-01
... relative accuracy test audit (RATA) of the high measurement scale of the monitor is successfully completed... tests required under § 75.20(c) and section 6 of appendix A to this part for the high measurement scale... that all certification tests are completed no later than the following dates (except as provided in...
40 CFR 75.4 - Compliance dates.
Code of Federal Regulations, 2012 CFR
2012-07-01
... relative accuracy test audit (RATA) of the high measurement scale of the monitor is successfully completed... tests required under § 75.20(c) and section 6 of appendix A to this part for the high measurement scale... that all certification tests are completed no later than the following dates (except as provided in...
40 CFR 75.4 - Compliance dates.
Code of Federal Regulations, 2013 CFR
2013-07-01
... relative accuracy test audit (RATA) of the high measurement scale of the monitor is successfully completed... tests required under § 75.20(c) and section 6 of appendix A to this part for the high measurement scale... that all certification tests are completed no later than the following dates (except as provided in...
Improved Short-Term Clock Prediction Method for Real-Time Positioning.
Lv, Yifei; Dai, Zhiqiang; Zhao, Qile; Yang, Sheng; Zhou, Jinning; Liu, Jingnan
2017-06-06
The application of real-time precise point positioning (PPP) requires real-time precise orbit and clock products that should be predicted within a short time to compensate for the communication delay or data gap. Unlike orbit correction, clock correction is difficult to model and predict. The widely used linear model hardly fits long periodic trends with a small data set and exhibits significant accuracy degradation in real-time prediction when a large data set is used. This study proposes a new prediction model for maintaining short-term satellite clocks to meet the high-precision requirements of real-time clocks and provide clock extrapolation without interrupting the real-time data stream. Fast Fourier transform (FFT) is used to analyze the linear prediction residuals of real-time clocks. The periodic terms obtained through FFT are adopted in the sliding window prediction to achieve a significant improvement in short-term prediction accuracy. This study also analyzes and compares the accuracy of short-term forecasts (less than 3 h) by using different length observations. Experimental results obtained from International GNSS Service (IGS) final products and our own real-time clocks show that the 3-h prediction accuracy is better than 0.85 ns. The new model can replace IGS ultra-rapid products in the application of real-time PPP. It is also found that there is a positive correlation between the prediction accuracy and the short-term stability of on-board clocks. Compared with the accuracy of the traditional linear model, the accuracy of the static PPP using the new model of the 2-h prediction clock in N, E, and U directions is improved by about 50%. Furthermore, the static PPP accuracy of 2-h clock products is better than 0.1 m. When an interruption occurs in the real-time model, the accuracy of the kinematic PPP solution using 1-h clock prediction product is better than 0.2 m, without significant accuracy degradation. 
This model is of practical significance because it solves the problems of interruption and delay in data broadcast in real-time clock estimation and can meet the requirements of real-time PPP.
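The prediction scheme described (a linear trend plus dominant periodic terms recovered from the FFT of the fit residuals) can be sketched as follows; this is an illustrative reconstruction under assumed conventions, not the authors' code:

```python
import numpy as np

def predict_clock(t, bias, t_future, n_harmonics=2):
    """Short-term clock prediction: fit a linear trend (offset + drift),
    recover the dominant periodic terms from the FFT of the residuals,
    and extrapolate both to t_future."""
    p = np.polyfit(t, bias, 1)              # linear clock model
    resid = bias - np.polyval(p, t)
    spec = np.fft.rfft(resid)
    freqs = np.fft.rfftfreq(len(t), d=t[1] - t[0])
    # Strongest non-DC spectral lines model the periodic clock behaviour
    idx = np.argsort(np.abs(spec[1:]))[::-1][:n_harmonics] + 1
    pred = np.polyval(p, t_future)
    n = len(t)
    for i in idx:
        amp = 2.0 * np.abs(spec[i]) / n
        phase = np.angle(spec[i])
        pred = pred + amp * np.cos(2 * np.pi * freqs[i] * t_future + phase)
    return pred
```

On a synthetic clock series with a drift plus one periodic term, this extrapolates noticeably better than the linear model alone, which is the qualitative behaviour the study reports.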
High-precision arithmetic in mathematical physics
Bailey, David H.; Borwein, Jonathan M.
2015-05-12
For many scientific calculations, particularly those involving empirical data, IEEE 32-bit floating-point arithmetic produces results of sufficient accuracy, while for other applications IEEE 64-bit floating-point is more appropriate. But for some very demanding applications, even higher levels of precision are often required. This article discusses the challenge of high-precision computation in the context of mathematical physics, and highlights what facilities are required to support future computation, in light of emerging developments in computer architecture.
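As a concrete illustration of the precision issue discussed (the article itself concerns facilities well beyond IEEE arithmetic, such as double-double and arbitrary-precision packages; this sketch uses Python's standard decimal module merely to show the principle):

```python
from decimal import Decimal, getcontext

# IEEE 64-bit floats cannot represent 0.1 exactly, so repeated
# addition drifts away from the exact result.
double_sum = sum([0.1] * 10)               # 0.9999999999999999, not 1.0

# A software decimal type with a higher working precision does not drift.
getcontext().prec = 50                     # 50 significant digits
decimal_sum = sum([Decimal("0.1")] * 10)   # exactly 1.0
```

The trade-off, as the article notes for high-precision arithmetic generally, is speed: software arithmetic is orders of magnitude slower than hardware floating point, so it is reserved for the demanding cases.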
State of Jet Noise Prediction-NASA Perspective
NASA Technical Reports Server (NTRS)
Bridges, James E.
2008-01-01
This presentation covers work primarily done under the Airport Noise Technical Challenge portion of the Supersonics Project in the Fundamental Aeronautics Program. To provide motivation and context, the presentation starts with a brief overview of the Airport Noise Technical Challenge. It then covers the state of NASA's jet noise prediction tools in empirical, RANS-based, and time-resolved categories. The empirical tools require seconds to provide a prediction of noise spectral directivity with an accuracy of a few dB, but only for axisymmetric configurations. The RANS-based tools are able to discern the impact of three-dimensional features, but are currently deficient in predicting noise from heated and high-speed jets, and require hours to produce their predictions. The time-resolved codes are capable of predicting resonances and other time-dependent phenomena, but are very immature, requiring months to deliver predictions of unknown accuracy and dependability. In toto, however, when one considers the progress being made, it appears that aeroacoustic prediction tools will soon approach the level of sophistication and accuracy of aerodynamic engineering tools.
High accuracy demodulation for twin-grating based sensor network with hybrid TDM/FDM
NASA Astrophysics Data System (ADS)
Ai, Fan; Sun, Qizhen; Cheng, Jianwei; Luo, Yiyang; Yan, Zhijun; Liu, Deming
2017-04-01
We demonstrate a high-accuracy demodulation platform with a tunable Fabry-Perot filter (TFF) for a twin-grating based fiber optic sensing network with hybrid TDM/FDM. The hybrid TDM/FDM scheme can improve the spatial resolution to the centimeter level, but increases the requirement for high spectral resolution. To realize demodulation of the complex twin-grating spectrum, we adopt the TFF demodulation method and compensate for environmental temperature change and nonlinear effects through calibration FBGs. The performance of the demodulation module is tested in a temperature experiment. A spectral resolution of 1 pm is realized with a precision of 2.5 pm while the environmental temperature of the TFF changes by 9.3 °C.
Innovative use of global navigation satellite systems for flight inspection
NASA Astrophysics Data System (ADS)
Kim, Eui-Ho
The International Civil Aviation Organization (ICAO) mandates flight inspection in every country to provide safety during flight operations. Among many criteria of flight inspection, airborne inspection of Instrument Landing Systems (ILS) is very important because the ILS is the primary landing guidance system worldwide. During flight inspection of the ILS, accuracy in ILS landing guidance is checked by using a Flight Inspection System (FIS). Therefore, a flight inspection system must have high accuracy in its positioning capability to detect any deviation so that accurate guidance of the ILS can be maintained. Currently, there are two Automated Flight Inspection Systems (AFIS). One is called Inertial-based AFIS, and the other one is called Differential GPS-based (DGPS-based) AFIS. The Inertial-based AFIS enables efficient flight inspection procedures, but its drawback is high cost because it requires a navigation-grade Inertial Navigation System (INS). On the other hand, the DGPS-based AFIS has relatively low cost, but flight inspection procedures require landing and setting up a reference receiver. Most countries use either one of the systems based on their own preferences. There are around 1200 ILS in the U.S., and each ILS must be inspected every 6 to 9 months. Therefore, it is important to manage the airborne inspection of the ILS in a very efficient manner. For this reason, the Federal Aviation Administration (FAA) mainly uses the Inertial-based AFIS, which has better efficiency than the DGPS-based AFIS in spite of its high cost. Obviously, the FAA spends tremendous resources on flight inspection. This thesis investigates the value of GPS and the FAA's augmentation to GPS for civil aviation called the Wide Area Augmentation System (or WAAS) for flight inspection. 
Because standard GPS or WAAS position outputs cannot meet the required accuracy for flight inspection, in this thesis, various algorithms are developed to improve the positioning ability of Flight Inspection Systems (FIS) by using GPS and WAAS in novel manners. The algorithms include Adaptive Carrier Smoothing (ACS), optimizing WAAS accuracy and stability, and reference point-based precise relative positioning for real-time and near-real-time applications. The developed systems are WAAS-aided FIS, WAAS-based FIS, and stand-alone GPS-based FIS. These systems offer both high efficiency and low cost, and they have different advantages over one another in terms of accuracy, integrity, and worldwide availability. The performance of each system is tested with experimental flight test data and shown to have accuracy that is sufficient for flight inspection and superior to the current Inertial-based AFIS.
Derivation of an artificial gene to improve classification accuracy upon gene selection.
Seo, Minseok; Oh, Sejong
2012-02-01
Classification analysis has been developed continuously since 1936. This research field has advanced as a result of the development of classifiers such as KNN, ANN, and SVM, as well as through advances in data preprocessing. Feature (gene) selection is required for very high dimensional data such as microarrays before classification work. The goal of feature selection is to choose a subset of informative features that reduces processing time and provides higher classification accuracy. In this study, we devised a method of artificial gene making (AGM) for microarray data to improve classification accuracy. Our artificial gene was derived from a whole microarray dataset and combined with a result of gene selection for classification analysis. We experimentally confirmed a clear improvement of classification accuracy after inserting the artificial gene. Our artificial gene worked well with popular feature (gene) selection algorithms and classifiers. The proposed approach can be applied to any type of high dimensional dataset.
A high order accurate finite element algorithm for high Reynolds number flow prediction
NASA Technical Reports Server (NTRS)
Baker, A. J.
1978-01-01
A Galerkin-weighted residuals formulation is employed to establish an implicit finite element solution algorithm for generally nonlinear initial-boundary value problems. Solution accuracy and convergence rate with discretization refinement are quantified in several error norms by a systematic study of numerical solutions to several nonlinear parabolic and a hyperbolic partial differential equation characteristic of the equations governing fluid flows. Solutions are generated using linear, quadratic, and cubic basis functions selectively. Richardson extrapolation is employed to generate a higher-order accurate solution to facilitate isolation of truncation error in all norms. Extension of the mathematical theory underlying accuracy and convergence concepts for linear elliptic equations is predicted for equations characteristic of laminar and turbulent fluid flows at large Reynolds numbers. The nondiagonal initial-value matrix structure introduced by the finite element theory is found to be intrinsic to improved solution accuracy and convergence. A factored Jacobian iteration algorithm is derived and evaluated, and yields a substantial reduction in both computer storage and CPU requirements while retaining solution accuracy.
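Richardson extrapolation, used here to manufacture a higher-order reference solution, can be sketched in a few lines. The example assumes a p-th order scheme evaluated at mesh sizes h and h/2; the central-difference demo is purely illustrative, not the paper's finite element solver:

```python
from math import exp

def richardson(u_h, u_h2, p):
    """Richardson extrapolation: combine solutions computed with mesh
    sizes h and h/2 from a p-th order accurate scheme to cancel the
    leading O(h^p) truncation-error term."""
    return (2**p * u_h2 - u_h) / (2**p - 1)

# demo: approximate f'(0) for f = exp with 2nd-order central differences
def dcentral(h):
    return (exp(h) - exp(-h)) / (2 * h)

# extrapolated value is ~4th-order accurate; exact answer is 1.0
better = richardson(dcentral(0.1), dcentral(0.05), p=2)
```

Comparing the extrapolated value against either single-mesh solution isolates the truncation error, which is how the study quantifies error norms.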
Towards SSVEP-based, portable, responsive Brain-Computer Interface.
Kaczmarek, Piotr; Salomon, Pawel
2015-08-01
A Brain-Computer Interface for motion control applications requires high system responsiveness and accuracy. An SSVEP interface consisting of 2-8 stimuli and a 2-channel EEG amplifier is presented in this paper. The observed stimulus is recognized from a canonical correlation calculated in a 1-second window, ensuring high interface responsiveness. A threshold classifier with hysteresis (T-H) is proposed for recognition. The results suggest that the T-H classifier significantly increases performance, yielding an accuracy of 76% while maintaining an average false positive detection rate for stimuli other than the observed one of 2-13%, depending on stimulus frequency. It was shown that the T-H classifier parameters that maximize the true positive rate can be estimated by gradient-based search, since a single maximum was observed. Moreover, preliminary results on a test group (N=4) suggest that there exists a set of T-H classifier parameters for which system accuracy is similar to that obtained with a user-trained classifier.
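A minimal sketch of a threshold classifier with hysteresis (T-H) as described: detection switches on above an upper threshold and off only below a lower one, suppressing chatter near a single decision boundary. The threshold values are placeholders, not the fitted parameters from the study:

```python
def th_classify(corr_series, t_on=0.45, t_off=0.35):
    """Threshold-with-hysteresis (T-H) detector for one SSVEP stimulus.
    corr_series: canonical correlations from successive 1 s windows.
    Returns a per-window True/False 'stimulus attended' decision:
    switches on when correlation exceeds t_on, and off only when it
    drops below t_off (thresholds are illustrative assumptions)."""
    active, out = False, []
    for r in corr_series:
        if not active and r > t_on:
            active = True
        elif active and r < t_off:
            active = False
        out.append(active)
    return out
```

With a single-threshold classifier the 0.4 sample below would toggle the decision off; the hysteresis band keeps it on until the correlation clearly collapses.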
NASA Astrophysics Data System (ADS)
Haines, P. E.; Esler, J. G.; Carver, G. D.
2014-06-01
A new methodology for the formulation of an adjoint to the transport component of the chemistry transport model TOMCAT is described and implemented in a new model, RETRO-TOM. The Eulerian backtracking method is used, allowing the forward advection scheme (Prather's second-order moments) to be efficiently exploited in the backward adjoint calculations. Prather's scheme is shown to be time symmetric, suggesting the possibility of high accuracy. To attain this accuracy, however, it is necessary to make a careful treatment of the "density inconsistency" problem inherent to offline transport models. The results are verified using a series of test experiments. These demonstrate the high accuracy of RETRO-TOM when compared with direct forward sensitivity calculations, at least for problems in which flux limiters in the advection scheme are not required. RETRO-TOM therefore combines the flexibility and stability of a "finite difference of adjoint" formulation with the accuracy of an "adjoint of finite difference" formulation.
Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi
2013-07-01
Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, electronic transformers fail more often than traditional transformers, so the calibration period must be shortened. Traditional calibration methods require that the transmission line be de-energized, which complicates operation and incurs power-off losses. This paper proposes an online calibration system that can calibrate electronic current transformers without a power outage. The key techniques in this work are a high-accuracy standard current transformer and an online operation method. A combined clamp-shape coil, based on a clamp-shape iron-core coil and a clamp-shape air-core coil, is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can verify its own accuracy, so the accuracy of the online calibration system is guaranteed. Moreover, by working at earth potential and using two insulating rods to connect the combined clamp-shape coil to the high-voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system reaches the 0.05 accuracy class.
NASA Technical Reports Server (NTRS)
Korb, C. L.; Gentry, Bruce M.
1995-01-01
The goal of the Army Research Office (ARO) Geosciences Program is to measure the three-dimensional wind field in the planetary boundary layer (PBL) over a measurement volume with a 50-meter spatial resolution and with measurement accuracies of the order of 20 cm/sec. The objective of this work is to develop and evaluate a high vertical resolution lidar experiment using the edge technique for high accuracy measurement of the atmospheric wind field to meet the ARO requirements. This experiment allows the powerful capabilities of the edge technique to be quantitatively evaluated. In the edge technique, a laser is located on the steep slope of a high resolution spectral filter. This produces large changes in measured signal for small Doppler shifts. A differential frequency technique renders the Doppler shift measurement insensitive to both laser and filter frequency jitter and drift. The measurement is also relatively insensitive to the laser spectral width for widths less than the width of the edge filter. Thus, the goal is to develop a system which will yield a substantial improvement in the state of the art of wind profile measurement in terms of both vertical resolution and accuracy and which will provide a unique capability for atmospheric wind studies.
Parallel Spectral Acquisition with an Ion Cyclotron Resonance Cell Array.
Park, Sung-Gun; Anderson, Gordon A; Navare, Arti T; Bruce, James E
2016-01-19
Mass measurement accuracy is a critical analytical figure-of-merit in most areas of mass spectrometry application. However, the time required for acquisition of high-resolution, high mass accuracy data limits many applications and is an aspect under continual pressure for development. Current efforts target implementation of higher electrostatic and magnetic fields because ion oscillatory frequencies increase linearly with field strength. As such, the time required for spectral acquisition of a given resolving power and mass accuracy decreases linearly with increasing fields. Mass spectrometer developments to include multiple high-resolution detectors that can be operated in parallel could further decrease the acquisition time by a factor of n, the number of detectors. Efforts described here resulted in development of an instrument with a set of Fourier transform ion cyclotron resonance (ICR) cells as detectors that constitute the first MS array capable of parallel high-resolution spectral acquisition. ICR cell array systems consisting of three or five cells were constructed with printed circuit boards and installed within a single superconducting magnet and vacuum system. Independent ion populations were injected and trapped within each cell in the array. Upon filling the array, all ions in all cells were simultaneously excited and ICR signals from each cell were independently amplified and recorded in parallel. Presented here are the initial results of successful parallel spectral acquisition, parallel mass spectrometry (MS) and MS/MS measurements, and parallel high-resolution acquisition with the MS array system.
Statistical Capability Study of a Helical Grinding Machine Producing Screw Rotors
NASA Astrophysics Data System (ADS)
Holmes, C. S.; Headley, M.; Hart, P. W.
2017-08-01
Screw compressors depend for their efficiency and reliability on the accuracy of the rotors, and therefore on the machinery used in their production. The machinery has evolved over more than half a century in response to customer demands for production accuracy, efficiency, and flexibility, and is now at a high level on all three criteria. Production equipment and processes must be capable of maintaining accuracy over a production run, and this must be assessed statistically under strictly controlled conditions. This paper gives numerical data from such a study of an innovative machine tool and shows that it is possible to meet the demanding statistical capability requirements.
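The statistical capability assessment referred to above is conventionally expressed through the indices Cp and Cpk. The sketch below uses the standard textbook definitions and the common 1.33 acceptance criterion as assumptions; the paper's actual tolerances and acceptance limits are not given in the abstract:

```python
from statistics import mean, stdev

def capability(samples, lsl, usl):
    """Process capability indices for a measured rotor dimension.
    Cp compares the tolerance band (usl - lsl) to the 6-sigma process
    spread; Cpk additionally penalizes an off-center process mean.
    A value >= 1.33 is a widely used (assumed) acceptance criterion."""
    mu, sigma = mean(samples), stdev(samples)  # sample std deviation
    cp = (usl - lsl) / (6 * sigma)
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)
    return cp, cpk
```

For a perfectly centered process Cp equals Cpk; any drift of the mean toward either specification limit lowers Cpk only.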
NASA Astrophysics Data System (ADS)
Kamal, Muhammad; Johansen, Kasper
2017-10-01
Effective mangrove management requires spatially explicit information in the form of a mangrove tree crown map, as a basis for ecosystem diversity studies and health assessment. Accuracy assessment is an integral part of any mapping activity, measuring the effectiveness of the classification approach. In geographic object-based image analysis (GEOBIA), assessment of the geometric accuracy (shape, symmetry, and location) of the image objects created by image segmentation is required. In this study we used an explicit area-based accuracy assessment to measure the degree of similarity between the classification results and reference data from several aspects: overall quality (OQ), user's accuracy (UA), producer's accuracy (PA), and overall accuracy (OA). We developed a rule set to delineate mangrove tree crowns using a WorldView-2 pan-sharpened image. The reference map was obtained by visual delineation of mangrove tree crown boundaries from a very high-spatial-resolution aerial photograph (7.5 cm pixel size). Ten random points, each with a 10 m radius circular buffer, were created for the area-based accuracy assessment. The resulting circular polygons were used to clip both the classified image objects and the reference map for area comparison. The area-based accuracy assessment yielded 64% OQ and 68% OA. The OQ expresses a class-related area accuracy: the area correctly classified as tree crowns was 64% of the total tree crown area. The OA of 68% is the percentage of all correctly classified classes (tree crowns and canopy gaps) relative to the total area of the entire image. Overall, the area-based accuracy assessment was simple to implement and easy to interpret, and it explicitly shows the omission and commission errors of object boundary delineation with colour-coded polygons.
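The area-based metrics can be computed directly from intersecting areas. The formulas below are the standard GEOBIA definitions assumed from context (the abstract does not spell them out); tp, fp, and fn denote crown area correctly mapped, falsely mapped as crown, and missed, in any consistent unit:

```python
def area_metrics(tp, fp, fn, total_area):
    """Area-based accuracy metrics for object-based classification.
    OQ = tp / (tp + fp + fn)   (intersection over union, per class)
    UA = tp / (tp + fp)        (commission side)
    PA = tp / (tp + fn)        (omission side)
    OA counts both classes (crowns and canopy gaps) over the image."""
    oq = tp / (tp + fp + fn)
    ua = tp / (tp + fp)
    pa = tp / (tp + fn)
    tn = total_area - (tp + fp + fn)   # correctly mapped canopy gap
    oa = (tp + tn) / total_area
    return oq, ua, pa, oa
```

The illustrative numbers in the test are not the study's data; they simply show OQ and OA diverging when commission and omission areas differ.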
Aiming Instruments On The Space Station
NASA Technical Reports Server (NTRS)
Estus, Jay M.; Laskin, Robert; Lin, Yu-Hwan
1989-01-01
Report discusses capabilities and requirements for aiming scientific instruments carried aboard proposed Space Station. Addresses two issues: whether system envisioned for pointing instruments at celestial targets offers sufficiently low jitter, high accuracy, and high stability to meet scientific requirements; and whether it can do so even in presence of many vibrations and other disturbances on Space Station. Salient conclusion of study: recommendation to develop pointing-actuator system including mechanical/fluid base isolator underneath reactionless gimbal subsystem. This kind of system offers greatest promise of high performance, cost-effectiveness, and modularity for job at hand.
McGarraugh, Geoffrey
2010-01-01
Continuous glucose monitoring (CGM) devices available in the United States are approved for use as adjuncts to self-monitoring of blood glucose (SMBG); all CGM alarms require SMBG confirmation before treatment. In this report, an analysis method is proposed to determine the CGM threshold alarm accuracy required to eliminate SMBG confirmation. The proposed method builds on the Clinical and Laboratory Standards Institute (CLSI) guideline for evaluating CGM threshold alarms using data from an in-clinic study of subjects with type 1 diabetes. The CLSI method proposes a maximum time limit of +/-30 minutes for the detection of hypo- and hyperglycemic events but does not include limits for glucose measurement accuracy. The International Standards Organization (ISO) standard for SMBG glucose measurement accuracy (ISO 15197) is +/-15 mg/dl for glucose <75 mg/dl and +/-20% for glucose ≥75 mg/dl. This standard was combined with the CLSI method to more completely characterize the accuracy of CGM alarms. Incorporating the ISO 15197 accuracy margins, FreeStyle Navigator CGM system alarms detected 70 mg/dl hypoglycemia within 30 minutes at a rate of 70.3%, with a false alarm rate of 11.4%. The device detected high glucose in the range of 140-300 mg/dl within 30 minutes at an average rate of 99.2%, with a false alarm rate of 2.1%. Self-monitoring of blood glucose confirmation is necessary for detecting and treating hypoglycemia with the FreeStyle Navigator CGM system, but at high glucose levels, SMBG confirmation adds little incremental value to CGM alarms. 2010 Diabetes Technology Society.
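The combined analysis rests on the ISO 15197 accuracy band quoted in the text, which is simple to encode. The function below is a sketch of that band alone, not the full CLSI +/-30-minute event-detection procedure:

```python
def iso15197_ok(reference, measured):
    """ISO 15197 SMBG accuracy criterion as quoted in the text:
    a reading agrees with the reference value if it is within
    +/-15 mg/dl when reference < 75 mg/dl, or within +/-20% of the
    reference otherwise.  Both values in mg/dl."""
    if reference < 75:
        return abs(measured - reference) <= 15
    return abs(measured - reference) <= 0.20 * reference
```

In the alarm analysis, a CGM alarm is scored a true detection only if a reading satisfying this band occurs within the CLSI time window around the event.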
NASA Technical Reports Server (NTRS)
Sandford, Stephen P.
2010-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) is one of four Tier 1 missions recommended by the recent NRC Decadal Survey report on Earth Science and Applications from Space (NRC, 2007). The CLARREO mission addresses the need to provide accurate, broadly acknowledged climate records that are used to enable validated long-term climate projections that become the foundation for informed decisions on mitigation and adaptation policies that address the effects of climate change on society. The CLARREO mission accomplishes this critical objective through rigorous SI traceable decadal change observations that are sensitive to many of the key uncertainties in climate radiative forcings, responses, and feedbacks that in turn drive uncertainty in current climate model projections. These same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. For the first time CLARREO will make highly accurate, global, SI-traceable decadal change observations sensitive to the most critical, but least understood, climate forcings, responses, and feedbacks. The CLARREO breakthrough is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. The required accuracy levels are determined so that climate trend signals can be detected against a background of naturally occurring variability. Climate system natural variability therefore determines what level of accuracy is overkill, and what level is critical to obtain. In this sense, the CLARREO mission requirements are considered optimal from a science value perspective. The accuracy for decadal change traceability to SI standards includes uncertainties associated with instrument calibration, satellite orbit sampling, and analysis methods. 
Unlike most space missions, the CLARREO requirements are driven not by the instantaneous accuracy of the measurements, but by accuracy in the large time/space scale averages that are key to understanding decadal changes.
Tian, Bian; Zhao, Yulong; Jiang, Zhuangde; Zhang, Ling; Liao, Nansheng; Liu, Yuanhao; Meng, Chao
2009-01-01
In this paper we describe the design and testing of a micro piezoresistive pressure sensor for a Tire Pressure Measurement System (TPMS), which offers a miniaturized structure together with high sensitivity, linearity, and accuracy. A model of the structure was established through analysis of the stress distribution of the diaphragm using the ANSYS software. Fabrication on a single silicon substrate utilizes anisotropic chemical etching and packaging through glass anodic bonding. The performance of this type of piezoresistive sensor, including size, sensitivity, and long-term stability, was investigated. The results indicate an accuracy of 0.5% FS; the design therefore meets the requirements for a TPMS, and not only has a smaller size and is simple to fabricate, but also offers high sensitivity and accuracy. PMID:22573960
High-order hydrodynamic algorithms for exascale computing
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morgan, Nathaniel Ray
Hydrodynamic algorithms are at the core of many laboratory missions ranging from simulating ICF implosions to climate modeling. The hydrodynamic algorithms commonly employed at the laboratory and in industry (1) typically lack the requisite accuracy for complex multi-material vortical flows and (2) are not well suited for exascale computing due to poor data locality and poor FLOP/memory ratios. Exascale computing requires advances in both computer science and numerical algorithms. We propose to research the second requirement and create a new high-order hydrodynamic algorithm that has superior accuracy, excellent data locality, and excellent FLOP/memory ratios. This proposal will impact a broad range of research areas including numerical theory, discrete mathematics, vorticity evolution, gas dynamics, interface instability evolution, turbulent flows, fluid dynamics, and shock-driven flows. If successful, the proposed research has the potential to radically transform simulation capabilities and help position the laboratory for computing at the exascale.
Wu, Xiaoping; Akgün, Can; Vaughan, J Thomas; Andersen, Peter; Strupp, John; Uğurbil, Kâmil; Van de Moortele, Pierre-François
2010-07-01
Parallel excitation holds strong promise to mitigate the impact of large transmit B1 (B1+) distortion at very high magnetic field. Accelerated RF pulses, however, inherently tend to require larger RF peak power, which may result in a substantial increase in the Specific Absorption Rate (SAR) in tissues, a constant concern for patient safety at very high field. In this study, we demonstrate adapted-rate RF pulse design allowing SAR reduction while preserving excitation target accuracy. Compared with other proposed implementations of adapted-rate RF pulses, our approach is compatible with any k-space trajectory, does not require an analytical expression of the gradient waveform, and can be used for large flip angle excitation. We demonstrate our method with numerical simulations based on electromagnetic modeling, and we include an experimental verification of transmit pattern accuracy on an 8-transmit-channel 9.4 T system.
MATS and LaSpec: High-precision experiments using ion traps and lasers at FAIR
NASA Astrophysics Data System (ADS)
Rodríguez, D.; Blaum, K.; Nörtershäuser, W.; Ahammed, M.; Algora, A.; Audi, G.; Äystö, J.; Beck, D.; Bender, M.; Billowes, J.; Block, M.; Böhm, C.; Bollen, G.; Brodeur, M.; Brunner, T.; Bushaw, B. A.; Cakirli, R. B.; Campbell, P.; Cano-Ott, D.; Cortés, G.; Crespo López-Urrutia, J. R.; Das, P.; Dax, A.; de, A.; Delheij, P.; Dickel, T.; Dilling, J.; Eberhardt, K.; Eliseev, S.; Ettenauer, S.; Flanagan, K. T.; Ferrer, R.; García-Ramos, J.-E.; Gartzke, E.; Geissel, H.; George, S.; Geppert, C.; Gómez-Hornillos, M. B.; Gusev, Y.; Habs, D.; Heenen, P.-H.; Heinz, S.; Herfurth, F.; Herlert, A.; Hobein, M.; Huber, G.; Huyse, M.; Jesch, C.; Jokinen, A.; Kester, O.; Ketelaer, J.; Kolhinen, V.; Koudriavtsev, I.; Kowalska, M.; Krämer, J.; Kreim, S.; Krieger, A.; Kühl, T.; Lallena, A. M.; Lapierre, A.; Le Blanc, F.; Litvinov, Y. A.; Lunney, D.; Martínez, T.; Marx, G.; Matos, M.; Minaya-Ramirez, E.; Moore, I.; Nagy, S.; Naimi, S.; Neidherr, D.; Nesterenko, D.; Neyens, G.; Novikov, Y. N.; Petrick, M.; Plaß, W. R.; Popov, A.; Quint, W.; Ray, A.; Reinhard, P.-G.; Repp, J.; Roux, C.; Rubio, B.; Sánchez, R.; Schabinger, B.; Scheidenberger, C.; Schneider, D.; Schuch, R.; Schwarz, S.; Schweikhard, L.; Seliverstov, M.; Solders, A.; Suhonen, M.; Szerypo, J.; Taín, J. L.; Thirolf, P. G.; Ullrich, J.; van Duppen, P.; Vasiliev, A.; Vorobjev, G.; Weber, C.; Wendt, K.; Winkler, M.; Yordanov, D.; Ziegler, F.
2010-05-01
Nuclear ground state properties including mass, charge radii, spins and moments can be determined by applying atomic physics techniques such as Penning-trap based mass spectrometry and laser spectroscopy. The MATS and LaSpec setups at the low-energy beamline at FAIR will allow us to extend the knowledge of these properties further into the region far from stability. The mass and its inherent connection with the nuclear binding energy is a fundamental property of a nuclide, a unique “fingerprint”. Thus, precise mass values are important for a variety of applications, ranging from nuclear-structure studies like the investigation of shell closures and the onset of deformation, tests of nuclear mass models and mass formulas, to tests of the weak interaction and of the Standard Model. The required relative accuracy ranges from 10^-5 to below 10^-8 for radionuclides, which most often have half-lives well below 1 s. Substantial progress in Penning trap mass spectrometry has made this method a prime choice for precision measurements on rare isotopes. The technique has the potential to provide high accuracy and sensitivity even for very short-lived nuclides. Furthermore, ion traps can be used for precision decay studies and offer advantages over existing methods. With MATS (Precision Measurements of very short-lived nuclei using an Advanced Trapping System for highly-charged ions) at FAIR we aim to apply several techniques to very short-lived radionuclides: High-accuracy mass measurements, in-trap conversion electron and alpha spectroscopy, and trap-assisted spectroscopy. The experimental setup of MATS is a unique combination of an electron beam ion trap for charge breeding, ion traps for beam preparation, and a high-precision Penning trap system for mass measurements and decay studies. For the mass measurements, MATS offers both a high accuracy and a high sensitivity.
A relative mass uncertainty of 10^-9 can be reached by employing highly-charged ions and a non-destructive Fourier-Transform Ion-Cyclotron-Resonance (FT-ICR) detection technique on single stored ions. This accuracy limit is important for fundamental interaction tests, but also allows for the study of the fine structure of the nuclear mass surface with unprecedented accuracy, whenever required. The use of the FT-ICR technique provides true single ion sensitivity. This is essential to access isotopes that are produced with minimum rates, which are very often the most interesting ones. Instead of pushing for highest accuracy, the high charge state of the ions can also be used to reduce the storage time of the ions, hence making measurements on even shorter-lived isotopes possible. Decay studies in ion traps will become possible with MATS. Novel spectroscopic tools for in-trap high-resolution conversion-electron and charged-particle spectroscopy from carrier-free sources will be developed, aiming e.g. at the measurements of quadrupole moments and E0 strengths. With the possibility of both high-accuracy mass measurements of the shortest-lived isotopes and decay studies, the high sensitivity and accuracy potential of MATS is ideally suited for the study of very exotic nuclides that will only be produced at the FAIR facility. Laser spectroscopy of radioactive isotopes and isomers is an efficient and model-independent approach for the determination of nuclear ground and isomeric state properties. Hyperfine structures and isotope shifts in electronic transitions exhibit readily accessible information on the nuclear spin, magnetic dipole and electric quadrupole moments as well as root-mean-square charge radii. The dependencies of the hyperfine splitting and isotope shift on the nuclear moments and mean square nuclear charge radii are well known and the theoretical framework for the extraction of nuclear parameters is well established.
These extracted parameters provide fundamental information on the structure of nuclei at the limits of stability. Vital information on both bulk and valence nuclear properties is derived, and an exceptional sensitivity to changes in nuclear deformation is achieved. Laser spectroscopy provides the only mechanism for such studies in exotic systems and uniquely facilitates these studies in a model-independent manner. The accuracy of laser-spectroscopically determined nuclear properties is very high. Requirements concerning production rates are moderate; collinear spectroscopy has been performed with production rates as low as 100 ions per second, and laser-desorption resonance ionization mass spectroscopy (combined with β-delayed neutron detection) has been achieved with rates of only a few atoms per second. This Technical Design Report describes a new Penning trap mass spectrometry setup as well as a number of complementary experimental devices for laser spectroscopy, which will provide a complete system with respect to the physics and isotopes that can be studied. Since MATS and LaSpec require high-quality low-energy beams, the two collaborations share a common beamline to stop the radioactive beam of in-flight produced isotopes and prepare them in a suitable way for transfer to the MATS and LaSpec setups, respectively.
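The Penning-trap mass measurements described rest on the cyclotron relation nu_c = qB/(2*pi*m): measuring the frequency ratio against a reference ion of well-known mass in the same field yields the unknown mass, and higher charge states raise the frequency, which is why highly-charged ions improve precision. A hedged, idealized sketch (systematic corrections and binding-energy terms are ignored):

```python
def mass_from_ratio(nu_ref, nu_ion, m_ref, q_ref=1, q_ion=1):
    """Idealized Penning-trap mass determination.
    Since nu_c = q*B/(2*pi*m), two ions in the same field B satisfy
        m_ion = (nu_ref / nu_ion) * (q_ion / q_ref) * m_ref.
    Frequencies in Hz, masses in atomic mass units, charges in e."""
    return (nu_ref / nu_ion) * (q_ion / q_ref) * m_ref
```

The test values are illustrative: halving the measured frequency at equal charge doubles the inferred mass, and doubling the charge at equal frequency does the same.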
Wang, Jingang; Gao, Can; Yang, Jie
2014-07-17
Currently available traditional electromagnetic voltage sensors fail to meet the measurement requirements of the smart grid because of low accuracy in the static and dynamic ranges and the occurrence of ferromagnetic resonance caused by overvoltage and output short circuits. This work develops a new non-contact, high-bandwidth voltage measurement system for power equipment, aimed at miniaturized, non-contact measurement for the smart grid. After analysis of the traditional D-dot voltage probe, an improved design is proposed: the differential input pattern is adopted for the circuit so that the sensor works in a self-integrating mode, and grounding is removed. To validate the structural design, circuit component parameters, and insulation characteristics, Ansoft Maxwell software is used for simulation. Moreover, the new probe was tested on a 10 kV high-voltage test platform for steady-state error and transient behavior. Experimental results confirm that the root mean square values of the measured voltage are precise and that the phase error is small. The D-dot voltage sensor not only meets the requirement of high accuracy but also exhibits satisfactory transient response, and it can meet the intelligence, miniaturization, and convenience requirements of the smart grid.
Identifying High-Rate Flows Based on Sequential Sampling
NASA Astrophysics Data System (ADS)
Zhang, Yu; Fang, Binxing; Luo, Hao
We consider the problem of fast identification of high-rate flows in backbone links with possibly millions of flows. Accurate identification of high-rate flows is important for active queue management, traffic measurement, and network security, such as detection of distributed denial-of-service attacks. It is difficult to identify high-rate flows directly in backbone links because tracking the possibly millions of flows requires correspondingly large high-speed memories. To reduce the measurement overhead, the deterministic 1-out-of-k sampling technique is adopted, which is also implemented in Cisco routers (NetFlow). Ideally, a high-rate flow identification method should have short identification time, low memory cost, and low processing cost; most importantly, it should be able to specify the identification accuracy. We develop two such methods. The first is based on the fixed sample size test (FSST), which is able to identify high-rate flows with user-specified identification accuracy. However, since FSST has to record every sampled flow during the measurement period, it is not memory efficient. We therefore propose a second, novel method based on the truncated sequential probability ratio test (TSPRT). Through sequential sampling, TSPRT is able to remove low-rate flows and identify high-rate flows at an early stage, which reduces the memory cost and identification time, respectively. According to the way the parameters in TSPRT are determined, two versions are proposed: TSPRT-M, suitable when low memory cost is preferred, and TSPRT-T, suitable when short identification time is preferred. The experimental results show that TSPRT requires less memory and identification time in identifying high-rate flows while satisfying the accuracy requirement, compared to previously proposed methods.
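The sequential test at the heart of TSPRT is Wald's sequential probability ratio test with a truncation step. The sketch below uses illustrative rate hypotheses and error targets, and omits the paper's parameter-selection and tie-breaking rules at truncation:

```python
from math import log

def sprt_high_rate(samples, p0=0.01, p1=0.02,
                   alpha=0.01, beta=0.01, max_n=5000):
    """Wald's sequential probability ratio test, the core of TSPRT.
    Decide whether a flow's share p of sampled packets is high (p1)
    or low (p0), stopping as soon as the evidence suffices.
    samples: iterable of 0/1 (1 = sampled packet belongs to the flow).
    Returns 'high', 'low', or 'undecided' at the truncation point
    max_n (all parameter values here are illustrative assumptions)."""
    a = log(beta / (1 - alpha))      # accept-p0 ('low') boundary
    b = log((1 - beta) / alpha)      # accept-p1 ('high') boundary
    llr, n = 0.0, 0                  # running log-likelihood ratio
    for x in samples:
        n += 1
        if x:
            llr += log(p1 / p0)
        else:
            llr += log((1 - p1) / (1 - p0))
        if llr >= b:
            return "high"
        if llr <= a:
            return "low"
        if n >= max_n:
            break
    return "undecided"
```

A burst of matching samples crosses the upper boundary after only a handful of observations, while ruling a flow out takes many more, which is exactly the early-removal behavior the method exploits.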
THE MIRA–TITAN UNIVERSE: PRECISION PREDICTIONS FOR DARK ENERGY SURVEYS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Heitmann, Katrin; Habib, Salman; Biswas, Rahul
2016-04-01
Large-scale simulations of cosmic structure formation play an important role in interpreting cosmological observations at high precision. The simulations must cover a parameter range beyond the standard six cosmological parameters and need to be run at high mass and force resolution. A key simulation-based task is the generation of accurate theoretical predictions for observables using a finite number of simulation runs, via the method of emulation. Using a new sampling technique, we explore an eight-dimensional parameter space including massive neutrinos and a variable equation of state of dark energy. We construct trial emulators using two surrogate models (the linear power spectrum and an approximate halo mass function). The new sampling method allows us to build precision emulators from just 26 cosmological models and to systematically increase the emulator accuracy by adding new sets of simulations in a prescribed way. Emulator fidelity can now be continuously improved as new observational data sets become available and higher accuracy is required. Finally, using one ΛCDM cosmology as an example, we study the demands imposed on a simulation campaign to achieve the required statistics and accuracy when building emulators for investigations of dark energy.
NASA Astrophysics Data System (ADS)
Sierk, B.; Caron, J.; Bézy, J.-L.; Löscher, A.; Meijer, Y.; Jurado, P.
2017-11-01
CarbonSat is a candidate mission for ESA's Earth Explorer program, currently undergoing industrial feasibility studies. The primary mission objective is the identification and quantification of regional and local sources and sinks of carbon dioxide (CO2) and methane (CH4). The mission also aims at discriminating natural and anthropogenic fluxes. The space-borne instrument will quantify the spatial distribution of CO2 and CH4 by measuring dry air column-averaged mixing ratios with high precision and accuracy (0.5 ppm for CO2 and 5 ppb for CH4). These products are inferred from spectrally resolved measurements of Earth reflectance in three spectral bands in the Near Infrared (747-773 nm) and Short Wave Infrared (1590-1675 nm and 1925-2095 nm), at high and medium spectral resolution (0.1 nm, 0.3 nm, and 0.55 nm). Three spatially co-aligned push-broom imaging spectrometers with a swath width <180 km will acquire observations at a spatial resolution of 2 x 3 km², reaching global coverage every 12 days above 40 degrees latitude (30 days at the equator). The targeted product accuracy translates into stringent radiometric, spectral and geometric requirements for the instrument. Because of the high sensitivity of the product retrieval to spurious spectral features of the instrument, special emphasis is placed on constraining relative spectral radiometric errors from polarisation sensitivity, diffuser speckles and stray light. A new requirement formulation aims to simultaneously constrain both the amplitude and the correlation of spectral features with the absorption structures of the targeted gases. The requirement performance analysis of the so-called effective spectral radiometric accuracy (ESRA) establishes a traceable link between instrumental artifacts and their impact on the level-2 products (column-averaged mixing ratios). This paper presents the derivation of system requirements from the demanding mission objectives and reports preliminary results of the feasibility studies.
Advanced Typewriting Skill Building; Business Education: 7705.31.
ERIC Educational Resources Information Center
Schull, Amy P.
Intended for the student interested in obtaining high speed and control, the course includes drills that will enable the student to prepare more complex business forms and reports with a high degree of speed and accuracy. It is a culminating basic course for vocational competency, requiring the course Advanced Clerical Typewriting (7705.11) as a…
Development of the One Centimeter Accuracy Geoid Model of Latvia for GNSS Measurements
NASA Astrophysics Data System (ADS)
Balodis, J.; Silabriedis, G.; Haritonova, D.; Kaļinka, M.; Janpaule, I.; Morozova, K.; Jumāre, I.; Mitrofanovs, I.; Zvirgzds, J.; Kaminskis, J.; Liepiņš, I.
2015-11-01
There is an urgent need for a highly accurate and reliable geoid model to enable prompt determination of normal heights from GNSS coordinate determination, owing to the high precision requirements in geodesy, building and high precision road construction. Additionally, the Latvian height system is in transition from BAS-77 (Baltic Height System) to the EVRS2007 system. The accuracy of the geoid model must approach a precision of about 1 cm in anticipation of Baltic Rail and other large projects. The use of all available and verified data sources is planned, including an enlarged set of GNSS/levelling data, gravimetric measurement data and, additionally, vertical deflection measurements over the territory of Latvia. The work is proceeding stepwise; this article discusses the issue of GNSS reference network stability. In order to achieve a geoid of ∼1 cm precision, a homogeneous high precision GNSS network is required as a basis for ellipsoidal height determination at GNSS/levelling points. Both the LatPos and the EUPOS®-Riga networks have been examined in this article.
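The height determination the abstract describes reduces to subtracting the geoid (quasigeoid) undulation from the GNSS ellipsoidal height. A minimal sketch; the numbers are illustrative, not actual Latvian model values.

```python
def normal_height(h_ellipsoidal_m, geoid_undulation_m):
    """Normal (levelled) height from a GNSS ellipsoidal height h and the
    undulation N interpolated from the geoid model: H = h - N.
    A 1 cm error in N maps directly to a 1 cm error in H, which is why
    the geoid model itself must reach ~1 cm precision."""
    return h_ellipsoidal_m - geoid_undulation_m

h = 36.742  # GNSS ellipsoidal height, m (illustrative)
N = 23.118  # undulation from the geoid model, m (illustrative)
H = normal_height(h, N)
```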
Correlation methods in optical metrology with state-of-the-art x-ray mirrors
NASA Astrophysics Data System (ADS)
Yashchuk, Valeriy V.; Centers, Gary; Gevorkyan, Gevork S.; Lacey, Ian; Smith, Brian V.
2018-01-01
The development of fully coherent free electron lasers and diffraction-limited storage ring x-ray sources has brought into focus the need for higher performing x-ray optics with unprecedented tolerances for surface slope and height errors and roughness. For example, the proposed beamlines for the future upgraded Advanced Light Source, ALS-U, require optical elements characterized by a residual slope error of <100 nrad (root-mean-square) and height error of <1-2 nm (peak-to-valley), for optics with a length of up to one meter. However, the current performance of x-ray optical fabrication and metrology generally falls short of these requirements. The major limitation comes from the lack of reliable and efficient surface metrology with the required accuracy and a reasonably high measurement rate, suitable for integration into modern deterministic surface figuring processes. The major problems of current surface metrology relate to inherent instrumental temporal drifts, systematic errors, and/or unacceptably high cost, as in the case of interferometry with computer-generated holograms as a reference. In this paper, we discuss experimental methods and approaches, based on correlation analysis, for the acquisition and processing of metrology data developed at the ALS X-Ray Optical Laboratory (XROL). Using the example of surface topography measurements of a state-of-the-art x-ray mirror performed at the XROL, we demonstrate the efficiency of combining the developed experimental correlation methods with the advanced optimal scanning strategy (AOSS) technique. This allows a significant improvement in the accuracy and capacity of the measurements via suppression of the instrumental low frequency noise, temporal drift, and systematic error in a single measurement run. Practically speaking, implementation of the AOSS technique increases the measurement accuracy, as well as the capacity of ex situ metrology, by a factor of about four.
The developed method is general and applicable to a broad spectrum of high accuracy measurements.
Zahabi, Maryam; Zhang, Wenjuan; Pankok, Carl; Lau, Mei Ying; Shirley, James; Kaber, David
2017-11-01
Many occupations require both physical exertion and cognitive task performance. Knowledge of any interaction between physical demands and the modality of cognitive task information presentation can provide a basis for optimising performance. This study examined the effect of physical exertion and modality of information presentation on pattern recognition and navigation-related information processing. Results indicated that males of equivalent high fitness, between the ages of 18 and 34, rely more on visual cues than on auditory or haptic cues for pattern recognition when exertion level is high. We found that navigation response time was shorter under low and medium exertion levels as compared to high intensity. Navigation accuracy was lower under high-level exertion compared to medium and low levels. In general, findings indicated that use of the haptic modality for cognitive task cueing decreased accuracy in pattern recognition responses. Practitioner Summary: An examination was conducted on the effect of physical exertion and information presentation modality on pattern recognition and navigation. In occupations requiring information presentation to workers who are simultaneously performing a physical task, the visual modality appears most effective under high-level exertion, while haptic cueing degrades performance.
Four years of Landsat-7 on-orbit geometric calibration and performance
Lee, D.S.; Storey, James C.; Choate, M.J.; Hayes, R.W.
2004-01-01
Unlike its predecessors, Landsat-7 has undergone regular geometric and radiometric performance monitoring and calibration since launch in April 1999. This ongoing activity, which includes issuing quarterly updates to calibration parameters, has generated a wealth of geometric performance data over the four-year on-orbit period of operations. A suite of geometric characterization (measurement and evaluation procedures) and calibration (procedures to derive improved estimates of instrument parameters) methods are employed by the Landsat-7 Image Assessment System to maintain the geometric calibration and to track specific aspects of geometric performance. These include geodetic accuracy, band-to-band registration accuracy, and image-to-image registration accuracy. These characterization and calibration activities maintain image product geometric accuracy at a high level - by monitoring performance to determine when calibration is necessary, generating new calibration parameters, and verifying that new parameters achieve desired improvements in accuracy. Landsat-7 continues to meet and exceed all geometric accuracy requirements, although aging components have begun to affect performance.
NASA Astrophysics Data System (ADS)
Stockert, Sven; Wehr, Matthias; Lohmar, Johannes; Abel, Dirk; Hirt, Gerhard
2017-10-01
In the electrical and medical industries the trend towards further miniaturization of devices is accompanied by the demand for smaller manufacturing tolerances. Such industries use a plenitude of small and narrow cold rolled metal strips with high thickness accuracy. Conventional rolling mills can hardly achieve further improvement of these tolerances. However, a model-based controller in combination with an additional piezoelectric actuator for highly dynamic roll adjustment is expected to enable the production of the required metal strips with a thickness tolerance of +/-1 µm. The model-based controller has to be based on a rolling theory that can describe the rolling process very accurately. Additionally, the required computing time has to be low in order to predict the rolling process in real time. In this work, four rolling theories from the literature with different levels of complexity are tested for their suitability for the predictive controller. The rolling theories of von Kármán, Siebel, Bland & Ford and Alexander are implemented in Matlab and afterwards transferred to the real-time computer used for the controller. The prediction accuracy of these theories is validated by rolling trials with different thickness reductions and a comparison to the calculated results. Furthermore, the required computing time on the real-time computer is measured. Adequate prediction accuracy can be achieved with the rolling theories developed by Bland & Ford and by Alexander. A comparison of the computing times of those two theories reveals that the computing time of Alexander's theory exceeds the 1 kHz sample period of the real-time computer.
Depth calibration of the Experimental Advanced Airborne Research Lidar, EAARL-B
Wright, C. Wayne; Kranenburg, Christine J.; Troche, Rodolfo J.; Mitchell, Richard W.; Nagle, David B.
2016-05-17
The resulting calibrated EAARL-B data were then analyzed and compared with the original reference dataset, the jet-ski-based dataset from the same Fort Lauderdale site, as well as the depth-accuracy requirements of the International Hydrographic Organization (IHO). We do not claim to meet all of the IHO requirements and standards. The IHO minimum depth-accuracy requirements were used as a reference only, and we do not address the other IHO requirements such as “Full Seafloor Search”. Our results show good agreement between the calibrated EAARL-B data and all reference datasets, with results that are within the IHO Order 1 (a and b) depth-accuracy requirements at the 95 percent confidence level.
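The IHO depth-accuracy requirement referenced above is depth-dependent. A sketch using the IHO S-44 maximum allowable total vertical uncertainty (95% confidence), with the published Order 1a/1b coefficients a = 0.5 m and b = 0.013; treat the coefficient values as assumptions to verify against the current S-44 edition.

```python
import math

def iho_max_tvu(depth_m, a=0.5, b=0.013):
    """Maximum allowable total vertical uncertainty (95% confidence)
    per IHO S-44 for Order 1a/1b: sqrt(a^2 + (b*d)^2)."""
    return math.sqrt(a**2 + (b * depth_m)**2)

def meets_order1(measured_depths, reference_depths):
    """True where the depth error is within the depth-dependent limit."""
    return [abs(m - r) <= iho_max_tvu(r)
            for m, r in zip(measured_depths, reference_depths)]
```

For example, at 10 m depth the allowable uncertainty is sqrt(0.5² + 0.13²) ≈ 0.52 m, so a lidar depth within half a meter of the reference at that depth passes the Order 1 check.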
A Novel Multi-Digital Camera System Based on Tilt-Shift Photography Technology
Sun, Tao; Fang, Jun-yong; Zhao, Dong; Liu, Xue; Tong, Qing-xi
2015-01-01
Multi-digital camera systems (MDCS) are constantly being improved to meet the increasing requirement for high-resolution spatial data. This study identifies the insufficiencies of traditional MDCSs and proposes a new category of MDCS based on tilt-shift photography to improve the ability of the MDCS to acquire high-accuracy spatial data. A prototype system, including two or four tilt-shift cameras (TSC, camera model: Nikon D90), is developed to validate the feasibility and correctness of the proposed MDCS. As with the cameras of traditional MDCSs, calibration is also essential for the TSC of the new MDCS. The study constructs indoor control fields and proposes appropriate calibration methods for the TSC, including a digital distortion model (DDM) approach and a two-step calibration strategy. The characteristics of the TSC, for example its edge distortion, are analyzed in detail via a calibration experiment. Finally, the ability of the new MDCS to acquire high-accuracy spatial data is verified through flight experiments. The results illustrate that the geo-position accuracy of the prototype system achieves 0.3 m at a flight height of 800 m, with a spatial resolution of 0.15 m. In addition, a comparison between the traditional (MADC II) and proposed MDCS demonstrates that the latter (0.3 m) provides spatial data with higher accuracy than the former (only 0.6 m) under the same conditions. We also argue that using higher accuracy TSCs in the new MDCS should further improve the accuracy of downstream photogrammetric products. PMID:25835187
High-accuracy mass spectrometry for fundamental studies.
Kluge, H-Jürgen
2010-01-01
Mass spectrometry for fundamental studies in metrology and atomic, nuclear and particle physics requires extreme sensitivity and efficiency as well as ultimate resolving power and accuracy. An overview will be given on the global status of high-accuracy mass spectrometry for fundamental physics and metrology. Three quite different examples of modern mass spectrometric experiments in physics are presented: (i) the retardation spectrometer KATRIN at the Forschungszentrum Karlsruhe, employing electrostatic filtering in combination with magnetic-adiabatic collimation-the biggest mass spectrometer for determining the smallest mass, i.e. the mass of the electron anti-neutrino, (ii) the Experimental Cooler-Storage Ring at GSI-a mass spectrometer of medium size, relative to other accelerators, for determining medium-heavy masses and (iii) the Penning trap facility, SHIPTRAP, at GSI-the smallest mass spectrometer for determining the heaviest masses, those of super-heavy elements. Finally, a short view into the future will address the GSI project HITRAP at GSI for fundamental studies with highly-charged ions.
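The Penning-trap measurements mentioned in (iii) reduce to comparing cyclotron frequencies of the ion of interest and a well-known reference ion in the same magnetic field. A minimal sketch of the frequency-ratio relation; the numbers below are illustrative, not measured values.

```python
def mass_from_cyclotron_ratio(nu_ref, nu_ion, m_ref, q_ref=1, q_ion=1):
    """Penning-trap mass determination: the cyclotron frequency is
    nu_c = q*B / (2*pi*m), so in the same field B,
        m_ion = (nu_ref / nu_ion) * (q_ion / q_ref) * m_ref.
    The magnetic field drops out of the ratio, which is why frequency
    ratios, not absolute frequencies, carry the accuracy."""
    return (nu_ref / nu_ion) * (q_ion / q_ref) * m_ref

# illustrative: an ion with half the reference's cyclotron frequency
# (same charge state) is twice as heavy
m = mass_from_cyclotron_ratio(nu_ref=1.0e6, nu_ion=0.5e6, m_ref=12.0)
```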
Feasibility of a GNSS-Probe for Creating Digital Maps of High Accuracy and Integrity
NASA Astrophysics Data System (ADS)
Vartziotis, Dimitris; Poulis, Alkis; Minogiannis, Alexandros; Siozos, Panayiotis; Goudas, Iraklis; Samson, Jaron; Tossaint, Michel
The “ROADSCANNER” project addresses the need for Digital Maps (DM) of increased accuracy and integrity, utilizing the latest developments in GNSS, in order to provide the required datasets for novel applications such as navigation-based Safety Applications, Advanced Driver Assistance Systems (ADAS) and Digital Automotive Simulations. The activity covered in the current paper comprises the feasibility study, preliminary tests, initial product design and development plan for an EGNOS-enabled vehicle probe. The vehicle probe will be used for generating high accuracy, high integrity and ADAS-compatible digital maps of roads, employing a multiple-passes methodology supported by sophisticated refinement algorithms. Furthermore, the vehicle probe will be equipped with pavement scanning and other data fusion equipment, in order to produce 3D road surface models compatible with the standards of road-tire simulation applications. The project was assigned to NIKI Ltd under the 1st Call for Ideas in the frame of the ESA - Greece Task Force.
a New Approach for Accuracy Improvement of Pulsed LIDAR Remote Sensing Data
NASA Astrophysics Data System (ADS)
Zhou, G.; Huang, W.; Zhou, X.; He, C.; Li, X.; Huang, Y.; Zhang, L.
2018-05-01
In remote sensing applications, the accuracy of time interval measurement is one of the most important parameters affecting the quality of pulsed lidar data. Traditional time interval measurement techniques have the disadvantages of low measurement accuracy, complicated circuit structure and large error; high-precision time interval data cannot be obtained with these traditional methods. In order to obtain higher quality remote sensing point clouds based on the time interval measurement, a higher accuracy time interval measurement method is proposed. The method is based on charging a capacitor and simultaneously sampling the change of the capacitor voltage. Firstly, an approximate model of the capacitor voltage curve over the time of flight of the pulse is fitted based on the sampled data. Then, the whole charging time is obtained from the fitting function. In this method, only a high-speed A/D sampler and a capacitor are required in a single receiving channel, and the collected data are processed directly in the main control unit. The experimental results show that the proposed method achieves an error of less than 3 ps. Compared with other methods, the proposed method improves the time interval accuracy by at least 20%.
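A simplified sketch of the capacitor-charging idea: the paper fits an approximate curve model to the samples; here, assuming a known supply voltage and RC constant, each ADC sample is inverted analytically and the per-sample start-time estimates are averaged to suppress noise. This is an illustration of the principle, not the paper's actual fitting procedure.

```python
import numpy as np

def charge_time(v_now, v_supply, rc):
    """Invert the RC charging law v(t) = Vs*(1 - exp(-t/RC)) for the
    elapsed charging time t."""
    return -rc * np.log(1.0 - v_now / v_supply)

def interval_from_samples(t_adc, v_adc, v_supply, rc):
    """Each ADC sample, taken at a known clock time t_i while the
    capacitor charges during the pulse time of flight, yields an
    estimate of when charging started; averaging those estimates
    suppresses sampler noise. Returns the elapsed charging time at
    the final sample."""
    started = t_adc - charge_time(v_adc, v_supply, rc)
    return t_adc[-1] - started.mean()
```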
NASA Astrophysics Data System (ADS)
Lyu, Jiang-Tao; Zhou, Chen
2017-12-01
Ionospheric refraction is one of the principal error sources limiting the accuracy of radar systems for space target detection. High-accuracy measurement of the ionospheric electron density along the propagation path of the radar wave is the most important procedure for ionospheric refraction correction. Traditionally, ionospheric models and ionospheric detection instruments, such as ionosondes or GPS receivers, are employed for obtaining the electron density. However, neither method is capable of satisfying the correction accuracy requirements of advanced space target radar systems. In this study, we propose a novel technique for ionospheric refraction correction based on radar dual-frequency detection. Radar target range measurements at two adjacent frequencies are utilized for calculating the electron density integral exactly along the propagation path of the radar wave, which yields an accurate ionospheric range correction. The implementation of radar dual-frequency detection is validated by a P band radar located in midlatitude China. The experimental results show that this novel technique is more accurate than traditional ionospheric model correction. The technique proposed in this study is very promising for high-accuracy radar detection and tracking of objects in geospace.
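The dual-frequency correction amounts to eliminating the total electron content (TEC) term between two group-range equations, using the standard ionospheric group-delay constant 40.3 m³/s². A minimal sketch; the frequencies and ranges used in the test are synthetic, not the paper's data.

```python
K = 40.3  # ionospheric group-delay constant, m^3/s^2

def ionosphere_corrected_range(r1, r2, f1, f2):
    """Measured group ranges at two frequencies obey
        r_i = R + K * TEC / f_i**2,
    where R is the true range and TEC is the electron density integral
    along the path. Solving the pair eliminates TEC:
    returns (corrected range R, TEC in electrons/m^2)."""
    tec = (r1 - r2) / (K * (1.0 / f1**2 - 1.0 / f2**2))
    r_true = r1 - K * tec / f1**2
    return r_true, tec
```

Because the ionospheric delay scales as 1/f², two adjacent frequencies give slightly different apparent ranges, and that difference alone fixes the TEC along the actual propagation path, with no model assumptions.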
Accuracy and safety of ward based pleural ultrasound in the Australian healthcare system.
Hammerschlag, Gary; Denton, Matthew; Wallbridge, Peter; Irving, Louis; Hew, Mark; Steinfort, Daniel
2017-04-01
Ultrasound has been shown to improve the accuracy and safety of pleural procedures. Studies to date have been performed in large, specialized units, where pleural procedures are performed by a small number of highly specialized physicians. There are no studies examining the safety and accuracy of ultrasound in the Australian healthcare system, where procedures are performed by junior doctors with a high staff turnover. We performed a retrospective review of the ultrasound database in the Respiratory Department at the Royal Melbourne Hospital to determine the accuracy of, and complications associated with, pleural procedures. A total of 357 ultrasounds were performed between October 2010 and June 2013. Accuracy of pleural procedures was 350 of 356 (98.3%). Aspiration of pleural fluid was successful in 121 of 126 (96%) of patients. Two (0.9%) patients required chest tube insertion for management of pneumothorax. There were no recorded pleural infections, haemorrhage or viscera puncture. Ward-based ultrasound for pleural procedures is safe and accurate when performed by appropriately trained and supported junior medical officers. Our findings support this model of pleural service care in the Australian healthcare system. © 2016 Asian Pacific Society of Respirology.
Freezing degrees of freedom under stress: kinematic evidence of constrained movement strategies.
Higuchi, Takahiro; Imanaka, Kuniyasu; Hatayama, Toshiteru
2002-12-01
The present study investigated the effect of psychological stress imposed on movement kinematics in a computer-simulated batting task involving a backward and forward swing of the forearm. The psychological stress was imposed by a mild electric stimulus following poor performance. Fourteen participants hit a moving ball with a horizontal lever and aimed at a distant target with as much accuracy as possible. The kinematic characteristics appearing under stress were delay of movement initiation, small amplitude of movement and low variability of spatial kinematic events between trials. These features were also found in previous studies in which the experimental task required high accuracy. The characteristic kinematics evident in the present study suggested that the movement strategies adopted by the stressed participants were similar to those that appear under high accuracy demand. Moreover, a correlation analysis between the onset times of kinematic events revealed that temporally consistent movements were reproduced under stress. Taken together, the present findings demonstrated that, under psychological stress, movement strategies tend to shift toward the production of more constrained trajectories, as is seen under conditions of high accuracy demand, even though the difficulty of the task itself does not change. Copyright 2002 Elsevier Science B.V.
Next-generation pushbroom filter radiometers for remote sensing
NASA Astrophysics Data System (ADS)
Tarde, Richard W.; Dittman, Michael G.; Kvaran, Geir E.
2012-09-01
Individual focal plane size, yield, and quality continue to improve, as does the technology required to combine these into large tiled formats. As a result, next-generation pushbroom imagers are replacing traditional scanning technologies in remote sensing applications. Pushbroom architecture has inherently better radiometric sensitivity and significantly reduced payload mass, power, and volume compared with previous-generation scanning technologies. However, the architecture creates challenges in achieving the required radiometric accuracy performance. Achieving good radiometric accuracy, including image spectral and spatial uniformity, requires creative optical design, high quality focal planes and filters, careful consideration of on-board calibration sources, and state-of-the-art ground test facilities. Ball Aerospace built the Landsat Data Continuity Mission (LDCM) next-generation Operational Land Imager (OLI) payload. Scheduled to launch in 2013, OLI provides imagery consistent with the historical Landsat spectral, spatial, radiometric, and geometric data record and completes the generational technology upgrade from the Enhanced Thematic Mapper (ETM+) whiskbroom technology to the modern pushbroom technology afforded by advanced focal planes. We explain how Ball's capabilities enabled production of the innovative next-generation OLI pushbroom filter radiometer, which meets challenging radiometric accuracy and calibration requirements. OLI will extend the multi-decadal land surface observation dataset dating back to the 1972 launch of ERTS-1 (Landsat 1).
Low cycle fatigue numerical estimation of a high pressure turbine disc for the AL-31F jet engine
NASA Astrophysics Data System (ADS)
Spodniak, Miroslav; Klimko, Marek; Hocko, Marián; Žitek, Pavel
This article describes an approximate numerical approach to estimating the low cycle fatigue of a high pressure turbine disc of the AL-31F turbofan jet engine. The numerical estimation is based on the finite element method carried out in the SolidWorks software. The low cycle fatigue assessment of the high pressure turbine disc was carried out on the basis of the dimensional, shape and material characteristics available for the particular high pressure engine turbine. The method described here enables relatively fast and economically feasible low cycle fatigue assessment of the high pressure turbine disc using commercially available software. The accuracy of the numerical low cycle fatigue estimate depends on the accuracy of the input data required for the particular investigated object.
SINA: accurate high-throughput multiple sequence alignment of ribosomal RNA genes.
Pruesse, Elmar; Peplies, Jörg; Glöckner, Frank Oliver
2012-07-15
In the analysis of homologous sequences, computation of multiple sequence alignments (MSAs) has become a bottleneck. This is especially troublesome for marker genes like the ribosomal RNA (rRNA), where millions of sequences are already publicly available and individual studies can easily produce hundreds of thousands of new sequences. Methods have been developed to cope with such numbers, but further improvements are needed to meet accuracy requirements. In this study, we present the SILVA Incremental Aligner (SINA) used to align the rRNA gene databases provided by the SILVA ribosomal RNA project. SINA uses a combination of k-mer searching and partial order alignment (POA) to maintain very high alignment accuracy while satisfying high throughput performance demands. SINA was evaluated in comparison with the commonly used high throughput MSA programs PyNAST and mothur. The three BRAliBase III benchmark MSAs could be reproduced with 99.3%, 97.6% and 96.1% accuracy. A larger benchmark MSA comprising 38,772 sequences could be reproduced with 98.9% and 99.3% accuracy using reference MSAs comprising 1000 and 5000 sequences, respectively. SINA was able to achieve higher accuracy than PyNAST and mothur in all performed benchmarks. Alignment of up to 500 sequences using the latest SILVA SSU/LSU Ref datasets as reference MSA is offered at http://www.arb-silva.de/aligner. This page also links to Linux binaries, user manual and tutorial. SINA is made available under a personal use license.
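The k-mer search stage mentioned above can be illustrated with a toy version: rank reference sequences by the number of k-mers they share with the query, then align only against the top hits. This is a generic shared-k-mer ranking for illustration, not SINA's actual implementation.

```python
def kmers(seq, k=8):
    """Set of all length-k substrings of a sequence."""
    return {seq[i:i + k] for i in range(len(seq) - k + 1)}

def best_references(query, references, k=8, top=3):
    """Rank reference sequences by shared k-mer count with the query.
    In a SINA-like pipeline this pre-selection stage picks the
    references against which the (expensive) partial order alignment
    is then performed."""
    q = kmers(query, k)
    scored = sorted(references.items(),
                    key=lambda kv: len(q & kmers(kv[1], k)),
                    reverse=True)
    return [name for name, _ in scored[:top]]
```

Because set intersection on k-mers is cheap compared with alignment, this filter is what lets an incremental aligner scale to millions of database sequences.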
Compact and Hybrid Feature Description for Building Extraction
NASA Astrophysics Data System (ADS)
Li, Z.; Liu, Y.; Hu, Y.; Li, P.; Ding, Y.
2017-05-01
Building extraction in aerial orthophotos is crucial for various applications. Currently, deep learning has been shown to be successful in addressing building extraction with high accuracy and high robustness. However, a large number of samples is required to train a classifier when using a deep learning model. In order to realize accurate and semi-interactive labelling, the performance of the feature description is crucial, as it has a significant effect on the accuracy of classification. In this paper, we put forward a compact and hybrid feature description method in order to guarantee desirable classification accuracy of the corners on building roof contours. The proposed descriptor is a hybrid description of an image patch constructed from 4 sets of binary intensity tests. Experiments show that, benefiting from binary description and making full use of color channels, this descriptor is not only computationally frugal, but also more accurate than SURF for building extraction.
High Accuracy Fuel Flowmeter, Phase 1
NASA Technical Reports Server (NTRS)
Mayer, C.; Rose, L.; Chan, A.; Chin, B.; Gregory, W.
1983-01-01
Technology related to aircraft fuel mass-flowmeters was reviewed to determine which flowmeter types could provide 0.25%-of-point accuracy over a 50-to-one range in flowrates. Three types were selected and further analyzed to determine what problem areas prevented them from meeting the high accuracy requirement, and what the further development needs were for each. A dual-turbine volumetric flowmeter with densi-viscometer and microprocessor compensation was selected for its relative simplicity and fast response time. An angular momentum type with a motor-driven, spring-restrained turbine and viscosity shroud was selected for its direct mass-flow output. This concept also employed a turbine for fast response and a microcomputer for accurate viscosity compensation. The third concept employed a vortex precession volumetric flowmeter and was selected for its unobtrusive design. Like the turbine flowmeter, it uses a densi-viscometer and microprocessor for density correction and accurate viscosity compensation.
NASA Astrophysics Data System (ADS)
Cao, C.; Lee, X.; Xu, J.
2017-12-01
Unmanned Aerial Vehicles (UAVs), or drones, have been widely used in environmental, ecological and engineering applications in recent years. These applications require assessment of positional and dimensional accuracy. In this study, positional accuracy refers to the accuracy of the latitudinal and longitudinal coordinates of locations on the mosaicked image in reference to the coordinates of the same locations measured by a Global Positioning System (GPS) in a ground survey, and dimensional accuracy refers to the length and height of a ground target. Here, we investigate the effects of the number of Ground Control Points (GCPs) and the accuracy of the GPS used to measure the GCPs on the positional and dimensional accuracy of a drone 3D model. Results show that using on-board GPS and a hand-held GPS produces a positional accuracy on the order of 2-9 meters. In comparison, using a differential GPS with high accuracy (30 cm) improves the positional accuracy of the drone model by about 40%. Increasing the number of GCPs can compensate for the uncertainty brought by GPS equipment with low accuracy. In terms of the dimensional accuracy of the drone model, even with the use of a low resolution GPS onboard the vehicle, the mean absolute errors are only 0.04 m for height and 0.10 m for length, which are well suited for some applications in precision agriculture and in land survey studies.
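The positional-accuracy figures quoted above come down to comparing model-derived coordinates against independently surveyed check points. A minimal horizontal-RMSE sketch; it assumes both coordinate sets have already been projected into a common local metric frame (e.g. UTM), which is an assumption of this illustration.

```python
import math

def horizontal_rmse(model_xy, gps_xy):
    """Positional accuracy of a mosaic or 3D model: root-mean-square
    of the horizontal offsets between model coordinates and surveyed
    check-point coordinates (same metric frame, in metres)."""
    sq = [(mx - gx) ** 2 + (my - gy) ** 2
          for (mx, my), (gx, gy) in zip(model_xy, gps_xy)]
    return math.sqrt(sum(sq) / len(sq))
```

Check points used in the RMSE must be withheld from the set of GCPs used to georeference the model, otherwise the statistic understates the true error.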
Model Predictions and Observed Performance of JWST's Cryogenic Position Metrology System
NASA Technical Reports Server (NTRS)
Lunt, Sharon R.; Rhodes, David; DiAntonio, Andrew; Boland, John; Wells, Conrad; Gigliotti, Trevis; Johanning, Gary
2016-01-01
The James Webb Space Telescope cryogenic testing requires measurement systems that both achieve a very high degree of accuracy and can function in that environment. Close-range photogrammetry was identified as meeting those criteria. Testing the capability of a close-range photogrammetric system prior to its existence is a challenging problem. Computer simulation was chosen over building a scaled mock-up to allow for increased flexibility in testing various configurations. Extensive validation work was done to ensure that the actual as-built system met accuracy and repeatability requirements. The simulated image data predicted the uncertainty in measurement to be within specification, and this prediction was borne out experimentally. Uncertainty at all levels was verified experimentally to be less than 0.1 millimeters.
Sun, Ting; Xing, Fei; You, Zheng; Wang, Xiaochu; Li, Bin
2014-03-10
The star tracker is one of the most promising attitude measurement devices, widely used in spacecraft for its high accuracy. Its performance under highly dynamic conditions, however, remains a major limitation and requires immediate attention and improvement. A star image restoration approach based on the motion degradation model of variable angular velocity is proposed in this paper. This method can overcome the energy dispersion and the decrease in signal-to-noise ratio (SNR) resulting from the smearing of the star spot, thus preventing failed extraction and degraded star centroid accuracy. Simulations and laboratory experiments are conducted to verify the proposed method. The restoration results demonstrate that the described method can recover the star spot from a long motion trail to the shape of a Gaussian distribution under the conditions of variable angular velocity and long exposure time. The energy of the star spot can be concentrated to ensure high SNR and high position accuracy. These features are crucial to the subsequent star extraction and the overall performance of the star tracker.
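The smearing described here is a motion blur, so restoration is commonly posed as deconvolution. The paper's variable-angular-velocity model is more elaborate; as a simplified stand-in, a 1-D Wiener deconvolution with a constant-velocity (boxcar) trail illustrates how a smeared spot can be re-concentrated (the spot size, trail length, and noise-to-signal ratio below are assumptions):

```python
import numpy as np

def wiener_restore(blurred, psf, nsr=1e-3):
    """Frequency-domain Wiener filter: conj(H) / (|H|^2 + NSR)."""
    H = np.fft.fft(psf, n=blurred.size)
    W = np.conj(H) / (np.abs(H)**2 + nsr)
    return np.real(np.fft.ifft(np.fft.fft(blurred) * W))

# Hypothetical 1-D star profile smeared by a 5-pixel motion trail.
x = np.arange(64)
star = np.exp(-0.5 * ((x - 32) / 1.5)**2)   # Gaussian spot, peak 1.0
psf = np.zeros(64)
psf[:5] = 1.0 / 5.0                          # normalized motion trail
blurred = np.real(np.fft.ifft(np.fft.fft(star) * np.fft.fft(psf)))
restored = wiener_restore(blurred, psf)
# The restored spot peaks near the original centre with most of its
# original amplitude, whereas the blurred spot is flattened and shifted.
```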
A new ultra-high-accuracy angle generator: current status and future direction
NASA Astrophysics Data System (ADS)
Guertin, Christian F.; Geckeler, Ralf D.
2017-09-01
The lack of an extremely high-accuracy angular positioning device in the United States has left a gap in industrial and scientific efforts conducted there, requiring certain user groups to undertake time-consuming work with overseas laboratories. Specifically, in x-ray mirror metrology the global research community is advancing the state of the art to unprecedented levels. We aim to fill this U.S. gap by developing a versatile high-accuracy angle generator as a part of the national metrology tool set for x-ray mirror metrology and other important industries. Using an established calibration technique to measure the errors of the encoder scale graduations for full-rotation rotary encoders, we implemented an optimized arrangement of sensors positioned to minimize propagation of calibration errors. Our initial feasibility research shows that, upon scaling to a full prototype and including additional calibration techniques, we can expect to achieve uncertainties at the level of 0.01 arcsec (50 nrad) or better and offer the immense advantage of a highly automatable and customizable product to the commercial market.
Evaluation of centroiding algorithm error for Nano-JASMINE
NASA Astrophysics Data System (ADS)
Hara, Takuji; Gouda, Naoteru; Yano, Taihei; Yamada, Yoshiyuki
2014-08-01
The Nano-JASMINE mission has been designed to perform absolute astrometric measurements with unprecedented accuracy; the end-of-mission parallax standard error is required to be of the order of 3 milliarcseconds for stars brighter than 7.5 mag in the zw-band (0.6 μm-1.0 μm). These requirements set a stringent constraint on the accuracy of the estimation of the location of the stellar image on the CCD for each observation. However, each stellar image has an individual shape that depends on the spectral energy distribution of the star, the CCD properties, and the optics and its associated wavefront errors. It is therefore necessary that the centroiding algorithm achieve high accuracy for any observed star. Following the approach developed for Gaia, we use the LSF fitting method as the centroiding algorithm and investigate its systematic error for Nano-JASMINE. Furthermore, we found that the algorithm can be improved by restricting the sample LSFs used in a Principal Component Analysis. We show that the centroiding algorithm error decreases after this method is adopted.
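For a spot whose core is well approximated by a Gaussian LSF, the centroid can be recovered to sub-pixel accuracy by log-parabola interpolation around the brightest pixel. This is a textbook simplification of the full LSF-fitting approach referenced above, not the mission's actual algorithm; the spot parameters below are illustrative:

```python
import numpy as np

def gaussian_centroid(pixels):
    """Sub-pixel centroid from the brightest pixel and its two
    neighbours, assuming a locally Gaussian LSF (a Gaussian is an
    exact parabola in log space, so three points determine its peak)."""
    i = int(np.argmax(pixels))
    lm, l0, lp = np.log(pixels[i-1]), np.log(pixels[i]), np.log(pixels[i+1])
    return i + 0.5 * (lm - lp) / (lm - 2.0 * l0 + lp)

# Hypothetical noiseless Gaussian spot centred at 20.3 px, sigma 1.2 px.
x = np.arange(40)
spot = np.exp(-0.5 * ((x - 20.3) / 1.2)**2)
print(gaussian_centroid(spot))  # → 20.3 (exact for a pure Gaussian)
```

In practice the real LSF varies with the stellar spectrum and optics, which is exactly why the mission fits empirical LSF samples rather than an analytic Gaussian.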
NASA Technical Reports Server (NTRS)
Chang, Chau-Lyan; Venkatachari, Balaji Shankar; Cheng, Gary
2013-01-01
With the wide availability of affordable multiple-core parallel supercomputers, next-generation numerical simulations of flow physics are being focused on unsteady computations for problems involving multiple time scales and multiple physics. These simulations require higher solution accuracy than most currently available algorithms and computational fluid dynamics codes can provide. This paper focuses on the developmental effort for high-fidelity multi-dimensional, unstructured-mesh flow solvers using the space-time conservation element, solution element (CESE) framework. Two approaches have been investigated in this research in order to provide high-accuracy, cross-cutting numerical simulations for a variety of flow regimes: 1) time-accurate local time stepping and 2) high-order CESE method. The first approach utilizes consistent numerical formulations in the space-time flux integration to preserve temporal conservation across cells with different marching time steps. This approach relieves the stringent time-step constraint associated with the smallest time step in the computational domain while preserving temporal accuracy for all cells. For flows involving multiple scales, both numerical accuracy and efficiency can be significantly enhanced. The second approach extends the current CESE solver to higher-order accuracy. Unlike other existing explicit high-order methods for unstructured meshes, the CESE framework maintains a CFL condition of one for arbitrarily high-order formulations while retaining the same compact stencil as its second-order counterpart. For large-scale unsteady computations, this feature substantially enhances numerical efficiency. Numerical formulations and validations using benchmark problems are discussed in this paper along with realistic examples.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Li, Zhen-hua; Li, Hong-bin; Zhang, Zhi
Electronic transformers are widely used in power systems because of their wide bandwidth and good transient performance. However, as an emerging technology, the failure rate of electronic transformers is higher than that of traditional transformers. As a result, the calibration period needs to be shortened. Traditional calibration methods require that the power to the transmission line be cut off, which results in complicated operation and losses from the power outage. This paper proposes an online calibration system which can calibrate electronic current transformers without cutting the power. In this work, the high-accuracy standard current transformer and the online operation method are the key techniques. Based on the clamp-shape iron-core coil and the clamp-shape air-core coil, a combined clamp-shape coil is designed as the standard current transformer. By analyzing the output characteristics of the two coils, the combined clamp-shape coil can achieve verification of the accuracy, so the accuracy of the online calibration system can be guaranteed. Moreover, by employing the earth-potential working method and using two insulating rods to connect the combined clamp-shape coil to the high-voltage bus, the operation becomes simple and safe. Tests at the China National Center for High Voltage Measurement and field experiments show that the proposed system has a high accuracy of up to the 0.05 class.
Thz and Long Path Fourier Transform Spectroscopy of Methanol; Torsionally Coupled High-K Levels
NASA Astrophysics Data System (ADS)
Pearson, John C.; Yu, Shanshan; Drouin, Brian J.; Lees, Ronald M.; Xu, Li-Hong; Billinghurst, Brant E.
2012-06-01
Methanol is nearly ubiquitous in the interstellar gas. The presence of both a-type and b-type dipole moments, asymmetry, and internal rotation assures that any small astronomical observation window will contain multiple methanol transitions. This often allows a great deal about the local physical conditions to be deduced, but only insofar as the spectra are characterized. The Herschel Space Observatory has detected numerous, clearly beam-diluted, methanol transitions with quanta surpassing J = 35 in many regions. Unfortunately, observations of methanol often display strong non-thermal behavior whose modeling requires many additional levels to be included in a radiative transfer analysis. Additionally, the intensities of many more highly excited transitions are strongly dependent on the accuracy of the wave functions used in the calculation. We report a combined Fourier Transform Infrared and THz study targeting the high-J and high-K transitions in the ground torsional manifold. Energy levels of microwave accuracy have been derived to J > 40 and K as high as 20. These levels illuminate a number of strongly resonant torsional interactions that dominate the high-K spectrum of the molecule. Comparison with levels calculated from the rho-axis-method Hamiltonian suggests that the rho-axis method should be able to model v_t = 0, 1 and probably v_t = 2 to experimental accuracy. The challenges in determining methanol wave functions to experimental accuracy will be discussed.
Lyons, Mark; Al-Nakeeb, Yahya; Hankey, Joanne; Nevill, Alan
2013-01-01
Exploring the effects of fatigue on skilled performance in tennis presents a significant challenge to the researcher with respect to ecological validity. This study examined the effects of moderate and high-intensity fatigue on groundstroke accuracy in expert and non-expert tennis players. The research also explored whether the effects of fatigue are the same regardless of gender and players’ achievement motivation characteristics. 13 expert (7 male, 6 female) and 17 non-expert (13 male, 4 female) tennis players participated in the study. Groundstroke accuracy was assessed using the modified Loughborough Tennis Skills Test. Fatigue was induced using the Loughborough Intermittent Tennis Test with moderate (70%) and high (90%) intensities set as a percentage of peak heart rate (attained during a tennis-specific maximal hitting sprint test). Ratings of perceived exertion were used as an adjunct to the monitoring of heart rate. Achievement goal indicators for each player were assessed using the 2 x 2 Achievement Goals Questionnaire for Sport in an effort to examine whether this personality characteristic provides insight into how players perform under moderate and high-intensity fatigue conditions. A series of mixed ANOVAs revealed significant fatigue effects on groundstroke accuracy regardless of expertise. The expert players, however, maintained better groundstroke accuracy across all conditions compared to the non-expert players. Nevertheless, in both groups, performance following high-intensity fatigue deteriorated compared to performance at rest and performance while moderately fatigued. Groundstroke accuracy under moderate levels of fatigue was equivalent to that at rest. Fatigue effects were also similar regardless of gender. No fatigue by expertise, or fatigue by gender, interactions were found. Fatigue effects were also equivalent regardless of players’ achievement goal indicators.
Future research is required to explore the effects of fatigue on performance in tennis using ecologically valid designs that mimic more closely the demands of match play. Key Points Groundstroke accuracy under moderate-intensity fatigue is equivalent to performance at rest. Groundstroke accuracy declines significantly in both expert (40.3% decline) and non-expert (49.6%) tennis players following high-intensity fatigue. Expert players are more consistent, hit more accurate shots and fewer out shots across all fatigue intensities. The effects of fatigue on groundstroke accuracy are the same regardless of gender and player’s achievement goal indicators. PMID:24149809
NASA Astrophysics Data System (ADS)
Aumann, T.; Bertulani, C. A.; Schindler, F.; Typel, S.
2017-12-01
An experimentally constrained equation of state of neutron-rich matter is fundamental for the physics of nuclei and the astrophysics of neutron stars, mergers, core-collapse supernova explosions, and the synthesis of heavy elements. To this end, we investigate the potential of constraining the density dependence of the symmetry energy close to saturation density through measurements of neutron-removal cross sections in high-energy nuclear collisions of 0.4 to 1 GeV/nucleon. We show that the sensitivity of the total neutron-removal cross section is high enough that the required accuracy can be reached experimentally with the recent developments of new detection techniques. We quantify two crucial points to minimize the model dependence of the approach and to reach the required accuracy: the contribution to the cross section from inelastic scattering has to be measured separately in order to allow a direct comparison of experimental cross sections to theoretical cross sections based on density functional theory and eikonal theory. The accuracy of the reaction model should be investigated and quantified by the energy and target dependence of various nucleon-removal cross sections. Our calculations explore the dependence of neutron-removal cross sections on the neutron skin of medium-heavy neutron-rich nuclei, and we demonstrate that the slope parameter L of the symmetry energy could be constrained down to ±10 MeV by such a measurement, with a 2% accuracy of the measured and calculated cross sections.
Wavefront Sensing for WFIRST with a Linear Optical Model
NASA Technical Reports Server (NTRS)
Jurling, Alden S.; Content, David A.
2012-01-01
In this paper we develop methods to use a linear optical model to capture the field dependence of wavefront aberrations in a nonlinear optimization-based phase retrieval algorithm for image-based wavefront sensing. The linear optical model is generated from a ray trace model of the system and allows the system state to be described in terms of mechanical alignment parameters rather than wavefront coefficients. This approach allows joint optimization over images taken at different field points and does not require separate convergence of phase retrieval at individual field points. Because the algorithm exploits field diversity, multiple defocused images per field point are not required for robustness. Furthermore, because it is possible to simultaneously fit images of many stars over the field, it is not necessary to use a fixed defocus to achieve adequate signal-to-noise ratio despite having images with high dynamic range. This allows high performance wavefront sensing using in-focus science data. We applied this technique in a simulation model based on the Wide Field Infrared Survey Telescope (WFIRST) Intermediate Design Reference Mission (IDRM) imager using a linear optical model with 25 field points. We demonstrate sub-thousandth-wave wavefront sensing accuracy in the presence of noise and moderate undersampling for both monochromatic and polychromatic images using 25 high-SNR target stars. Using these high-quality wavefront sensing results, we are able to generate upsampled point-spread functions (PSFs) and use them to determine PSF ellipticity to high accuracy in order to reduce the systematic impact of aberrations on the accuracy of galactic ellipticity determination for weak-lensing science.
Matrix-vector multiplication using digital partitioning for more accurate optical computing
NASA Technical Reports Server (NTRS)
Gary, C. K.
1992-01-01
Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
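The idea of digital partitioning can be sketched numerically: each input value is split into low-precision digits, one limited-dynamic-range (analog-like) matrix-vector pass is run per digit plane, and the partial products are recombined with the appropriate place-value shifts. The base and digit count below are arbitrary illustrative choices, not parameters from the paper:

```python
import numpy as np

BASE, DIGITS = 16, 4  # partition 16-bit integers into 4 hex digits

def partitioned_matvec(A, x):
    """Matrix-vector product computed digit plane by digit plane,
    mimicking how an analog processor with limited dynamic range
    can reach full digital accuracy via partitioning."""
    total = np.zeros(A.shape[0], dtype=np.int64)
    for d in range(DIGITS):
        digit = (x >> (4 * d)) & (BASE - 1)  # d-th hex digit of each entry
        partial = A @ digit                  # one low-precision pass
        total += partial << (4 * d)          # place-value recombination
    return total

A = np.array([[1, 2], [3, 4]], dtype=np.int64)
x = np.array([1000, 60000], dtype=np.int64)
print(partitioned_matvec(A, x))  # → [121000 243000], matching A @ x
```

Each pass only ever multiplies by values 0-15, so the required analog dynamic range stays small while the recombined result is exact.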
Advanced sensors and instrumentation
NASA Technical Reports Server (NTRS)
Calloway, Raymond S.; Zimmerman, Joe E.; Douglas, Kevin R.; Morrison, Rusty
1990-01-01
NASA is currently investigating the readiness of Advanced Sensors and Instrumentation to meet the requirements of new initiatives in space. The following technical objectives and technologies are briefly discussed: smart and nonintrusive sensors; onboard signal and data processing; high capacity and rate adaptive data acquisition systems; onboard computing; high capacity and rate onboard storage; efficient onboard data distribution; high capacity telemetry; ground and flight test support instrumentation; power distribution; and workstations, video/lighting. The requirements for high fidelity data (accuracy, frequency, quantity, spatial resolution) in hostile environments will continue to push the technology developers and users to extend the performance of their products and to develop new generations.
NASA Astrophysics Data System (ADS)
Kamata, S.
2017-12-01
Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection. Adopting this new definition of l, I investigate the thermal evolution of Dione and Enceladus under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a 30-km-thick global subsurface ocean. Dynamical tides may be able to account for such an amount of heat, though their ices need to be highly viscous.
Very accurate upward continuation to low heights in a test of non-Newtonian theory
NASA Technical Reports Server (NTRS)
Romaides, Anestis J.; Jekeli, Christopher
1989-01-01
Recently, gravity measurements were made on a tall, very stable television transmitting tower in order to detect a non-Newtonian gravitational force. This experiment required the upward continuation of gravity from the Earth's surface to points as high as only 600 m above ground. The upward continuation was based on a set of gravity anomalies in the vicinity of the tower whose data distribution exhibits essentially circular symmetry and appropriate radial attenuation. Two methods were applied to perform the upward continuation - least-squares solution of a local harmonic expansion and least-squares collocation. Both methods yield comparable results, and have estimated accuracies on the order of 50 microGal or better (1 microGal = 10(exp -8) m/sq s). This order of accuracy is commensurate with the tower gravity measurements (which have an estimated accuracy of 20 microGal), and enabled a definitive detection of non-Newtonian gravity. As expected, such precise upward continuations require very dense data near the tower. Less expected was the requirement of data (though sparse) up to 220 km away from the tower (in the case that only an ellipsoidal reference gravity is applied).
The outlook for precipitation measurements from space
NASA Technical Reports Server (NTRS)
Atlas, D.; Eckerman, J.; Meneghini, R.; Moore, R. K.
1981-01-01
To provide useful precipitation measurements from space, two requirements must be met: adequate spatial and temporal sampling of the storm and sufficient accuracy in the estimate of precipitation intensity. Although presently no single instrument or method completely satisfies both requirements, the visible/IR, microwave radiometer and radar methods can be used in a complementary manner. Visible/IR instruments provide good temporal sampling and rain-area depiction, but recourse must be made to microwave measurements for quantitative rainfall estimates. The inadequacy of microwave radiometer measurements over land suggests, in turn, the use of radar. Several recently developed attenuating-wavelength radar methods are discussed in terms of their accuracy, dynamic range and system implementation. Traditionally, the requirements of high resolution and adequate dynamic range led to fairly costly and complex radar systems. Some simplifications and cost reductions can be made, however, by using K-band wavelengths, which have the advantages of greater sensitivity at low rain rates and higher resolution capabilities. Several recently proposed methods of this kind are reviewed in terms of accuracy and system implementation. Finally, an adaptive-pointing multi-sensor instrument is described that would exploit certain advantages of the IR, radiometric and radar methods.
van den Akker, Jeroen; Mishne, Gilad; Zimmer, Anjali D; Zhou, Alicia Y
2018-04-17
Next generation sequencing (NGS) has become a common technology for clinical genetic tests. The quality of NGS calls varies widely and is influenced by features like reference sequence characteristics, read depth, and mapping accuracy. With recent advances in NGS technology and software tools, the majority of variants called using NGS alone are in fact accurate and reliable. However, a small subset of difficult-to-call variants that still do require orthogonal confirmation exist. For this reason, many clinical laboratories confirm NGS results using orthogonal technologies such as Sanger sequencing. Here, we report the development of a deterministic machine-learning-based model to differentiate between these two types of variant calls: those that do not require confirmation using an orthogonal technology (high confidence), and those that require additional quality testing (low confidence). This approach allows reliable NGS-based calling in a clinical setting by identifying the few important variant calls that require orthogonal confirmation. We developed and tested the model using a set of 7179 variants identified by a targeted NGS panel and re-tested by Sanger sequencing. The model incorporated several signals of sequence characteristics and call quality to determine if a variant was identified at high or low confidence. The model was tuned to eliminate false positives, defined as variants that were called by NGS but not confirmed by Sanger sequencing. The model achieved very high accuracy: 99.4% (95% confidence interval: +/- 0.03%). It categorized 92.2% (6622/7179) of the variants as high confidence, and 100% of these were confirmed to be present by Sanger sequencing. Among the variants that were categorized as low confidence, defined as NGS calls of low quality that are likely to be artifacts, 92.1% (513/557) were found to be not present by Sanger sequencing. 
This work shows that NGS data contains sufficient characteristics for a machine-learning-based model to differentiate low from high confidence variants. Additionally, it reveals the importance of incorporating site-specific features as well as variant call features in such a model.
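The abstract does not disclose the model's actual features or thresholds. As a deliberately simplified, hypothetical stand-in, a deterministic rule over a few call-quality signals illustrates the intended behaviour of demoting any doubtful call to low confidence so that false positives are eliminated:

```python
# Hypothetical feature thresholds for illustration only; the paper's
# actual model, features, and cutoffs are not given in the abstract.
def classify_call(depth, qual, in_repeat):
    """Return 'high' only when every quality signal clears its
    threshold; any doubtful signal demotes the call to 'low',
    mirroring the bias against false positives described above."""
    if depth >= 30 and qual >= 90 and not in_repeat:
        return "high"
    return "low"

calls = [
    {"depth": 120, "qual": 99, "in_repeat": False},  # clean call
    {"depth": 35,  "qual": 95, "in_repeat": True},   # repeat region
    {"depth": 12,  "qual": 97, "in_repeat": False},  # shallow coverage
]
labels = [classify_call(**c) for c in calls]
print(labels)  # → ['high', 'low', 'low']
```

Only the calls labelled 'low' would then be routed to orthogonal Sanger confirmation.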
Modeling Fan Effects on the Time Course of Associative Recognition
Schneider, Darryl W.; Anderson, John R.
2011-01-01
We investigated the time course of associative recognition using the response signal procedure, whereby a stimulus is presented and followed after a variable lag by a signal indicating that an immediate response is required. More specifically, we examined the effects of associative fan (the number of associations that an item has with other items in memory) on speed–accuracy tradeoff functions obtained in a previous response signal experiment involving briefly studied materials and in a new experiment involving well-learned materials. High fan lowered asymptotic accuracy or the rate of rise in accuracy across lags, or both. We developed an Adaptive Control of Thought–Rational (ACT-R) model for the response signal procedure to explain these effects. The model assumes that high fan results in weak associative activation that slows memory retrieval, thereby decreasing the probability that retrieval finishes in time and producing a speed–accuracy tradeoff function. The ACT-R model provided an excellent account of the data, yielding quantitative fits that were as good as those of the best descriptive model for response signal data. PMID:22197797
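Response-signal data of this kind are conventionally summarised by a shifted-exponential speed-accuracy tradeoff function, d'(t) = lambda * (1 - exp(-(t - delta)/beta)), where a higher fan can lower the asymptote (lambda) and slow the rise (larger beta). A small numerical sketch with hypothetical parameter values, not fits from the experiment:

```python
import math

def sat_dprime(t, lam, delta, beta):
    """Shifted-exponential speed-accuracy tradeoff: accuracy (d')
    rises from zero at intercept delta toward asymptote lam."""
    if t <= delta:
        return 0.0
    return lam * (1.0 - math.exp(-(t - delta) / beta))

# Hypothetical parameters: high fan lowers the asymptote and the rate.
lags = (0.5, 1.0, 3.0)  # seconds after stimulus onset
low_fan  = [sat_dprime(t, lam=3.0, delta=0.3, beta=0.4) for t in lags]
high_fan = [sat_dprime(t, lam=2.2, delta=0.3, beta=0.7) for t in lags]
print(low_fan, high_fan)  # high-fan accuracy trails at every lag
```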
Highly accurate symplectic element based on two variational principles
NASA Astrophysics Data System (ADS)
Qing, Guanghui; Tian, Jia
2018-02-01
To satisfy stability requirements on the numerical results, the mathematical theory of classical mixed methods is relatively complex. Generalized mixed methods, however, are automatically stable, and their construction is simple and straightforward. In this paper, based on the seminal idea of the generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by combining the modified Hellinger-Reissner mixed variational principle and the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the accuracy of the stress results of NCSE8 is nearly the same as that of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. NCSE8 may prove even more advantageous for fracture problems due to its better stress accuracy.
Simulation of fiber optic liquid level sensor demodulation system
NASA Astrophysics Data System (ADS)
Yi, Cong-qin; Luo, Yun; Zhang, Zheng-ping
Measuring liquid level with high accuracy is an urgent requirement. This paper focuses on the demodulation system of a fiber-optic liquid level sensor based on a Fabry-Perot cavity; the demodulation system is designed and simulated using single-chip simulation software.
Mapping High Dimensional Sparse Customer Requirements into Product Configurations
NASA Astrophysics Data System (ADS)
Jiao, Yao; Yang, Yu; Zhang, Hongshan
2017-10-01
Mapping customer requirements into product configurations is a crucial step in product design, but customers express their needs ambiguously and incompletely owing to a lack of domain knowledge. The data mining of customer requirements can therefore yield fragmentary information with high-dimensional sparsity, making the mapping procedure uncertain and complex. Expert Judgment is widely applied against that background since there are no formal requirements for systematic or structured data. However, there are concerns about the repeatability of, and bias in, Expert Judgment. In this study, an integrated method combining an adjusted Locally Linear Embedding (LLE) and a Naïve Bayes (NB) classifier is proposed to map high-dimensional sparse customer requirements to product configurations. The integrated method adjusts classical LLE to preprocess the high-dimensional sparse dataset so that it satisfies the prerequisites of NB for classifying different customer requirements into the corresponding product configurations. Compared with Expert Judgment, the adjusted LLE with NB performs much better in a real-world Tablet PC design case in both accuracy and robustness.
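A pipeline of this shape is available off the shelf (e.g. scikit-learn's LocallyLinearEmbedding followed by GaussianNB). To keep the sketch dependency-free, the snippet below substitutes a truncated-SVD projection for the adjusted LLE step (an assumption, not the paper's method) and implements a minimal Gaussian Naive Bayes on synthetic sparse requirement vectors:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sparse requirement vectors: two product configurations,
# 20 samples each, in a 50-dimensional requirement space where each
# class expresses a disjoint block of requirement features.
X = np.zeros((40, 50))
X[:20, :5] = rng.random((20, 5))     # configuration 0: features 0-4
X[20:, 5:10] = rng.random((20, 5))   # configuration 1: features 5-9
y = np.array([0] * 20 + [1] * 20)

# Stand-in for the adjusted LLE: project to 3 dims via truncated SVD.
U, s, Vt = np.linalg.svd(X, full_matrices=False)
Z = X @ Vt[:3].T

def gnb_fit_predict(Z, y, Zq):
    """Minimal Gaussian Naive Bayes: per-class feature means/variances,
    classification by maximum diagonal-Gaussian log-likelihood."""
    stats = [(Z[y == c].mean(0), Z[y == c].var(0) + 1e-9) for c in (0, 1)]
    preds = []
    for z in np.atleast_2d(Zq):
        ll = [-0.5 * np.sum(np.log(2 * np.pi * v) + (z - m)**2 / v)
              for m, v in stats]
        preds.append(int(np.argmax(ll)))
    return preds

print(gnb_fit_predict(Z, y, Z))  # nearly all training labels recovered
```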
NASA Astrophysics Data System (ADS)
Zhang, L.; Cong, Y.; Wu, C.; Bai, C.; Wu, C.
2017-08-01
The recording of architectural heritage information is the foundation of the research, conservation, management, and display of architectural heritage. What information do we record and collect, and what technology do we use for information recording? How do we determine the level of accuracy required when recording architectural information? What method do we use for information recording? These questions should be addressed in relation to the nature of the particular heritage site and the specific conditions of the conservation work. In recent years, with the rapid development of information acquisition technologies such as close-range photogrammetry, 3D laser scanning, and high-speed, high-precision aerial photogrammetry, many Chinese universities, research institutes and heritage management bureaux have purchased considerable equipment for information recording. However, a lack of understanding of both the nature of architectural heritage and the purpose for which the information is being collected has led to several problems. For example, some institutions when recording architectural heritage information aim solely at high accuracy, and some consider that advanced measuring methods must automatically replace traditional ones; information collection becomes the purpose, rather than the means, of architectural heritage conservation. Addressing these issues, this paper briefly reviews the history of architectural heritage information recording at the Summer Palace (Yihe Yuan, first built in 1750), Beijing. 
Using the recording practices at the Summer Palace during the past ten years as examples, we illustrate our achievements and lessons in recording architectural heritage information with regard to the following aspects: (buildings') ideal status desired, (buildings') current status, structural distortion analysis, display, statue restoration and thematic research. Three points will be highlighted in our discussion: 1. Understanding of the heritage is more important than the particular technology used: Architectural heritage information collection and recording are based on an understanding of the value and nature of the architectural heritage. Understanding is the purpose, whereas information collection and recording are the means. 2. Demand determines technology: Collecting and recording architectural heritage information is to serve the needs of heritage research, conservation, management and display. These different needs determine the different technologies that we use. 3. Set the level of accuracy appropriately: For information recording, high accuracy is not the key criterion; rather an appropriate level of accuracy is key. There is considerable deviation between the nominal accuracy of any instrument and the accuracy of any particular measurement.
On-the-fly Locata/inertial navigation system integration for precise maritime application
NASA Astrophysics Data System (ADS)
Jiang, Wei; Li, Yong; Rizos, Chris
2013-10-01
The application of Global Navigation Satellite System (GNSS) technology has meant that marine navigators have greater access to a more consistent and accurate positioning capability than ever before. However, GNSS may not be able to meet all emerging navigation performance requirements for maritime applications with respect to service robustness, accuracy, integrity and availability. In particular, applications in port areas (for example automated docking) and in constricted waterways, have very stringent performance requirements. Even when an integrated inertial navigation system (INS)/GNSS device is used there may still be performance gaps. GNSS signals are easily blocked or interfered with, and sometimes the satellite geometry may not be good enough for high accuracy and high reliability applications. Furthermore, the INS accuracy degrades rapidly during GNSS outages. This paper investigates the use of a portable ground-based positioning system, known as ‘Locata’, which was integrated with an INS, to provide accurate navigation in a marine environment without reliance on GNSS signals. An ‘on-the-fly’ Locata resolution algorithm that takes advantage of geometry change via an extended Kalman filter is proposed in this paper. Single-differenced Locata carrier phase measurements are utilized to achieve accurate and reliable solutions. A ‘loosely coupled’ decentralized Locata/INS integration architecture based on the Kalman filter is used for data processing. In order to evaluate the system performance, a field trial was conducted on Sydney Harbour. A Locata network consisting of eight Locata transmitters was set up near the Sydney Harbour Bridge. The experiment demonstrated that the Locata on-the-fly (OTF) algorithm is effective and can improve the system accuracy in comparison with the conventional ‘known point initialization’ (KPI) method. 
After the OTF and KPI comparison, the OTF Locata/INS integration is assessed further and its performance improvement over both stand-alone OTF Locata and stand-alone INS is shown. The Locata/INS integration can achieve centimetre-level accuracy for position solutions, and centimetre-per-second accuracy for velocity determination.
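The ‘loosely coupled’ architecture described above can be illustrated with a minimal one-dimensional Kalman filter sketch: the INS propagates position and velocity, and a Locata-derived position fix corrects them. This is an illustrative toy only; the noise values (`q`, `r`) and the measurement sequence are invented, and the paper's filter operates on the full navigation state with single-differenced carrier phases.

```python
# A 1-D constant-velocity Kalman filter: the INS propagates the state
# and a Locata position fix corrects it. All numbers are illustrative
# assumptions, not values from the paper.

def predict(x, P, dt, accel, q):
    """INS propagation step; F = [[1, dt], [0, 1]], Q ~ q*I."""
    pos, vel = x
    x = [pos + vel * dt + 0.5 * accel * dt * dt, vel + accel * dt]
    p00, p01, p10, p11 = P
    p00 = p00 + dt * (p01 + p10) + dt * dt * p11 + q
    p01 = p01 + dt * p11
    p10 = p10 + dt * p11
    p11 = p11 + q
    return x, [p00, p01, p10, p11]

def update(x, P, z, r):
    """Correction with a Locata-derived position z; H = [1, 0]."""
    p00, p01, p10, p11 = P
    s = p00 + r                       # innovation variance
    k0, k1 = p00 / s, p10 / s         # Kalman gain
    innov = z - x[0]
    x = [x[0] + k0 * innov, x[1] + k1 * innov]
    P = [(1 - k0) * p00, (1 - k0) * p01,
         p10 - k1 * p00, p11 - k1 * p01]
    return x, P

x, P = [0.0, 1.0], [1.0, 0.0, 0.0, 1.0]   # start: 1 m/s, loose covariance
for z in [1.05, 2.02, 2.98]:              # synthetic 1 Hz Locata fixes
    x, P = predict(x, P, 1.0, 0.0, 0.01)
    x, P = update(x, P, z, 0.05)
```

After three fixes the estimate tracks the roughly 1 m/s trajectory; in the decentralized architecture of the paper, such corrections would feed back to the INS mechanization.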
NASA Astrophysics Data System (ADS)
Jende, Phillipp; Nex, Francesco; Gerke, Markus; Vosselman, George
2018-07-01
Mobile Mapping (MM) solutions have become a significant extension to traditional data acquisition methods in recent years. Regardless of the sensor carried by the platform, be it a laser scanner or a camera, high-resolution data products are undermined by poor absolute localisation accuracy in urban areas due to GNSS occlusions and multipath effects. Potentially inaccurate position estimations are propagated by IMUs, which are furthermore prone to drift effects. Thus, reliable and accurate absolute positioning on a par with MM's high-quality data remains an open issue. Multiple and diverse approaches have shown promising potential to mitigate GNSS errors in urban areas, but cannot achieve decimetre accuracy, require manual effort, or have limitations with respect to costs and availability. This paper presents a fully automatic approach to support the correction of MM imaging data based on correspondences with airborne nadir images. These correspondences can be employed to correct the MM platform's orientation by an adjustment solution. Unlike MM as such, aerial images do not suffer from GNSS occlusions, and their accuracy is usually verified by employing well-established methods using ground control points. However, registration between MM and aerial images is a non-standard matching scenario, and requires several strategies to yield reliable and accurate correspondences. Scale, perspective and content strongly vary between the two image sources, so traditional feature matching methods may fail. To this end, the registration process is designed to focus on common and clearly distinguishable elements, such as road markings, manholes, or kerbstones. With a registration accuracy of about 98%, reliable tie information between MM and aerial data can be derived. Although the adjustment strategy is not covered in its entirety in this paper, accuracy results after adjustment will be presented. 
It will be shown that decimetre accuracy is achievable in a real-data test scenario.
Uprated fine guidance sensor study
NASA Technical Reports Server (NTRS)
1984-01-01
Future orbital observatories will require star trackers of extremely high precision. These sensors must maintain high pointing accuracy and pointing stability simultaneously with a low light level signal from a guide star. To establish the fine guidance sensing requirements and to evaluate candidate fine guidance sensing concepts, the Space Telescope Optical Telescope Assembly was used as the reference optical system. The requirements review was separated into three areas: Optical Telescope Assembly (OTA), fine guidance sensing, and astrometry. The results show that the detectors should be installed directly onto the focal surface presented by the optics. This would maximize throughput and minimize pointing stability error by not incorporating any additional optical elements.
Demonstration Of Ultra HI-FI (UHF) Methods
NASA Technical Reports Server (NTRS)
Dyson, Rodger W.
2004-01-01
Computational aero-acoustics (CAA) requires efficient, high-resolution simulation tools. Most current techniques utilize finite-difference approaches because high-order accuracy is considered too difficult or expensive to achieve with finite-volume or finite-element methods. However, a novel finite-volume approach (Ultra HI-FI, or UHF) that utilizes Hermite fluxes is presented, which can achieve both arbitrary accuracy and fidelity in space and time. The technique can be applied to unstructured grids with some loss of fidelity, or to multi-block structured grids for maximum efficiency and resolution. In either paradigm it is possible to resolve ultra-short waves (fewer than 2 points per wavelength). This is demonstrated here by solving the 4th CAA Workshop Category 1 Problem 1.
Designing Delta-DOR acquisition strategies to determine highly elliptical earth orbits
NASA Technical Reports Server (NTRS)
Frauenholz, R. B.
1986-01-01
Delta-DOR acquisition strategies are designed for use in determining highly elliptical earth orbits. The requirements for a possible flight demonstration are evaluated for the Charged Composition Explorer spacecraft of the Active Magnetospheric Particle Tracer Explorers. The best-performing strategy uses data spanning the view periods of two orthogonal baselines near the same orbit periapse. The rapidly changing viewing geometry yields both angular position and velocity information, but each observation may require a different reference quasar. The Delta-DOR data noise is highly dependent on acquisition geometry, varying several orders of magnitude across the baseline view periods. Strategies are selected to minimize the measurement noise predicted by a theoretical model. Although the CCE transponder is limited by S-band and a small bandwidth, the addition of Delta-DOR to coherent Doppler and range improves the one-sigma apogee position accuracy by more than an order of magnitude. Additional Delta-DOR accuracy improvements possible using dual-frequency (S/X) calibration, increased spanned bandwidth, and water-vapor radiometry are presented for comparison. With these benefits, the residual Delta-DOR data noise is primarily due to quasar position uncertainties.
Vanhoof, Chris; Corthouts, Valère; Tirez, Kristof
2004-04-01
To determine the heavy-metal content of soil samples at contaminated locations, a static and time-consuming procedure is used in most cases: soil samples are collected and analyzed in the laboratory at high quality and high analytical cost. Demand is growing, from government and consultants for a more dynamic approach, and from customers for analyses performed in the field with immediate feedback of the analytical results. Field analyses are advisable especially during the follow-up of remediation projects or when determining the sampling strategy. For this purpose, four types of ED-XRF systems, ranging from portable up to high-performance laboratory systems, have been evaluated. The evaluation criteria are based on the performance characteristics of the ED-XRF systems, such as limit of detection, accuracy, and measurement uncertainty, on the one hand, and the influence of sample pretreatment on the obtained results on the other. The study proved that the field-portable system and the bench-top system, placed in a mobile van, can be applied as field techniques, yielding semi-quantitative analytical results. A limited homogenization of the analyzed sample significantly increases the representativeness of the soil sample. The ED-XRF systems can be differentiated by their limits of detection, which are a factor of 10 to 20 higher for the portable system. The accuracy of the results and the measurement uncertainty also improved using the bench-top system. Therefore, the selection criteria for the applicability of both field systems are based on the required detection level and the required accuracy of the results.
Wang, Jingang; Gao, Can; Yang, Jie
2014-01-01
Currently available traditional electromagnetic voltage sensors fail to meet the measurement requirements of the smart grid, because of low accuracy in the static and dynamic ranges and the occurrence of ferromagnetic resonance attributed to overvoltage and output short circuit. This work develops a new non-contact, high-bandwidth voltage measurement system for power equipment, aimed at the miniaturization and non-contact measurement needs of the smart grid. After analysis of the traditional D-dot voltage probe, an improved method is proposed. For the sensor to work in a self-integrating pattern, a differential input pattern is adopted for the circuit design, and grounding is removed. To verify the structural design, circuit component parameters, and insulation characteristics, Ansoft Maxwell software is used for simulation. Moreover, the new probe was tested on a 10 kV high-voltage test platform for steady-state error and transient behavior. Experimental results confirm that the root-mean-square values of the measured voltage are precise and that the phase error is small. The D-dot voltage sensor not only meets the requirement of high accuracy but also exhibits satisfactory transient response. This sensor can meet the intelligence, miniaturization, and convenience requirements of the smart grid. PMID:25036333
Design of a real-time system of moving ship tracking on-board based on FPGA in remote sensing images
NASA Astrophysics Data System (ADS)
Yang, Tie-jun; Zhang, Shen; Zhou, Guo-qing; Jiang, Chuan-xian
2015-12-01
With the broad attention of countries to sea transportation and trade safety, the requirements on the efficiency and accuracy of moving ship tracking are becoming higher. Therefore, a systematic design of on-board moving ship tracking based on FPGA is proposed, which uses the Adaptive Inter Frame Difference (AIFD) method to track ships of different speeds. Because the Frame Difference (FD) method is simple but computationally heavy, it is well suited to parallel implementation on an FPGA. However, the Frame Intervals (FIs) of the traditional FD method are fixed, and in remote sensing images a ship looks very small (depicted by only dozens of pixels) and moves slowly. With invariant FIs, the accuracy of FD for moving ship tracking is unsatisfactory and the calculation is highly redundant. We therefore use an adaptation of FD based on adaptive extraction of key frames for moving ship tracking. An FPGA development board of the Xilinx Kintex-7 series is used for simulation. The experiments show that, compared with the traditional FD method, the proposed one achieves higher accuracy of moving ship tracking and can meet the requirement of real-time tracking at high image resolution.
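The adaptive key-frame idea above can be sketched in a few lines: for a slow ship, differencing against a fixed recent frame yields almost nothing, so the reference (key) frame is advanced only once enough pixels have changed. The frame size, pixel values and `min_change` threshold below are made-up values for illustration, not parameters from the paper.

```python
# Frame differencing with adaptive key-frame extraction, in the spirit
# of the AIFD idea: the key frame advances only when real motion is seen.

def frame_diff(a, b):
    return [[abs(pa - pb) for pa, pb in zip(ra, rb)] for ra, rb in zip(a, b)]

def changed_pixels(diff, thresh=10):
    return sum(1 for row in diff for v in row if v > thresh)

def track(frames, min_change=2):
    key, detections = frames[0], []
    for f in frames[1:]:
        n = changed_pixels(frame_diff(key, f))
        if n >= min_change:      # enough motion: detect and re-key
            detections.append(n)
            key = f              # adaptive frame interval: key advances here only
    return detections

def mk_frame(r, c):
    """4x4 dark frame with a bright 'ship' pixel at (r, c)."""
    f = [[0] * 4 for _ in range(4)]
    f[r][c] = 100
    return f

# The ship sits still for one frame, then shifts one pixel.
hits = track([mk_frame(0, 0), mk_frame(0, 0), mk_frame(0, 1)])
```

Because each pixel difference is independent, the inner loops map naturally onto the parallel fabric of an FPGA, which is the motivation given above for the hardware choice.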
Whitney, G. A.; Mansour, J. M.; Dennis, J. E.
2015-01-01
The mechanical loading environment encountered by articular cartilage in situ makes frictional-shear testing an invaluable technique for assessing engineered cartilage. Despite the important information that is gained from this testing, it remains under-utilized, especially for determining damage behavior. Currently, extensive visual inspection is required to assess damage; this is cumbersome and subjective. Tools to simplify, automate, and remove subjectivity from the analysis may increase the accessibility and usefulness of frictional-shear testing as an evaluation method. The objective of this study was to determine if the friction signal could be used to detect damage that occurred during the testing. This study proceeded in two phases: first, a simplified model of biphasic lubrication that does not require knowledge of interstitial fluid pressure was developed. In the second phase, frictional-shear tests were performed on 74 cartilage samples, and the simplified model was used to extract characteristic features from the friction signals. Using support vector machine classifiers, the extracted features were able to detect damage with a median accuracy of approximately 90%. The accuracy remained high even in samples with minimal damage. In conclusion, the friction signal acquired during frictional-shear testing can be used to detect resultant damage to a high level of accuracy. PMID:25691395
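The pipeline above, extracting characteristic features from the friction signal and classifying damage from them, can be sketched as follows. The study used support-vector-machine classifiers; a simple threshold rule stands in here so the sketch stays dependency-free, and the features, traces and threshold are illustrative assumptions rather than values from the paper.

```python
# Feature extraction from a friction-coefficient time series, followed by
# a threshold classifier standing in for the paper's SVM.

def features(mu):
    """Mean friction and the rise from the first to the last quarter."""
    n = len(mu)
    early = sum(mu[: n // 4]) / (n // 4)
    late = sum(mu[-(n // 4):]) / (n // 4)
    return sum(mu) / n, late - early

def classify(mu, rise_thresh=0.05):
    """Flag damage when friction climbs markedly during the test."""
    _, rise = features(mu)
    return "damaged" if rise > rise_thresh else "intact"

intact = [0.10 + 0.001 * i for i in range(8)]    # nearly flat trace
damaged = [0.10 + 0.02 * i for i in range(8)]    # friction climbing
```

In the study itself, such features feed a trained SVM rather than a fixed threshold, which is what removes the subjectivity of visual inspection.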
Winczewski, Lauren A; Bowen, Jeffrey D; Collins, Nancy L
2016-03-01
Growing evidence suggests that interpersonal responsiveness-feeling understood, validated, and cared for by other people-plays a key role in shaping the quality of one's social interactions and relationships. But what enables people to be interpersonally responsive to others? In the current study, we argued that responsiveness requires not only accurate understanding but also compassionate motivation. Specifically, we hypothesized that understanding another person's thoughts and feelings (empathic accuracy) would foster responsive behavior only when paired with benevolent motivation (empathic concern). To test this idea, we asked couples (N = 91) to discuss a personal or relationship stressor; we then assessed empathic accuracy, empathic concern, and responsive behavior. As predicted, when listeners' empathic concern was high, empathic accuracy facilitated responsiveness; but when empathic concern was low, empathic accuracy was unhelpful (and possibly harmful) for responsiveness. These findings provide the first evidence that cognitive and affective forms of empathy work together to facilitate responsive behavior. © The Author(s) 2016.
CO2 laser ranging systems study
NASA Technical Reports Server (NTRS)
Filippi, C. A.
1975-01-01
The conceptual design and error performance of a CO2 laser ranging system are analyzed. Ranging signal and subsystem processing alternatives are identified, and their comprehensive evaluation yields preferred candidate solutions which are analyzed to derive range and range rate error contributions. The performance results are presented in the form of extensive tables and figures which identify the ranging accuracy compromises as a function of the key system design parameters and subsystem performance indexes. The ranging errors obtained are noted to be within the high accuracy requirements of existing NASA/GSFC missions with a proper system design.
Miyashita, Kiyoteru; Oude Vrielink, Timo; Mylonas, George
2018-05-01
Endomicroscopy (EM) provides high-resolution, non-invasive histological tissue information and can be used for scanning large areas of tissue to assess cancerous and pre-cancerous lesions and their margins. However, current robotic solutions do not provide the accuracy and force sensitivity required to perform safe and accurate tissue scanning. A new surgical instrument has been developed that uses a cable-driven parallel mechanism (CDPM) to manipulate an EM probe. End-effector forces are determined by measuring the tensions in each cable. As a result, the instrument allows a contact force to be applied accurately to the tissue, while at the same time offering high-resolution and highly repeatable probe movement. Force sensitivities of 0.2 N and 0.6 N were found for the 1-DoF and 2-DoF image acquisition methods, respectively. A back-stepping technique can be used when a higher force sensitivity is required for the acquisition of high-quality tissue images. This method was successful in acquiring images on ex vivo liver tissue. The proposed approach offers the high force sensitivity and precise control that are essential for robotic EM. The technical benefits of the current system can also be used for other surgical robotic applications, including safe autonomous control, haptic feedback and palpation.
ERIC Educational Resources Information Center
Koolen, Sophieke; Vissers, Constance Th. W. M.; Hendriks, Angelique W. C. J.; Egger, Jos I. M.; Verhoeven, Ludo
2012-01-01
This study examined the hypothesis of an atypical interaction between attention and language in ASD. A dual-task experiment with three conditions was designed, in which sentences were presented that contained errors requiring attentional focus either at (a) low level, or (b) high level, or (c) both levels of language. Speed and accuracy for error…
A preliminary 6 DOF attitude and translation control system design for Starprobe
NASA Technical Reports Server (NTRS)
Mak, P.; Mettler, E.; Vijayarahgavan, A.
1981-01-01
The extreme thermal environment near perihelion and the high-accuracy gravitational science experiments impose unique design requirements on various subsystems of Starprobe. This paper examines some of these requirements and their impact on the preliminary design of a six-degree-of-freedom attitude and translational control system. Attention is given to design considerations, the baseline attitude/translational control system, system modeling, and simulation studies.
Aircraft Update Programmes. The Economical Alternative
2000-04-01
will drive the desired level of integration but cost will determine the achieved level. Paper #15 by Christian Dedieu and Eric Loffler (SAGEM SA) presented...requirements. The SAGEM SA upgrade concept allows one to match specifications ranging from basic performance enhancement, such as high-accuracy navigation, for
A Modified Magnetic Gradient Contraction Based Method for Ferromagnetic Target Localization
Wang, Chen; Zhang, Xiaojuan; Qu, Xiaodong; Pan, Xiao; Fang, Guangyou; Chen, Luzhao
2016-01-01
The Scalar Triangulation and Ranging (STAR) method, which is based upon the unique properties of magnetic gradient contraction, is a high real-time ferromagnetic target localization method. Only one measurement point is required in the STAR method and it is not sensitive to changes in sensing platform orientation. However, the localization accuracy of the method is limited by the asphericity errors and the inaccurate value of position leads to larger errors in the estimation of magnetic moment. To improve the localization accuracy, a modified STAR method is proposed. In the proposed method, the asphericity errors of the traditional STAR method are compensated with an iterative algorithm. The proposed method has a fast convergence rate which meets the requirement of high real-time localization. Simulations and field experiments have been done to evaluate the performance of the proposed method. The results indicate that target parameters estimated by the modified STAR method are more accurate than the traditional STAR method. PMID:27999322
Localization of multiple defects using the compact phased array (CPA) method
NASA Astrophysics Data System (ADS)
Senyurek, Volkan Y.; Baghalian, Amin; Tashakori, Shervin; McDaniel, Dwayne; Tansel, Ibrahim N.
2018-01-01
Array systems of transducers have found numerous applications in detection and localization of defects in structural health monitoring (SHM) of plate-like structures. Different types of array configurations and analysis algorithms have been used to improve the process of localization of defects. For accurate and reliable monitoring of large structures by array systems, a high number of actuator and sensor elements are often required. In this study, a compact phased array system consisting of only three piezoelectric elements is used in conjunction with an updated total focusing method (TFM) for localization of single and multiple defects in an aluminum plate. The accuracy of the localization process was greatly improved by including wave propagation information in TFM. Results indicated that the proposed CPA approach can locate single and multiple defects with high accuracy while decreasing the processing costs and the number of required transducers. This method can be utilized in critical applications such as aerospace structures where the use of a large number of transducers is not desirable.
NASA Astrophysics Data System (ADS)
Millard, R. C.; Seaver, G.
1990-12-01
A 27-term index of refraction algorithm for pure and sea waters has been developed using four experimental data sets of differing accuracies. They cover the range 500-700 nm in wavelength, 0-30°C in temperature, 0-40 psu in salinity, and 0-11,000 db in pressure. The index of refraction algorithm has an accuracy that varies from 0.4 ppm for pure water at atmospheric pressure to 80 ppm at high pressures, but preserves the accuracy of each original data set. This algorithm is a significant improvement over existing descriptions as it is in analytical form with a better and more carefully defined accuracy. A salinometer algorithm with the same uncertainty has been created by numerically inverting the index algorithm using the Newton-Raphson method. The 27-term index algorithm was used to generate a pseudo-data set at the sodium D wavelength (589.26 nm) from which a 6-term densitometer algorithm was constructed. The densitometer algorithm also produces salinity as an intermediate step in the salinity inversion. The densitometer residuals have a standard deviation of 0.049 kg m⁻³, which is not accurate enough for most oceanographic applications. However, the densitometer algorithm was used to explore the sensitivity of density from this technique to temperature and pressure uncertainties. To achieve a deep ocean densitometer of 0.001 kg m⁻³ accuracy would require the index of refraction to have an accuracy of 0.3 ppm, the temperature an accuracy of 0.01°C and the pressure 1 db. Our assessment of the currently available index of refraction measurements finds that only the data for fresh water at atmospheric pressure produce an algorithm satisfactory for oceanographic use (density to 0.4 ppm). The data base for the algorithm at higher pressures and various salinities requires an order of magnitude or better improvement in index measurement accuracy before the resultant density accuracy will be comparable to the currently available oceanographic algorithm.
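The salinometer inversion described above, recovering salinity from a measured index by Newton-Raphson, can be sketched with a toy forward model. The linear model `n_of_s` below is an assumption for illustration only; the paper's 27-term algorithm (with temperature and pressure as additional inputs) would take its place.

```python
# Newton-Raphson inversion of a forward model n(S): recover salinity S
# from a measured refractive index, temperature and pressure held fixed.

def n_of_s(s):
    """Toy forward model: index rises ~1.9e-4 per psu (illustrative)."""
    return 1.3330 + 1.9e-4 * s

def invert_salinity(n_meas, s0=20.0, tol=1e-10, max_iter=50):
    """Solve n_of_s(S) = n_meas for S by Newton-Raphson iteration."""
    s = s0
    for _ in range(max_iter):
        f = n_of_s(s) - n_meas
        h = 1e-4                                  # numerical-derivative step
        dn_ds = (n_of_s(s + h) - n_of_s(s - h)) / (2 * h)
        step = f / dn_ds
        s -= step
        if abs(step) < tol:
            break
    return s

recovered = invert_salinity(n_of_s(35.0))   # invert a synthetic reading
```

Because the index varies smoothly and monotonically with salinity over the oceanographic range, the iteration converges in a handful of steps from any reasonable starting guess.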
Duprez, Frédéric; Michotte, Jean Bernard; Cuvelier, Gregory; Legrand, Alexandre; Mashayekhi, Sharam; Reychler, Gregory
2018-03-01
Oxygen cylinders are widely used in both hospital and prehospital care. Excessive or inappropriate FIO2 may be critical for patients with hypercapnia or hypoxia. Moreover, over-oxygenation could be deleterious in ischemic disorders. Supplemental oxygen from oxygen cylinders should therefore be delivered accurately. The aim of this study was to assess the accuracy of oxygen flows from oxygen cylinders in hospital and prehospital care. A prospective trial was conducted to evaluate the accuracy of delivered oxygen flows (2, 4, 6, 9 and 12 L/min) for oxygen cylinders ready for use in different hospital departments. Delivered flows were analyzed randomly using a calibrated thermal mass flow meter. Two types of oxygen cylinder were evaluated: 78 with a single-stage regulator and 70 with a dual-stage regulator. Delivered flows were compared to the required oxygen flow, and the residual pressure value of each oxygen cylinder was considered. A coefficient of variation was calculated to compare the variability of the delivered flow between the two types of oxygen cylinder. The median values of delivered flows were all ≥ 100% of the required flow for single-stage regulators (range 100-109%) and < 100% of the required flow for dual-stage regulators (range 95-97%); the delivered flow for single stage was significantly higher than for dual stage (P = .01). At low flow, the dispersion of the measurements for single-stage regulators was higher than at high oxygen flow. Delivered flow differences were also found between low and high residual pressures, but only with single-stage regulators (P = .02). The residual pressure for all oxygen cylinders (n = 148) ranged from 73 to 2,900 pounds per square inch, and no significant difference was observed between the 2 types (P = .86). The calculated coefficient of variation ranged from 7% (±1%) for dual stage to 8% (±2%) for single stage. 
This study shows good accuracy of oxygen flow delivered via oxygen cylinders. The accuracy was higher with dual-stage regulators. Single-stage regulators were also accurate, although slightly less so at low flow. Moreover, with single-stage regulators, the median delivered flow decreased as residual pressure decreased. Copyright © 2018 by Daedalus Enterprises.
Qian, Shinan
2011-01-01
Nanoradian surface profilers (SPs) are required for state-of-the-art synchrotron radiation optics and high-precision optical measurements. Nano-radian accuracy must be maintained in the large-angle test range. However, the beams' notable lateral motions during tests of most operating profilers, combined with the insufficiencies of their optical components, generate significant errors of ∼1 μrad rms in the measurements. The solution to nano-radian accuracy for the new generation of surface profilers in this range is to apply a scanning optical head combined with a non-tilted reference beam. I describe here my comparison of different scan modes and discuss some test results.
Spacecraft attitude determination accuracy from mission experience
NASA Technical Reports Server (NTRS)
Brasoveanu, D.; Hashmall, J.
1994-01-01
This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable with a given sensor complement, based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of mission report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin-stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHSTs) and a fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included, along with plots summarizing the attitude accuracy attained using various sensor complements.
A Kinematic Calibration Process for Flight Robotic Arms
NASA Technical Reports Server (NTRS)
Collins, Curtis L.; Robinson, Matthew L.
2013-01-01
The Mars Science Laboratory (MSL) robotic arm is ten times more massive than any Mars robotic arm before it, yet with similar accuracy and repeatability positioning requirements. In order to assess and validate these requirements, a higher-fidelity model and calibration processes were needed. Kinematic calibration of robotic arms is a common and necessary process to ensure good positioning performance. Most methodologies assume a rigid arm, high-accuracy data collection, and some kind of optimization of kinematic parameters. A new detailed kinematic and deflection model of the MSL robotic arm was formulated in the design phase and used to update the initial positioning and orientation accuracy and repeatability requirements. This model included a higher-fidelity link stiffness matrix representation, as well as a link level thermal expansion model. In addition, it included an actuator backlash model. Analytical results highlighted the sensitivity of the arm accuracy to its joint initialization methodology. Because of this, a new technique for initializing the arm joint encoders through hardstop calibration was developed. This involved selecting arm configurations to use in Earth-based hardstop calibration that had corresponding configurations on Mars with the same joint torque to ensure repeatability in the different gravity environment. The process used to collect calibration data for the arm included the use of multiple weight stand-in turrets with enough metrology targets to reconstruct the full six-degree-of-freedom location of the rover and tool frames. The follow-on data processing of the metrology data utilized a standard differential formulation and linear parameter optimization technique.
An experimental apparatus for diffraction-limited soft x-ray nano-focusing
NASA Astrophysics Data System (ADS)
Merthe, Daniel J.; Goldberg, Kenneth A.; Yashchuk, Valeriy V.; Yuan, Sheng; McKinney, Wayne R.; Celestre, Richard; Mochi, Iacopo; Macdougall, James; Morrison, Gregory Y.; Rakawa, Senajith B.; Anderson, Erik; Smith, Brian V.; Domning, Edward E.; Warwick, Tony; Padmore, Howard
2011-09-01
Realizing the experimental potential of high-brightness, next generation synchrotron and free-electron laser light sources requires the development of reflecting x-ray optics capable of wavefront preservation and high-resolution nano-focusing. At the Advanced Light Source (ALS) beamline 5.3.1, we are developing broadly applicable, high-accuracy, in situ, at-wavelength wavefront measurement techniques to surpass 100-nrad slope measurement accuracy for diffraction-limited Kirkpatrick-Baez (KB) mirrors. The at-wavelength methodology we are developing relies on a series of wavefront-sensing tests with increasing accuracy and sensitivity, including scanning-slit Hartmann tests, grating-based lateral shearing interferometry, and quantitative knife-edge testing. We describe the original experimental techniques and alignment methodology that have enabled us to optimally set a bendable KB mirror to achieve a focused, FWHM spot size of 150 nm, with 1 nm (1.24 keV) photons at 3.7 mrad numerical aperture. The predictions of wavefront measurement are confirmed by the knife-edge testing. The side-profiled elliptically bent mirror used in these one-dimensional focusing experiments was originally designed for a much different glancing angle and conjugate distances. Visible-light long-trace profilometry was used to pre-align the mirror before installation at the beamline. This work demonstrates that high-accuracy, at-wavelength wavefront-slope feedback can be used to optimize the pitch, roll, and mirror-bending forces in situ, using procedures that are deterministic and repeatable.
He, Xiyang; Zhang, Xiaohong; Tang, Long; Liu, Wanke
2015-12-22
Many applications, such as marine navigation and land vehicle location, require real-time precise positioning under medium- or long-baseline conditions. In this contribution, we develop a model of real-time kinematic decimeter-level positioning with BeiDou Navigation Satellite System (BDS) triple-frequency signals over medium distances. The ambiguities of two extra-wide-lane (EWL) combinations are fixed first, and then a wide-lane (WL) combination is formed from the two EWL combinations for positioning. Theoretical and empirical analyses are given of the ambiguity fixing rate and the positioning accuracy of the presented method. The results indicate that the ambiguity fixing rate can be more than 98% when using BDS medium-baseline observations, which is much higher than that of the dual-frequency Hatch-Melbourne-Wübbena (HMW) method. As for positioning accuracy, decimeter-level accuracy can be achieved with this method, which is comparable to that of the carrier-smoothed code differential positioning method. A signal interruption simulation experiment indicates that the proposed method can realize fast high-precision positioning, whereas the carrier-smoothed code differential positioning method needs several hundred seconds to obtain high-precision results. We conclude that a relatively high accuracy and high fixing rate can be achieved with the single-epoch triple-frequency WL method, a significant advantage over the traditional carrier-smoothed code differential positioning method.
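The reason EWL ambiguities can be fixed in a single epoch is their long effective wavelength, which a short sketch makes concrete. The nominal BDS B1I/B2I/B3I frequencies below are public values; the phase and code observables in the demo are synthetic, and the single-difference bookkeeping of the paper is omitted.

```python
# Extra-wide-lane (EWL) wavelength and single-epoch ambiguity rounding.
# Differencing two BDS carriers yields a combination whose wavelength is
# several metres, so its integer ambiguity can be fixed by rounding
# against a code-range initial value.

C = 299_792_458.0                                  # speed of light, m/s
F1, F2, F3 = 1561.098e6, 1207.140e6, 1268.520e6    # BDS B1I, B2I, B3I (Hz)

def combo_wavelength(fa, fb):
    """Wavelength of the (fa - fb) carrier-phase combination."""
    return C / abs(fa - fb)

lam_ewl = combo_wavelength(F3, F2)                 # B3 - B2 EWL, ~4.9 m

def fix_ambiguity(phase_diff_cyc, code_range_m, lam):
    """Round the float EWL ambiguity, using the code range as initial value."""
    float_amb = phase_diff_cyc - code_range_m / lam
    return round(float_amb)

# Synthetic single epoch: true ambiguity 7 cycles, 0.1 cycle of noise.
phase_diff = 20000.0 / lam_ewl + 7 + 0.1
n_fixed = fix_ambiguity(phase_diff, 20000.0, lam_ewl)
```

With a wavelength near 4.9 m, metre-level code noise maps to a fraction of a cycle, which is why rounding succeeds instantly where the narrower dual-frequency combinations need long averaging.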
Classification of large-scale fundus image data sets: a cloud-computing framework.
Roychowdhury, Sohini
2016-08-01
Large medical image data sets with high dimensionality require a substantial amount of computation time for data creation and data processing. This paper presents a novel generalized method that finds optimal image-based feature sets that reduce computational time complexity while maximizing overall classification accuracy for detection of diabetic retinopathy (DR). First, region-based and pixel-based features are extracted from fundus images for classification of DR lesions and vessel-like structures. Next, feature ranking strategies are used to distinguish the optimal classification feature sets. DR lesion and vessel classification accuracies are computed using the boosted decision tree and decision forest classifiers in the Microsoft Azure Machine Learning Studio platform, respectively. For images from the DIARETDB1 data set, 40 of its highest-ranked features are used to classify four DR lesion types with an average classification accuracy of 90.1% in 792 seconds. Also, for classification of red lesion regions and hemorrhages from microaneurysms, accuracies of 85% and 72% are observed, respectively. For images from the STARE data set, 40 high-ranked features can classify minor blood vessels with an accuracy of 83.5% in 326 seconds. Such cloud-based fundus image analysis systems can significantly enhance the borderline classification performances in automated screening systems.
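The feature-ranking step described above, scoring every feature, keeping the top-k, and training on the reduced set, can be sketched briefly. The score used here (absolute separation of class means) is a simple stand-in assumption; the paper's ranking strategies and Azure ML classifiers are not reproduced.

```python
# Rank features by class-mean separation and keep the k best, so the
# classifier trains on a reduced, cheaper feature set.

def rank_features(pos, neg):
    """pos/neg: lists of feature vectors; returns feature indices, best first."""
    scores = []
    for j in range(len(pos[0])):
        mean_pos = sum(v[j] for v in pos) / len(pos)
        mean_neg = sum(v[j] for v in neg) / len(neg)
        scores.append((abs(mean_pos - mean_neg), j))
    return [j for _, j in sorted(scores, reverse=True)]

def top_k(vectors, order, k):
    """Project feature vectors onto the k top-ranked features."""
    return [[v[j] for j in order[:k]] for v in vectors]

# Toy data: only the third feature separates lesion from non-lesion samples.
lesion, healthy = [[1, 0, 5], [1, 0, 6]], [[1, 0, 0], [1, 0, 1]]
order = rank_features(lesion, healthy)
reduced = top_k(lesion, order, 1)
```

Dropping uninformative features this way is what trades a small amount of accuracy for the large reduction in computation time reported above.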
Overconfidence across the psychosis continuum: a calibration approach.
Balzan, Ryan P; Woodward, Todd S; Delfabbro, Paul; Moritz, Steffen
2016-11-01
An 'overconfidence in errors' bias has been consistently observed in people with schizophrenia relative to healthy controls; however, the bias is seldom found to be associated with delusional ideation. Using a more precise confidence-accuracy calibration measure of overconfidence, the present study aimed to explore whether the overconfidence bias is greater in people with higher delusional ideation. A sample of 25 participants with schizophrenia and 50 non-clinical controls (25 high- and 25 low-delusion-prone) completed 30 difficult trivia questions (accuracy <75%); 15 'half-scale' items required participants to indicate their level of confidence in their accuracy, and the remaining 'confidence-range' items asked participants to provide lower and upper bounds within which they were 80% confident the true answer lay. There was a trend towards higher overconfidence for half-scale items in the schizophrenia and high-delusion-prone groups, which reached statistical significance for confidence-range items. However, accuracy was particularly low in the two delusional groups, and a significant negative correlation between clinical delusional scores and overconfidence was observed for half-scale items within the schizophrenia group. Evidence in support of an association between overconfidence and delusional ideation was therefore mixed. Inflated confidence-accuracy miscalibration in the two delusional groups may be better explained by greater unawareness of their underperformance than by genuinely inflated overconfidence in errors.
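The calibration measure of overconfidence described above reduces, in its simplest form, to mean confidence minus proportion correct (positive values indicate overconfidence). A minimal sketch with made-up item values:

```python
# Minimal sketch of a confidence-accuracy calibration measure of
# overconfidence: mean confidence minus proportion correct.
# Positive = overconfident. Item values here are illustrative only.
def overconfidence(confidences, correct):
    """confidences in [0, 1]; correct as booleans, one per item."""
    mean_conf = sum(confidences) / len(confidences)
    accuracy = sum(correct) / len(correct)
    return mean_conf - accuracy
```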
NASA Astrophysics Data System (ADS)
Lieu, Richard
2018-01-01
A hierarchy of statistics of increasing sophistication and accuracy is proposed, to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware, rather it operates at the software level, with the help of high precision computers, to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number the better the performance). The principal application is accuracy improvement in the bolometric flux measurement of a radio source.
Jiang, Qingan; Wu, Wenqi; Jiang, Mingming; Li, Yun
2017-01-01
High-accuracy railway track surveying is essential for railway construction and maintenance. The traditional approaches based on total station equipment are not efficient enough since high precision surveying frequently needs static measurements. This paper proposes a new filtering and smoothing algorithm based on the IMU/odometer and landmarks integration for the railway track surveying. In order to overcome the difficulty of estimating too many error parameters with too few landmark observations, a new model with completely observable error states is established by combining error terms of the system. Based on covariance analysis, the analytical relationship between the railway track surveying accuracy requirements and equivalent gyro drifts including bias instability and random walk noise are established. Experiment results show that the accuracy of the new filtering and smoothing algorithm for railway track surveying can reach 1 mm (1σ) when using a Ring Laser Gyroscope (RLG)-based Inertial Measurement Unit (IMU) with gyro bias instability of 0.03°/h and random walk noise of 0.005°/h while control points of the track control network (CPIII) position observations are provided by the optical total station in about every 60 m interval. The proposed approach can satisfy at the same time the demands of high accuracy and work efficiency for railway track surveying. PMID:28629191
NASA Astrophysics Data System (ADS)
Stewart, James M. P.; Ansell, Steve; Lindsay, Patricia E.; Jaffray, David A.
2015-12-01
Advances in precision microirradiators for small animal radiation oncology studies have provided the framework for novel translational radiobiological studies. Such systems target radiation fields at the scale required for small animal investigations, typically through a combination of on-board computed tomography image guidance and fixed, interchangeable collimators. Robust targeting accuracy of these radiation fields remains challenging, particularly at the millimetre scale field sizes achievable by the majority of microirradiators. Consistent and reproducible targeting accuracy is further hindered as collimators are removed and inserted during a typical experimental workflow. This investigation quantified this targeting uncertainty and developed an online method based on a virtual treatment isocenter to actively ensure high performance targeting accuracy for all radiation field sizes. The results indicated that the two-dimensional field placement uncertainty was as high as 1.16 mm at isocenter, with simulations suggesting this error could be reduced to 0.20 mm using the online correction method. End-to-end targeting analysis of a ball bearing target on radiochromic film sections showed an improved targeting accuracy with the three-dimensional vector targeting error across six different collimators reduced from 0.56+/- 0.05 mm (mean ± SD) to 0.05+/- 0.05 mm for an isotropic imaging voxel size of 0.1 mm.
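The three-dimensional vector targeting error quoted above is the Euclidean norm of the per-axis offsets between the intended and delivered field centres; a minimal helper for illustration:

```python
# The 3D vector targeting error is the Euclidean norm of the per-axis
# offsets (e.g. in mm) between intended and delivered field centres.
import math

def vector_targeting_error(dx, dy, dz):
    """Magnitude of the 3D offset between intended and delivered targets."""
    return math.sqrt(dx * dx + dy * dy + dz * dz)
```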
Machine learning of molecular properties: Locality and active learning
NASA Astrophysics Data System (ADS)
Gubaev, Konstantin; Podryabinkin, Evgeny V.; Shapeev, Alexander V.
2018-06-01
In recent years, machine learning techniques have shown great potential in various problems from a multitude of disciplines, including materials design and drug discovery. The high computational speed on the one hand and the accuracy comparable to that of density functional theory on the other hand make machine learning algorithms efficient for high-throughput screening through chemical and configurational space. However, the machine learning algorithms available in the literature require large training datasets to reach chemical accuracy and also show large errors for the so-called outliers, the out-of-sample molecules not well represented in the training set. In the present paper, we propose a new machine learning algorithm for predicting molecular properties that addresses these two issues: it is based on a local model of interatomic interactions providing high accuracy when trained on relatively small training sets, and an active learning algorithm for optimally choosing the training set that significantly reduces the errors for the outliers. We compare our model to the other state-of-the-art algorithms from the literature on widely used benchmark tests.
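To make the active-learning idea above concrete, the sketch below greedily picks, from a candidate pool, the molecule farthest (in descriptor space) from the current training set. This is a generic farthest-point novelty criterion used only as an illustration, not the authors' actual selection rule.

```python
# Hedged sketch of an active-learning selection step: pick the pool
# sample farthest from the training set in descriptor space. This is
# a generic novelty criterion, not the paper's selection algorithm.
import math

def select_most_novel(train_descriptors, pool_descriptors):
    """Return the index of the most novel pool sample."""
    def novelty(x):
        return min(math.dist(x, t) for t in train_descriptors)
    return max(range(len(pool_descriptors)),
               key=lambda i: novelty(pool_descriptors[i]))
```

Adding the selected sample to the training set and repeating is precisely what shrinks the errors on out-of-sample outliers.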
NASA Astrophysics Data System (ADS)
Huang, S. S.; Huang, C. F.; Huang, K. N.; Young, M. S.
2002-10-01
A highly accurate binary frequency shift-keyed (BFSK) ultrasonic distance measurement system (UDMS) for use in isothermal air is described. This article presents an efficient algorithm which combines the time-of-flight (TOF) method and the phase-shift method. The proposed method can obtain a larger measurement range than the phase-shift method and also achieves higher accuracy than the TOF method. A single-chip microcomputer-based BFSK signal generator and phase detector was designed to record and compute the TOF, two phase shifts, and the resulting distance, which were then sent either to an LCD for display or to a PC for calibration. Experiments were done in air using BFSK with frequencies of 40 and 41 kHz. Distance resolution of 0.05% of the wavelength corresponding to the 40 kHz frequency was obtained. The range accuracy was found to be within ±0.05 mm at ranges of over 6000 mm. The main advantages of this UDMS are high resolution, low cost, narrow bandwidth requirement, and ease of implementation.
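The hybrid idea above can be sketched as follows: the coarse TOF estimate resolves the integer number of carrier cycles, and the phase measurement supplies the precise fractional cycle. The 40 kHz carrier and 340 m/s sound speed are illustrative values, and this single-frequency sketch omits the second (41 kHz) carrier used to extend the unambiguous range.

```python
# Sketch of combining a coarse time-of-flight estimate with a fine
# phase-shift measurement. Single-frequency simplification of the
# hybrid method; carrier and sound speed are illustrative values.
import math

def hybrid_distance(tof_s, phase_rad, freq_hz=40e3, c_sound=340.0):
    lam = c_sound / freq_hz            # wavelength, m
    d_coarse = c_sound * tof_s         # TOF estimate (meter-level noise OK)
    frac = phase_rad / (2 * math.pi)   # fractional cycle from phase
    n = round(d_coarse / lam - frac)   # resolve the integer cycle count
    return (n + frac) * lam
```

As long as the TOF error stays below half a wavelength, the returned distance inherits the phase measurement's sub-millimetre precision.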
Handheld Reflective Foil Emissometer with 0.007 Absolute Accuracy at 0.05
NASA Astrophysics Data System (ADS)
van der Ham, E. W. M.; Ballico, M. J.
2014-07-01
The development and performance of a handheld emissometer for measuring the emissivity of the highly reflective metallic foils used to insulate domestic and commercial buildings are described. Reflective roofing insulation, based on a thin coating of metal on a more robust substrate, is widely used in hotter climates to reduce radiant heat transfer between the ceiling and roof of commercial and residential buildings. The required normal emissivity of these foils is generally below 0.05, so stray reflected ambient infrared radiation (IR) makes traditional reflectance-based measurements of emissivity very difficult to achieve with the required accuracy. Many manufacturers apply additional coatings to the metallic foil to reduce visible glare during installation on a roof and to protect the thin reflective layer; however, such a layer can also substantially increase the IR emissivity. The system developed at the National Measurement Institute, Australia (NMIA) is based on measuring the modulation in thermal infrared radiation as the sample is thermally modulated by hot and cold air streams. A commercial infrared band radiation thermometer with a highly specialized stray and reflected radiation shroud attachment is used as the detector system, allowing convenient handheld field measurements. The performance and accuracy of the system have been compared with NMIA's reference emissometer systems for a number of typical material samples, demonstrating its capability to measure the absolute thermal emissivity of these very highly reflective foils with an uncertainty of better than 0.007.
An onboard data analysis method to track the seasonal polar caps on Mars
NASA Technical Reports Server (NTRS)
Wagstaff, Kiri L.; Castano, Rebecca; Chien, Steve; Ivanov, Anton B.; Pounders, Erik; Titus, Timothy N.
2005-01-01
In this paper, we evaluate our method on uncalibrated THEMIS data and find 1) agreement with manual cap edge identifications to within 28.2 km, and 2) high accuracy even with a reduced context window, yielding large reductions in memory requirements.
NASA Astrophysics Data System (ADS)
Zhao, G.; Liu, J.; Chen, B.; Guo, R.; Chen, L.
2017-12-01
Forward modeling of gravitational fields at large scale requires considering the curvature of the Earth and evaluating Newton's volume integral in spherical coordinates. To obtain fast and accurate gravitational effects for subsurface structures, the subsurface mass distribution is usually discretized into small spherical prisms (called tesseroids). The gravity fields of tesseroids are generally calculated numerically. One of the commonly used numerical methods is 3D Gauss-Legendre quadrature (GLQ). However, traditional GLQ integration suffers from low computational efficiency and relatively poor accuracy when the observation surface is close to the source region. We developed a fast and high-accuracy 3D GLQ integration based on the equivalence of the kernel matrix, adaptive discretization, and parallelization using OpenMP. The kernel-matrix equivalence strategy increases efficiency and reduces memory consumption by calculating and storing identical matrix elements in each kernel matrix only once. The adaptive discretization strategy is used to improve accuracy. Numerical investigations show that the execution time of the proposed method is reduced by two orders of magnitude compared with the traditional method without these optimizations. High-accuracy results are also guaranteed no matter how close the computation points are to the source region. In addition, the algorithm reduces the memory requirement by a factor of N compared with the traditional method, where N is the number of discretizations of the source region in the longitudinal direction. This makes large-scale gravity forward modeling and inversion with fine discretization possible.
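For readers unfamiliar with GLQ, the one-dimensional building block of the 3D quadrature above is shown below with the standard 3-point Gauss-Legendre nodes and weights, which integrate polynomials up to degree 5 exactly; the 3D scheme applies this rule along each spherical coordinate.

```python
# Minimal 1D Gauss-Legendre quadrature (3 nodes), the building block
# the 3D GLQ applies along each coordinate of a tesseroid. Standard
# 3-point nodes/weights, exact for polynomials up to degree 5.
import math

GL3_NODES = (-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0))
GL3_WEIGHTS = (5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0)

def glq3(f, a, b):
    """Integrate f over [a, b] with a 3-point Gauss-Legendre rule."""
    half, mid = 0.5 * (b - a), 0.5 * (a + b)
    return half * sum(w * f(mid + half * x)
                      for x, w in zip(GL3_NODES, GL3_WEIGHTS))
```

The fixed-node rule loses accuracy when the integrand varies sharply, which is exactly why adaptive discretization is needed near the observation surface.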
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-02
...'s CMRS E911 location requirements without ensuring that time is taken to study location technologies... accuracy requirements on interconnected VoIP service without further study.'' A number of commenters... study the technical, operational and economic issues related to the provision of ALI for interconnected...
High accuracy transit photometry of the planet OGLE-TR-113b with a new deconvolution-based method
NASA Astrophysics Data System (ADS)
Gillon, M.; Pont, F.; Moutou, C.; Bouchy, F.; Courbin, F.; Sohy, S.; Magain, P.
2006-11-01
A high accuracy photometry algorithm is needed to take full advantage of the potential of the transit method for the characterization of exoplanets, especially in deep crowded fields. It must reduce the negative influence of systematic effects on photometric accuracy to the lowest possible level. It should also be able to cope with a high level of crowding and with large variations of the spatial resolution from one image to another. A recent deconvolution-based photometry algorithm fulfills all these requirements, and it also increases the resolution of astronomical images, an important advantage for the detection of blends and the discrimination of false positives in transit photometry. We made some changes to this algorithm to optimize it for transit photometry and used it to reduce NTT/SUSI2 observations of two transits of OGLE-TR-113b. This reduction has yielded two very high precision transit light curves with a low level of systematic residuals, which were used together with earlier photometric and spectroscopic measurements to derive new stellar and planetary parameters in excellent agreement with previous ones, but significantly more precise.
The role of feedback contingency in perceptual category learning.
Ashby, F Gregory; Vucovich, Lauren E
2016-11-01
Feedback is highly contingent on behavior if it eventually becomes easy to predict, and weakly contingent on behavior if it remains difficult or impossible to predict even after learning is complete. Many studies have demonstrated that humans and nonhuman animals are highly sensitive to feedback contingency, but no known studies have examined how feedback contingency affects category learning, and current theories assign little or no importance to this variable. Two experiments examined the effects of contingency degradation on rule-based and information-integration category learning. In rule-based tasks, optimal accuracy is possible with a simple explicit rule, whereas optimal accuracy in information-integration tasks requires integrating information from 2 or more incommensurable perceptual dimensions. In both experiments, participants each learned rule-based or information-integration categories under either high or low levels of feedback contingency. The exact same stimuli were used in all 4 conditions, and optimal accuracy was identical in every condition. Learning was good in both high-contingency conditions, but most participants showed little or no evidence of learning in either low-contingency condition. Possible causes of these effects, as well as their theoretical implications, are discussed.
Demitri, Nevine; Zoubir, Abdelhak M
2017-01-01
Glucometers are an important self-monitoring tool for diabetes patients and, therefore, must exhibit high accuracy as well as good usability. Building on an invasive photometric measurement principle that drastically reduces the volume of the blood sample needed from the patient, we present a framework that is capable of dealing with small blood samples while maintaining the required accuracy. The framework consists of two major parts: 1) image segmentation; and 2) convergence detection. Step 1 is based on iterative mode-seeking methods to estimate the intensity value of the region of interest. We present several variations of these methods and give theoretical proofs of their convergence. Our approach is able to deal with changes in the number and position of clusters without any prior knowledge. Furthermore, we propose a method based on sparse approximation to decrease the computational load while maintaining accuracy. Step 2 is achieved by employing temporal tracking and prediction, thereby decreasing the measurement time and thus improving usability. Our framework is tested on several real datasets with different characteristics. We show that we are able to estimate the underlying glucose concentration from much smaller blood samples than the current state of the art, with sufficient accuracy according to the most recent ISO standards, and to reduce measurement time significantly compared with state-of-the-art methods.
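A 1D mean-shift iteration illustrates the kind of mode seeking described in Step 1 above: starting from an initial intensity, the estimate repeatedly moves to the mean of nearby values until it settles on a mode. The flat (top-hat) kernel and bandwidth below are illustrative choices, not the paper's specific variant.

```python
# Hedged sketch of 1D mean-shift mode seeking on pixel intensities,
# illustrating the iterative mode-seeking idea. Flat kernel and
# bandwidth are illustrative; the paper's variants may differ.
def mean_shift_mode(values, start, bandwidth=10.0, tol=1e-3):
    """Iterate toward the nearest intensity mode from `start`."""
    m = float(start)
    while True:
        window = [v for v in values if abs(v - m) <= bandwidth]
        if not window:          # no data in reach: give up at current m
            return m
        new_m = sum(window) / len(window)
        if abs(new_m - m) < tol:
            return new_m
        m = new_m
```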
On the convergence and accuracy of the FDTD method for nanoplasmonics.
Lesina, Antonino Calà; Vaccari, Alessandro; Berini, Pierre; Ramunno, Lora
2015-04-20
Use of the Finite-Difference Time-Domain (FDTD) method to model nanoplasmonic structures continues to rise - more than 2700 papers have been published in 2014 on FDTD simulations of surface plasmons. However, a comprehensive study on the convergence and accuracy of the method for nanoplasmonic structures has yet to be reported. Although the method may be well-established in other areas of electromagnetics, the peculiarities of nanoplasmonic problems are such that a targeted study on convergence and accuracy is required. The availability of a high-performance computing system (a massively parallel IBM Blue Gene/Q) allows us to do this for the first time. We consider gold and silver at optical wavelengths along with three "standard" nanoplasmonic structures: a metal sphere, a metal dipole antenna and a metal bowtie antenna - for the first structure comparisons with the analytical extinction, scattering, and absorption coefficients based on Mie theory are possible. We consider different ways to set up the simulation domain, we vary the mesh size to very small dimensions, we compare the simple Drude model with the Drude model augmented with two critical points correction, we compare single-precision to double-precision arithmetic, and we compare two staircase meshing techniques, per-component and uniform. We find that the Drude model with two critical points correction (at least) must be used in general. Double-precision arithmetic is needed to avoid round-off errors if highly converged results are sought. Per-component meshing increases the accuracy when complex geometries are modeled, but the uniform mesh works better for structures completely fillable by the Yee cell (e.g., rectangular structures). Generally, a mesh size of 0.25 nm is required to achieve convergence of results to ∼ 1%.
We determine how to optimally set up the simulation domain, and in so doing we find that performing scattering calculations within the near-field does not necessarily produce large errors but does reduce the computational resources required.
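For reference, the simple Drude dispersion model compared above has permittivity ε(ω) = ε∞ − ωp²/(ω² + iγω). The sketch below evaluates it with illustrative (not fitted) parameters; real gold or silver would also need the critical-points correction terms the study recommends.

```python
# Sketch of the simple Drude permittivity used as the baseline model:
# eps(w) = eps_inf - wp^2 / (w^2 + i*gamma*w).
# Parameter values are illustrative, not fitted metal data.
def drude_eps(omega, eps_inf=1.0, omega_p=1.37e16, gamma=1.0e14):
    """Complex relative permittivity at angular frequency omega (rad/s)."""
    return eps_inf - omega_p**2 / (omega * (omega + 1j * gamma))
```

Below the plasma frequency the real part is negative (the metallic regime that supports surface plasmons), while the positive imaginary part represents loss.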
NASA Astrophysics Data System (ADS)
Goldberg, Kenneth A.; Naulleau, Patrick P.; Bokor, Jeffrey; Chapman, Henry N.
2002-07-01
As the quality of optical systems for extreme ultraviolet lithography improves, high-accuracy wavefront metrology for alignment and qualification becomes ever more important. To enable the development of diffraction-limited EUV projection optics, visible-light and EUV interferometry must work in close collaboration. We present a detailed comparison of EUV and visible-light wavefront measurements performed across the field of view of a lithographic-quality EUV projection optical system designed for use in the Engineering Test Stand developed by the Virtual National Laboratory and the EUV Limited Liability Company. The comparisons reveal that the present level of RMS agreement lies in the 0.3-0.4 nm range. Astigmatism is the most significant aberration component for the alignment of this optical system; it is also the dominant term in the discrepancy and the aberration with the highest measurement uncertainty. With EUV optical systems requiring total wavefront quality in the λEUV/50 range, and even higher surface-figure quality for the individual mirror elements, improved accuracy through future comparisons and additional studies is required.
Geoscience laser altimeter system-stellar reference system
NASA Astrophysics Data System (ADS)
Millar, Pamela S.; Sirota, J. Marcos
1998-01-01
GLAS is an EOS space-based laser altimeter being developed to profile the height of the Earth's ice sheets with ~15 cm single-shot accuracy from space under NASA's Mission to Planet Earth (MTPE). The primary science goal of GLAS is to determine whether the ice sheets are growing or diminishing, for climate change modeling. This is achieved by measuring ice sheet heights over Greenland and Antarctica to 1.5 cm/yr over 100 km×100 km areas by crossover analysis (Zwally 1994). This measurement performance requires the instrument to determine the pointing of the laser beam to ~5 µrad (1 arcsecond), 1-sigma, with respect to the inertial reference frame. The GLAS design incorporates a stellar reference system (SRS) to relate the laser beam pointing angle to the star field with this accuracy. This is the first time a spaceborne laser altimeter has measured pointing to such high accuracy. The design of the stellar reference system combines an attitude determination system (ADS) with a laser reference system (LRS) to meet this requirement. The SRS approach and expected performance are described in this paper.
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald
2016-01-01
The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.
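The accuracy metric defined above (absolute mean plus 3 sigma in quiet conditions, plus 2 sigma during storms) can be computed directly from a residual error series; a minimal sketch with made-up residuals:

```python
# Sketch of the accuracy metric described above: absolute mean of the
# residual field error plus 3 sigma (quiet) or 2 sigma (storm), in nT.
# Residual values passed in are illustrative, not GOES-R data.
from statistics import mean, pstdev

def magnetometer_accuracy(residuals_nT, storm=False):
    """Accuracy metric: |mean| + k*sigma, k = 2 (storm) or 3 (quiet)."""
    k = 2 if storm else 3
    return abs(mean(residuals_nT)) + k * pstdev(residuals_nT)
```

Comparing this number against the 1.7 nT requirement is the pass/fail test the covariance analysis and simulations are predicting.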
NASA Astrophysics Data System (ADS)
Boudria, Yacine; Feltane, Amal; Besio, Walter
2014-06-01
Objective. Brain-computer interfaces (BCIs) based on electroencephalography (EEG) have been shown to accurately detect mental activities, but the acquisition of high levels of control requires extensive user training. Furthermore, EEG has a low signal-to-noise ratio and low spatial resolution. The objective of the present study was to compare the accuracy between two types of BCIs during the first recording session. EEG and tripolar concentric ring electrode (TCRE) EEG (tEEG) brain signals were recorded and used to control one-dimensional cursor movements. Approach. Eight human subjects were asked to imagine either ‘left’ or ‘right’ hand movement during one recording session to control the computer cursor using TCRE and disc electrodes. Main results. The obtained results show a significant improvement in accuracies using TCREs (44%-100%) compared to disc electrodes (30%-86%). Significance. This study developed the first tEEG-based BCI system for real-time one-dimensional cursor movements and showed high accuracies with little training.
Terpitz, Ulrich; Zimmermann, Dirk
2010-01-01
The Eppendorf Piezo-Power Microdissection (PPMD) system uses a tungsten needle (MicroChisel) oscillating in a forward-backward (vertical) mode to cut cells from surrounding tissue. This technology competes with laser-based dissection systems, which offer high accuracy and precision but are more expensive and require fixed tissue. In contrast, PPMD systems can dissect freshly prepared tissue, but their accuracy and precision are lower due to unwanted lateral vibrations of the MicroChisel. Especially in tissues with high elasticity, these vibrations can limit the cutting resolution or hamper the dissection. Here we describe a cost-efficient and simple glass-capillary encapsulation modification of MicroChisels that effectively attenuates lateral vibrations. The use of modified MicroChisels enables accurate and precise tissue dissection from highly elastic material.
A Spectralon BRF Data Base for MISR Calibration Application
NASA Technical Reports Server (NTRS)
Bruegge, C.; Chrien, N.; Haner, D.
1999-01-01
The Multi-angle Imaging SpectroRadiometer (MISR) is an Earth observing sensor which will provide global retrievals of aerosols, clouds, and land surface parameters. Instrument specifications require high accuracy absolute calibration, as well as accurate camera-to-camera, band-to-band and pixel-to-pixel relative response determinations.
40 CFR 90.314 - Analyzer accuracy and specifications.
Code of Federal Regulations, 2014 CFR
2014-07-01
... engineering practice. Adhere to the minimum requirements given in §§ 90.316 through 90.325 and § 90.409. (c) Emission measurement accuracy—Bag sampling. (1) Good engineering practice dictates that exhaust emission...) Some high resolution read-out systems, such as computers, data loggers, and so forth, can provide...
40 CFR 90.314 - Analyzer accuracy and specifications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... engineering practice. Adhere to the minimum requirements given in §§ 90.316 through 90.325 and § 90.409. (c) Emission measurement accuracy—Bag sampling. (1) Good engineering practice dictates that exhaust emission...) Some high resolution read-out systems, such as computers, data loggers, and so forth, can provide...
40 CFR 90.314 - Analyzer accuracy and specifications.
Code of Federal Regulations, 2012 CFR
2012-07-01
... engineering practice. Adhere to the minimum requirements given in §§ 90.316 through 90.325 and § 90.409. (c) Emission measurement accuracy—Bag sampling. (1) Good engineering practice dictates that exhaust emission...) Some high resolution read-out systems, such as computers, data loggers, and so forth, can provide...
40 CFR 90.314 - Analyzer accuracy and specifications.
Code of Federal Regulations, 2013 CFR
2013-07-01
... engineering practice. Adhere to the minimum requirements given in §§ 90.316 through 90.325 and § 90.409. (c) Emission measurement accuracy—Bag sampling. (1) Good engineering practice dictates that exhaust emission...) Some high resolution read-out systems, such as computers, data loggers, and so forth, can provide...
ADAPTIVE-GRID SIMULATION OF GROUNDWATER FLOW IN HETEROGENEOUS AQUIFERS. (R825689C068)
The prediction of contaminant transport in porous media requires the computation of the flow velocity. This work presents a methodology for high-accuracy computation of flow in a heterogeneous isotropic formation, employing a dual-flow formulation and adaptive...
Soil Moisture Active Passive (SMAP) Calibration and validation plan and current activities
USDA-ARS?s Scientific Manuscript database
The primary objective of the SMAP calibration and validation (Cal/Val) program is demonstrating that the science requirements (product accuracy and bias) have been met over the mission life. This begins during pre-launch with activities that contribute to high quality products and establishing post-...
Surface refractivity measurements at NASA spacecraft tracking sites
NASA Technical Reports Server (NTRS)
Schmid, P. E.
1972-01-01
High-accuracy spacecraft tracking requires tropospheric modeling which is generally scaled by either estimated or measured values of surface refractivity. This report summarizes the results of a worldwide surface-refractivity test conducted in 1968 in support of the Apollo program. The results are directly applicable to all NASA radio-tracking systems.
NASA Technical Reports Server (NTRS)
Rapp, Richard H.
1993-01-01
The determination of the geoid, an equipotential surface of the Earth's gravity field, has long been of interest to geodesists and oceanographers. The geoid provides a surface to which the actual ocean surface can be compared, with the differences carrying information about the circulation patterns of the oceans. For oceanographic applications the geoid is ideally needed to high accuracy and high resolution. There are applications that require geoid undulation information to an accuracy of +/- 10 cm with a resolution of 50 km. We are far from this goal today, but substantial improvement in geoid determination has been made. In 1979 the cumulative geoid undulation error to spherical harmonic degree 20 was +/- 1.4 m for the GEM10 potential coefficient model. Today the corresponding value has been reduced to +/- 25 cm for GEM-T3 or +/- 11 cm for the OSU91A model. Similar improvements are noted by harmonic degree (wavelength) and in resolution. Potential coefficient models now exist to degree 360, based on a combination of data types. This paper discusses the accuracy changes that have taken place over the past 12 years in the determination of geoid undulations.
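The cumulative undulation errors quoted above are conventionally obtained as the square root of the summed error degree variances up to the chosen maximum degree; a minimal sketch with hypothetical per-degree values, not the actual GEM or OSU numbers:

```python
# Sketch of a cumulative geoid-undulation commission error: square
# root of summed error degree variances from degree 2 up to n_max.
# The per-degree values passed in are hypothetical, not model data.
import math

def cumulative_undulation_error(degree_variances_cm2, n_max):
    """degree_variances_cm2[n] is the error degree variance at degree n."""
    return math.sqrt(sum(degree_variances_cm2[2:n_max + 1]))
```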
Wang, Mi; Fan, Chengcheng; Yang, Bo; Jin, Shuying; Pan, Jun
2016-01-01
Satellite attitude accuracy is an important factor affecting the geometric processing accuracy of high-resolution optical satellite imagery. To address the problem that the accuracy of the Yaogan-24 remote sensing satellite's on-board attitude data processing is not high enough to meet its image geometry processing requirements, we developed an approach involving on-ground attitude data processing and verification against the digital orthophoto map (DOM) and digital elevation model (DEM) of a geometric calibration field. The approach comprises three modules: on-ground processing based on a bidirectional filter, overall weighted smoothing and fitting, and evaluation in the geometric calibration field. Our experimental results demonstrate that the proposed on-ground processing method is both robust and feasible, ensuring the reliability of the observation data quality and the convergence and stability of the parameter estimation model. In addition, both Euler angles and quaternions can be used to build the mathematical fitting model, while the orthogonal polynomial fitting model is more suitable for modeling the attitude parameters. Furthermore, compared with image geometric processing based on the on-board attitude data, the accuracy of uncontrolled and relative geometric positioning of the imagery can be improved by about 50%. PMID:27483287
Kinematic and kinetic analysis of overhand, sidearm and underhand lacrosse shot techniques.
Macaulay, Charles A J; Katz, Larry; Stergiou, Pro; Stefanyshyn, Darren; Tomaghelli, Luciano
2017-12-01
Lacrosse requires the coordinated performance of many complex skills. One of these skills is shooting on the opponents' net using one of three techniques: overhand, sidearm or underhand. The purpose of this study was to (i) determine which technique generated the highest ball velocity and greatest shot accuracy and (ii) identify kinematic and kinetic variables that contribute to a high velocity and high accuracy shot. Twelve elite male lacrosse players participated in this study. Kinematic data were sampled at 250 Hz, while two-dimensional force plates collected ground reaction force data (1000 Hz). Statistical analysis showed significantly greater ball velocity for the sidearm technique than overhand (P < 0.001) and underhand (P < 0.001) techniques. No statistical difference was found for shot accuracy (P > 0.05). Kinematic and kinetic variables were not significantly correlated to shot accuracy or velocity across all shot types; however, when analysed independently, the lead foot horizontal impulse showed a negative correlation with underhand ball velocity (P = 0.042). This study identifies the technique with the highest ball velocity, defines kinematic and kinetic predictors related to ball velocity and provides information to coaches and athletes concerned with improving lacrosse shot performance.
Analysis of Sources of Large Positioning Errors in Deterministic Fingerprinting
2017-01-01
Wi-Fi fingerprinting is widely used for indoor positioning and indoor navigation due to the ubiquity of wireless networks, high proliferation of Wi-Fi-enabled mobile devices, and its reasonable positioning accuracy. The assumption is that the position can be estimated based on the received signal strength intensity from multiple wireless access points at a given point. The positioning accuracy, within a few meters, enables the use of Wi-Fi fingerprinting in many different applications. However, it has been observed that the positioning error can be very large in some cases, which might prevent its use in applications with high accuracy positioning requirements. Hybrid methods are the new trend in indoor positioning since they benefit from multiple diverse technologies (Wi-Fi, Bluetooth, and Inertial Sensors, among many others) and, therefore, they can provide a more robust positioning accuracy. In order to have an optimal combination of technologies, it is crucial to identify when large errors occur and prevent the use of extremely bad positioning estimations in hybrid algorithms. This paper investigates why large positioning errors occur in Wi-Fi fingerprinting and how to detect them by using the received signal strength intensities. PMID:29186921
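The deterministic fingerprinting idea described in this abstract can be illustrated with a minimal sketch: a radio map of reference positions with measured signal strengths, and a k-nearest-neighbours match in signal space. The radio map and observation below are invented values for illustration, not data from the paper.

```python
import math

# Illustrative radio map: (x, y) position in metres and the RSS vector
# in dBm observed from three access points at that position.
radio_map = [
    ((0.0, 0.0), [-40, -70, -80]),
    ((5.0, 0.0), [-55, -60, -75]),
    ((0.0, 5.0), [-60, -75, -55]),
    ((5.0, 5.0), [-70, -65, -50]),
]

def estimate_position(observed_rss, k=2):
    """Average the k reference positions closest in signal space."""
    ranked = sorted(
        radio_map,
        key=lambda entry: math.dist(entry[1], observed_rss),
    )[:k]
    xs = [pos[0] for pos, _ in ranked]
    ys = [pos[1] for pos, _ in ranked]
    return (sum(xs) / k, sum(ys) / k)

print(estimate_position([-42, -69, -79], k=1))  # nearest fingerprint: (0.0, 0.0)
```

Large errors of the kind the paper analyses arise when an observed RSS vector is close (in signal space) to fingerprints recorded far apart in physical space, so the nearest match lands at the wrong location.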
A novel redundant INS based on triple rotary inertial measurement units
NASA Astrophysics Data System (ADS)
Chen, Gang; Li, Kui; Wang, Wei; Li, Peng
2016-10-01
Accuracy and reliability are two key performances of inertial navigation system (INS). Rotation modulation (RM) can attenuate the bias of inertial sensors and make it possible for INS to achieve higher navigation accuracy with lower-class sensors. Therefore, the conflict between the accuracy and cost of INS can be eased. Traditional system redundancy and recently researched sensor redundancy are two primary means to improve the reliability of INS. However, how to make the best use of the redundant information from redundant sensors has not been studied adequately, especially in rotational INS. This paper proposed a novel triple rotary unit strapdown inertial navigation system (TRUSINS), which combines RM and sensor redundancy design to enhance the accuracy and reliability of rotational INS. Each rotary unit independently rotates to modulate the errors of two gyros and two accelerometers. Three units can provide double sets of measurements along all three axes of the body frame to constitute a couple of INSs which make TRUSINS redundant. Experiments and simulations based on a prototype made up of six fiber-optic gyros with drift stability of 0.05° h-1 show that TRUSINS can achieve positioning accuracy of about 0.256 n mile h-1, which is ten times better than that of a normal non-rotational INS with same-level inertial sensors. The theoretical analysis and the experimental results show that due to the advantage of the innovative structure, the designed fault detection and isolation (FDI) strategy can tolerate six sensor faults at most, and is proved to be effective and practical. Therefore, TRUSINS is particularly suitable and highly beneficial for applications where high accuracy and high reliability are required.
A Novel Multi-Aperture Based Sun Sensor Based on a Fast Multi-Point MEANSHIFT (FMMS) Algorithm
You, Zheng; Sun, Jian; Xing, Fei; Zhang, Gao-Fei
2011-01-01
With the current widespread interest in the development and applications of micro/nanosatellites, a small, high accuracy satellite attitude determination system is needed, because the star trackers widely used in large satellites are large and heavy, and therefore not suitable for installation on micro/nanosatellites. A Sun sensor plus magnetometer has proven to be a better alternative, but the conventional sun sensor has low accuracy and cannot meet the requirements of micro/nanosatellite attitude determination systems, so the development of a small, high accuracy sun sensor with high reliability is very significant. This paper presents a multi-aperture based sun sensor, which is composed of a micro-electro-mechanical system (MEMS) mask with 36 apertures and an active pixel sensor (APS) CMOS placed below the mask at a certain distance. A novel fast multi-point MEANSHIFT (FMMS) algorithm is proposed to improve the accuracy and reliability, the two key performance features, of an APS sun sensor. When sunlight illuminates the sensor, a sun spot array image is formed on the APS detector. The sun angles can then be derived by analyzing the aperture image locations on the detector via the FMMS algorithm. With this system, the centroid accuracy of the sun image can reach 0.01 pixels, without increasing weight or power consumption, even when some missing apertures and bad pixels appear on the detector due to aging of the devices and operation in a harsh space environment, whereas the pointing accuracy of a single-aperture sun sensor using the conventional correlation algorithm is only 0.05 pixels. PMID:22163770
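The sub-pixel centroiding on which such sun sensors depend can be sketched with a simple intensity-weighted mean (centre of mass). This is a deliberate simplification of the FMMS algorithm named in the abstract, applied to a synthetic 5x5 spot image, not the paper's method or data.

```python
import numpy as np

def centroid(image):
    """Return the (row, col) centre of mass of a spot image."""
    img = np.asarray(image, dtype=float)
    total = img.sum()
    rows, cols = np.indices(img.shape)
    return (rows * img).sum() / total, (cols * img).sum() / total

# Synthetic spot: a peak at (2, 2) with a weaker neighbour at (2, 3);
# the asymmetry shifts the centroid slightly right of the peak pixel.
spot = np.zeros((5, 5))
spot[2, 2] = 4.0
spot[2, 3] = 2.0
r, c = centroid(spot)
print(r, c)  # 2.0, ~2.33: a sub-pixel estimate
```

A mean-shift style refinement, as in FMMS, iterates this kind of weighted mean inside a moving window around each spot, which is what makes the multi-aperture estimate robust to missing apertures and bad pixels.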
Tauscher, Sebastian; Fuchs, Alexander; Baier, Fabian; Kahrs, Lüder A; Ortmaier, Tobias
2017-10-01
Assistance of robotic systems in the operating room promises higher accuracy, and hence demanding surgical interventions become realisable (e.g. the direct cochlear access). Additionally, an intuitive user interface is crucial for the use of robots in surgery. Torque sensors in the joints can be employed for intuitive interaction concepts. Regarding accuracy, however, they lead to a lower structural stiffness and thus to an additional error source. The aim of this contribution is to examine whether such a system can achieve the accuracy needed for demanding interventions. The achievable accuracy of the robot-assisted process depends on each workflow step. This work focuses on the determination of the tool coordinate frame. A method for drill axis definition is implemented and analysed. Furthermore, a concept of admittance feed control is developed. This allows the user to control feeding along the planned path by applying a force to the robot's structure. The accuracy is researched by drilling experiments with a PMMA phantom and artificial bone blocks. The described drill axis estimation process results in a high angular repeatability ([Formula: see text]). In the first set of drilling results, an accuracy of [Formula: see text] at the entrance and [Formula: see text] at the target point, excluding imaging, was achieved. With admittance feed control an accuracy of [Formula: see text] at the target point was realised. In a third set, twelve holes were drilled in artificial temporal bone phantoms including imaging. In this set-up an error of [Formula: see text] and [Formula: see text] was achieved. The results of the conducted experiments show that accuracy requirements for demanding procedures such as the direct cochlear access can be fulfilled with compliant systems. Furthermore, it was shown that with the presented admittance feed control an accuracy of less than [Formula: see text] is achievable.
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-07
... FEDERAL COMMUNICATIONS COMMISSION 47 CFR Part 20 [PS Docket No. 07-114; WC Docket No. 05-196; FCC 10-177; DA 10-2267] Wireless E911 Location Accuracy Requirements; E911 Requirements for IP-Enabled Service Providers AGENCY: Federal Communications Commission. ACTION: Proposed rule; extension of comment...
BRDF-dependent accuracy of array-projection-based 3D sensors.
Heist, Stefan; Kühmstedt, Peter; Tünnermann, Andreas; Notni, Gunther
2017-03-10
In order to perform high-speed three-dimensional (3D) shape measurements with structured light systems, high-speed projectors are required. One possibility is an array projector, which allows pattern projection at several tens of kilohertz by switching on and off the LEDs of various slide projectors. The different projection centers require a separate analysis, as the intensity received by the cameras depends on the projection direction and the object's bidirectional reflectance distribution function (BRDF). In this contribution, we investigate the BRDF-dependent errors of array-projection-based 3D sensors and propose an error compensation process.
Medical accuracy in sexuality education: ideology and the scientific process.
Santelli, John S
2008-10-01
Recently, many states have implemented requirements for scientific or medical accuracy in sexuality education and HIV prevention programs. Although seemingly uncontroversial, these requirements respond to the increasing injection of ideology into sexuality education, as represented by abstinence-only programs. I describe the process by which health professionals and government advisory groups within the United States reach scientific consensus and review the legal requirements and definitions for medical accuracy. Key elements of this scientific process include the weight of scientific evidence, the importance of scientific theory, peer review, and recognition by mainstream scientific and health organizations. I propose a concise definition of medical accuracy that may be useful to policymakers, health educators, and other health practitioners.
Upper Atmosphere Research Satellite (UARS) onboard attitude determination using a Kalman filter
NASA Technical Reports Server (NTRS)
Garrick, Joseph
1993-01-01
The Upper Atmosphere Research Satellite (UARS) requires highly accurate knowledge of its attitude to accomplish its mission. Propagation of the attitude state using gyro measurements is not sufficient to meet the accuracy requirements and must be supplemented by an observer/compensation process to correct for dynamics and observation anomalies. The process of amending the attitude state utilizes a well known method, the discrete Kalman filter. This study is a sensitivity analysis of the discrete Kalman filter as implemented in the UARS Onboard Computer (OBC). The stability of the Kalman filter used in the normal on-orbit control mode within the OBC is investigated for the effects of corrupted observations and nonlinear errors. Also, a statistical analysis of the Kalman filter residuals is performed. These analyses are based on simulations using the UARS Dynamics Simulator (UARSDSIM) and compared against attitude requirements as defined by General Electric (GE). An independent verification of expected accuracies is performed using the Attitude Determination Error Analysis System (ADEAS).
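The gyro-propagation-plus-correction scheme described here can be sketched in one dimension: propagate an angle with a noisy gyro rate, then correct it with a noisy attitude observation through the discrete Kalman gain. All numbers below are illustrative; this is not the UARS filter itself.

```python
import random

random.seed(1)
dt, q, r = 1.0, 1e-4, 0.05 ** 2   # step, process noise, measurement noise
true_angle, rate = 0.0, 0.01      # rad, rad/s
x, p = 0.0, 1.0                   # state estimate and its variance

for _ in range(200):
    true_angle += rate * dt
    # propagate the estimate with the (noisy) gyro rate
    x += (rate + random.gauss(0, 1e-3)) * dt
    p += q
    # correct with a noisy attitude observation
    z = true_angle + random.gauss(0, 0.05)
    k = p / (p + r)               # Kalman gain
    x += k * (z - x)              # residual-weighted update
    p *= (1 - k)

print(abs(x - true_angle))  # small residual error after filtering
```

The residuals z - x examined here are exactly the quantities the abstract's statistical analysis studies: if the filter is healthy they should be zero-mean with variance near p + r.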
Pressure profiles of the BRing based on the simulation used in the CSRm
NASA Astrophysics Data System (ADS)
Wang, J. C.; Li, P.; Yang, J. C.; Yuan, Y. J.; Wu, B.; Chai, Z.; Luo, C.; Dong, Z. Q.; Zheng, W. H.; Zhao, H.; Ruan, S.; Wang, G.; Liu, J.; Chen, X.; Wang, K. D.; Qin, Z. M.; Yin, B.
2017-07-01
HIAF-BRing, a new multipurpose accelerator facility of the High Intensity heavy-ion Accelerator Facility project, requires an extremely high vacuum lower than 10-11 mbar to fulfill the requirements of radioactive beam physics and high energy density physics. To achieve the required process pressure, the bench-marked codes VAKTRAK and Molflow+ are used to simulate the pressure profiles of the BRing system. In order to ensure the accuracy of the VAKTRAK implementation, the computational results are verified against measured pressure data and compared with a new simulation code, BOLIDE, on the current synchrotron CSRm. With VAKTRAK thus verified, the pressure profiles of the BRing are calculated with different parameters such as conductance, out-gassing rates and pumping speeds. According to the computational results, the optimal parameters are selected to achieve the required pressure for the BRing.
Highly efficient, very low-thrust transfer to geosynchronous orbit - Exact and approximate solutions
NASA Astrophysics Data System (ADS)
Redding, D. C.
1984-04-01
An overview is provided of the preflight, postflight, and accuracy analysis of the Titan IIIC launch vehicle that injects payloads into geosynchronous orbits. The postflight trajectory reconstruction plays an important role in determining payload injection accuracy. Furthermore, the postflight analysis provides useful information about the characteristics of measuring instruments subjected to a flight environment. Suitable approaches for meeting mission specifications, trajectory requirements, and instrument constraints are considered, taking into account the importance of preflight trajectory analysis activities. Gimbal flip avoidance algorithms in the flight software and considerable beta gimbal analysis ensure a singularity-free trajectory.
The effect of low velocity impact in the strength characteristics of composite materials laminates
NASA Technical Reports Server (NTRS)
Liebowitz, H.
1983-01-01
The nonlinear vibration response of a double cantilevered beam subjected to pulse loading over a central sector is studied. The initial response is generated in detail to ascertain the energetics of the response. The total energy is used as a gauge of the stability and accuracy of the solution. It is shown that to obtain accurate and stable initial solutions an extremely high spatial and time resolution is required. This requirement was only evident through an examination of the energy of the system. It is proposed, therefore, to use the total energy of the system as a necessary stability and accuracy criterion for the nonlinear response of conservative systems. The results also demonstrate that even for moderate nonlinearities, the effects of membrane forces have a significant influence on the system.
World-wide precision airports for SVS
NASA Astrophysics Data System (ADS)
Schiefele, Jens; Lugsch, Bill; Launer, Marc; Baca, Diana
2004-08-01
Future cockpit and aviation applications require high quality airport databases. Accuracy, resolution, integrity, completeness, traceability, and timeliness [1] are key requirements. For most aviation applications, attributed vector databases are needed. The geometry is based on points, lines, and closed polygons. To document the needs of the aviation industry, RTCA and EUROCAE developed the DO-272/ED-99 document in a joint committee. It states industry needs for data features, attributes, coding, and capture rules for Airport Mapping Databases (AMDB). This paper describes the technical approach Jeppesen has taken to generate a world-wide set of three hundred AMDB airports. All AMDB airports are DO-200A/ED-76 [1] and DO-272/ED-99 [2] compliant. Jeppesen airports have a 5 m (CE90) accuracy and a 10-3 integrity. World-wide, all AMDB data is delivered in WGS84 coordinates. Jeppesen continually updates the databases.
NASA Astrophysics Data System (ADS)
Murata, C. H.; Fernandes, D. C.; Lavínia, N. C.; Caldas, L. V. E.; Pires, S. R.; Medeiros, R. B.
2014-02-01
The performance of radiological equipment can be assessed using non-invasive methods and portable instruments that can analyze an X-ray beam with just one exposure. These instruments use either an ionization chamber or a solid-state detector (SSD) to evaluate X-ray beam parameters. In Brazil, no such instruments are currently being manufactured; consequently, these instruments come at a higher cost to users due to importation taxes. Additionally, quality control tests are time consuming and impose a high workload on the X-ray tubes when evaluating their performance parameters. The assessment of some parameters, such as the half-value layer (HVL), requires several exposures; however, this can be reduced by using an SSD that requires only a single exposure. One such SSD uses photodiodes designed for high X-ray sensitivity without the use of scintillation crystals. This sensitivity allows one electron-hole pair to be created per 3.63 eV of incident energy, resulting in extremely high and stable quantum efficiencies. These silicon photodiodes operate by absorbing photons and generating a flow of current that is proportional to the incident power. The aim of this study was to show the response of the solid sensor PIN RD100A detector in a multifunctional X-ray analysis system that is designed to evaluate the average peak voltage (kVp), exposure time, and HVL of radiological equipment. For this purpose, a prototype board that uses four SSDs was developed to measure kVp, exposure time, and HVL using a single exposure. The reproducibility and accuracy of the results were compared to those of different X-ray beam analysis instruments. The kVp reproducibility and accuracy results were 2% and 3%, respectively; the exposure time reproducibility and accuracy results were 2% and 1%, respectively; and the HVL accuracy was ±2%. The prototype's methodology was able to calculate these parameters with appropriate reproducibility and accuracy.
Therefore, the prototype can be considered a multifunctional instrument that can appropriately evaluate the performance of radiological equipment.
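The conventional multi-exposure HVL determination that the prototype replaces can be sketched as follows: exposure readings at increasing filter thicknesses are interpolated (log-linearly, assuming near-exponential attenuation) to find the thickness that halves the unattenuated reading. The thickness and reading values are invented for illustration, not measurements from the study.

```python
import math

thickness_mm = [0.0, 1.0, 2.0, 3.0, 4.0]     # added Al filtration
reading = [100.0, 78.0, 62.0, 49.0, 40.0]    # relative exposure readings

def half_value_layer(t, m):
    """Thickness at which the reading falls to half its unfiltered value."""
    half = m[0] / 2.0
    for i in range(1, len(m)):
        if m[i] <= half:  # half-value bracketed between points i-1 and i
            # interpolate on ln(reading) versus thickness
            f = (math.log(m[i - 1]) - math.log(half)) / \
                (math.log(m[i - 1]) - math.log(m[i]))
            return t[i - 1] + f * (t[i] - t[i - 1])
    raise ValueError("readings never fall below half the initial value")

print(round(half_value_layer(thickness_mm, reading), 2))  # ~2.91 mm Al
```

The single-exposure SSD approach infers the same quantity from the detector response behind fixed built-in filtration, avoiding the repeated exposures this sketch requires.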
NASA Astrophysics Data System (ADS)
Eugster, H.; Huber, F.; Nebiker, S.; Gisi, A.
2012-07-01
Stereovision based mobile mapping systems enable the efficient capturing of directly georeferenced stereo pairs. With today's camera and onboard storage technologies, imagery can be captured at high data rates resulting in dense stereo sequences. These georeferenced stereo sequences provide a highly detailed and accurate digital representation of the roadside environment which builds the foundation for a wide range of 3d mapping applications and image-based geo web-services. Georeferenced stereo images are ideally suited for the 3d mapping of street furniture and visible infrastructure objects, pavement inspection, asset management tasks or image based change detection. As in most mobile mapping systems, the georeferencing of the mapping sensors and observations - in our case of the imaging sensors - normally relies on direct georeferencing based on INS/GNSS navigation sensors. However, in urban canyons the achievable direct georeferencing accuracy of the dynamically captured stereo image sequences is often insufficient or at least degraded. Furthermore, many of the mentioned application scenarios require homogeneous georeferencing accuracy within a local reference frame over the entire mapping perimeter. To meet these demands, georeferencing approaches are presented and cost-efficient workflows are discussed which allow validating and updating the INS/GNSS-based trajectory with independently estimated positions in cases of prolonged GNSS signal outages, in order to increase the georeferencing accuracy up to the project requirements.
Integrated calibration of multiview phase-measuring profilometry
NASA Astrophysics Data System (ADS)
Lee, Yeong Beum; Kim, Min H.
2017-11-01
Phase-measuring profilometry (PMP) measures per-pixel height information of a surface with high accuracy. Height information captured by a camera in PMP relies on its screen coordinates. Therefore, a PMP measurement from one view cannot be integrated directly with measurements from different views due to the intrinsic difference of the screen coordinates. In order to integrate multiple PMP scans, an auxiliary calibration of each camera's intrinsic and extrinsic properties is required, in addition to principal PMP calibration. This is cumbersome and often requires physical constraints in the system setup, and multiview PMP is consequently rarely practiced. In this work, we present a novel multiview PMP method that yields three-dimensional global coordinates directly so that three-dimensional measurements can be integrated easily. Our PMP calibration parameterizes intrinsic and extrinsic properties of the configuration of both a camera and a projector simultaneously. It also does not require any geometric constraints on the setup. In addition, we propose a novel calibration target that can remain static without requiring any mechanical operation while conducting multiview calibrations, whereas existing calibration methods require manually changing the target's position and orientation. Our results validate the accuracy of measurements and demonstrate the advantages of our multiview PMP.
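The per-pixel phase measurement underlying PMP is commonly done with four-step phase shifting: four intensity samples taken with pi/2 phase shifts recover the wrapped fringe phase. The single-pixel example below is a standard textbook sketch, not the specific calibration method of this paper.

```python
import math

def wrapped_phase(i1, i2, i3, i4):
    """Wrapped phase from intensities at shifts 0, pi/2, pi, 3*pi/2."""
    return math.atan2(i4 - i2, i1 - i3)

# Simulate one pixel with background a, fringe modulation b, true phase phi.
a, b, phi = 0.5, 0.4, 1.2
samples = [a + b * math.cos(phi + k * math.pi / 2) for k in range(4)]
print(round(wrapped_phase(*samples), 4))  # recovers phi = 1.2
```

Because i4 - i2 = 2b*sin(phi) and i1 - i3 = 2b*cos(phi), the background and modulation cancel, which is why the phase, and hence the height, can be measured per pixel independently of local reflectance.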
Scalable Photogrammetric Motion Capture System "mosca": Development and Application
NASA Astrophysics Data System (ADS)
Knyaz, V. A.
2015-05-01
A wide variety of applications (from industrial to entertainment) need reliable and accurate 3D information about the motion of an object and its parts. Very often the motion is rather fast, as in vehicle movement, sport biomechanics, or the animation of cartoon characters. Motion capture systems based on different physical principles are used for these purposes. Vision-based systems have great potential for high accuracy and a high degree of automation due to progress in image processing and analysis. A scalable, inexpensive motion capture system was developed as a convenient and flexible tool for solving various tasks requiring 3D motion analysis. It is based on photogrammetric techniques of 3D measurement and provides high speed image acquisition, high accuracy of 3D measurements and highly automated processing of captured data. Depending on the application, the system can easily be modified for different working areas from 100 mm to 10 m. The developed motion capture system uses from 2 to 4 technical vision cameras for acquiring video sequences of object motion. All cameras work in synchronization mode at frame rates up to 100 frames per second under the control of a personal computer, providing the possibility of accurate calculation of the 3D coordinates of points of interest. The system was used in a set of different application fields and demonstrated high accuracy and a high level of automation.
Block Adjustment and Image Matching of WORLDVIEW-3 Stereo Pairs and Accuracy Evaluation
NASA Astrophysics Data System (ADS)
Zuo, C.; Xiao, X.; Hou, Q.; Li, B.
2018-05-01
WorldView-3, a high-resolution commercial earth observation satellite launched by DigitalGlobe, provides panchromatic imagery of 0.31 m resolution. Its positioning accuracy is better than 3.5 m CE90 without ground control, which can be used for large scale topographic mapping. This paper presents block adjustment for WorldView-3 based on the RPC model and achieves the accuracy of 1 : 2000 scale topographic mapping with few control points. On the basis of the stereo orientation result, two image matching algorithms were applied for DSM extraction: LQM and SGM. Finally, the accuracy of the point clouds generated by the two image matching methods was compared with reference data acquired by an airborne laser scanner. The results showed that the RPC adjustment model for WorldView-3 imagery with a small number of GCPs can satisfy the requirement of the Chinese surveying and mapping regulations for 1 : 2000 scale topographic maps, and that the point cloud obtained through WorldView-3 stereo image matching has high elevation accuracy: the RMS elevation error for bare ground is 0.45 m, while for buildings the accuracy almost reaches 1 m.
NASA Astrophysics Data System (ADS)
Mohebbi, Akbar
2018-02-01
In this paper we propose two fast and accurate numerical methods for the solution of the multidimensional space-fractional Ginzburg-Landau equation (FGLE). In the presented methods, to avoid solving a nonlinear system of algebraic equations and to increase the accuracy and efficiency of the method, we split the complex problem into simpler sub-problems using the split-step idea. For the homogeneous FGLE, we propose a method which has fourth-order accuracy in the time component and spectral accuracy in the space variable, and for the nonhomogeneous one, we introduce another scheme based on the Crank-Nicolson approach which has second-order accuracy in the time variable. Due to the use of the Fourier spectral method for the fractional Laplacian operator, the resulting schemes are fully diagonal and easy to code. Numerical results are reported in terms of accuracy, computational order and CPU time to demonstrate the accuracy and efficiency of the proposed methods and to compare the results with the analytical solutions. The results show that the present methods are accurate and require low CPU time. It is illustrated that the numerical results are in good agreement with the theoretical ones.
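The reason the Fourier spectral treatment makes these schemes "fully diagonal" is that the fractional Laplacian acts as a simple multiplier |k|^alpha on each Fourier mode. A one-dimensional sketch of that operator (not the full FGLE solver from the paper) is:

```python
import numpy as np

def frac_laplacian(u, alpha, L):
    """Return -(-Laplacian)^(alpha/2) u on a periodic grid of length L,
    i.e. the diffusion term as it appears in the fractional GLE.
    Each Fourier mode u_hat(k) is just multiplied by -|k|^alpha."""
    n = len(u)
    k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)  # angular wavenumbers
    return np.fft.ifft(-(np.abs(k) ** alpha) * np.fft.fft(u)).real

# Sanity check: for alpha = 2 this reduces to the classical Laplacian,
# so applying it to sin(x) must give sin''(x) = -sin(x).
x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
u = np.sin(x)
print(np.allclose(frac_laplacian(u, 2.0, 2 * np.pi), -np.sin(x)))  # True
```

In a split-step scheme, this diagonal linear part is advanced exactly in Fourier space (multiplying each mode by an exponential factor), while the nonlinear part is advanced pointwise in physical space, which is what avoids the nonlinear algebraic system.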
Assessing and Ensuring GOES-R Magnetometer Accuracy
NASA Technical Reports Server (NTRS)
Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald
2016-01-01
The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
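The quiet-time "absolute mean plus 3 sigma" accuracy metric defined above can be computed directly from a residual series. The residual values below are invented for illustration and are not GOES-R data.

```python
import statistics

# Made-up magnetometer residuals (measured minus reference), in nT.
residuals_nt = [0.3, -0.2, 0.5, 0.1, -0.4, 0.2, 0.0, -0.1]

mean_abs = abs(statistics.mean(residuals_nt))   # absolute mean (bias)
sigma = statistics.pstdev(residuals_nt)         # spread of the residuals
accuracy = mean_abs + 3 * sigma                 # quiet-time metric

print(accuracy < 1.7)  # within the 1.7 nT requirement for this sample
```

The storm-time metric swaps the factor 3 for 2; separating the bias term from the sigma term mirrors the abstract's distinction between systematic errors (zero offset, misalignment) and random ones.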
Zhao, Yi-Jiao; Xiong, Yu-Xue; Wang, Yong
2017-01-01
In this study, the practical accuracy (PA) of optical facial scanners for facial deformity patients in the oral clinic was evaluated. Ten patients with a variety of facial deformities from the oral clinic were included in the study. For each patient, a three-dimensional (3D) face model was acquired via a high-accuracy industrial "line-laser" scanner (Faro) as the reference model, and two test models were obtained via a "stereophotography" (3dMD) and a "structured light" facial scanner (FaceScan) separately. Registration based on the iterative closest point (ICP) algorithm was executed to overlap the test models onto the reference models, and "3D error", a new measurement indicator calculated by reverse engineering software (Geomagic Studio), was used to evaluate the 3D global and partial (upper, middle, and lower parts of the face) PA of each facial scanner. The respective 3D accuracies of the stereophotography and structured light facial scanners for facial deformities were 0.58±0.11 mm and 0.57±0.07 mm. The 3D accuracy of different facial partitions was inconsistent; the middle face had the best performance. Although the PA of the two facial scanners was lower than their nominal accuracy (NA), both met the requirements for clinical use.
Comparing ordinary kriging and inverse distance weighting for soil As pollution in Beijing.
Qiao, Pengwei; Lei, Mei; Yang, Sucai; Yang, Jun; Guo, Guanghui; Zhou, Xiaoyong
2018-06-01
Spatial interpolation is the basis of soil heavy metal pollution assessment and remediation. Existing evaluation indices for interpolation accuracy are not tied to the actual situation; the selection of an interpolation method should be based on the specific research purpose and the characteristics of the research object. In this paper, As pollution in soils of Beijing was taken as an example. The prediction accuracies of ordinary kriging (OK) and inverse distance weighting (IDW) were evaluated based on cross validation results and the spatial distribution characteristics of influencing factors. The results showed that, under the condition of specific spatial correlation, the cross validation results of OK and IDW for every soil point and the prediction accuracy of the spatial distribution trend are similar. But the prediction accuracy of OK for the maximum and minimum is less than that of IDW, and the number of high pollution areas identified by OK is smaller than that identified by IDW. It is difficult for OK to identify the high pollution areas fully, which shows that its smoothing effect is pronounced. In addition, as the spatial correlation of the As concentration increases, the cross validation errors of OK and IDW decrease, and the high pollution area identified by OK approaches the result of IDW, identifying the high pollution areas more comprehensively. However, because semivariogram construction in OK is more subjective and requires a larger number of soil samples, IDW is more suitable for spatial prediction of heavy metal pollution in soils.
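The IDW predictor compared above is simple enough to state in a few lines: each prediction is a weighted average of the sample values, with weights falling off as an inverse power of distance. The sample coordinates and As concentrations below are invented for illustration, not the Beijing data.

```python
import math

samples = [  # (x, y) in km, As concentration in mg/kg (made-up values)
    ((0.0, 0.0), 8.0),
    ((1.0, 0.0), 12.0),
    ((0.0, 1.0), 9.0),
    ((1.0, 1.0), 30.0),  # local hotspot
]

def idw(x, y, power=2):
    """Inverse-distance-weighted prediction at (x, y)."""
    num = den = 0.0
    for (sx, sy), value in samples:
        d = math.hypot(x - sx, y - sy)
        if d == 0:
            return value          # exact hit on a sample point
        w = 1.0 / d ** power
        num += w * value
        den += w
    return num / den

print(idw(0.0, 0.0))             # 8.0: honours the sample exactly
print(round(idw(0.9, 0.9), 1))   # pulled toward the 30 mg/kg hotspot
```

Because IDW honours extreme sample values like the hotspot above, it preserves maxima and minima that kriging's weighted averaging tends to smooth away, which is the behaviour the comparison in this paper highlights.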
Komori, Mie
2016-01-01
Monitoring is an executive function of working memory that serves to update novel information, focusing attention on task-relevant targets, and eliminating task-irrelevant noise. The present research used a verbal working memory task to examine how working memory capacity limits affect monitoring. Participants performed a Japanese listening span test that included maintenance of target words and listening comprehension. On each trial, participants responded to the target word and then immediately estimated confidence in recall performance for that word (metacognitive judgment). The results confirmed significant differences in monitoring accuracy between high and low capacity groups in a multi-task situation. That is, confidence judgments were superior in high vs. low capacity participants in terms of absolute accuracy and discrimination. The present research further investigated how memory load and interference affect underestimation of successful recall. The results indicated that the level of memory load that reduced word recall performance and led to an underconfidence bias varied according to participants' memory capacity. In addition, irrelevant information associated with incorrect true/false decisions (secondary task) and word recall within the current trial impaired monitoring accuracy in both participant groups. These findings suggest that interference from unsuccessful decisions only influences low, but not high, capacity participants. Therefore, monitoring accuracy, which requires high working memory capacity, improves metacognitive abilities by inhibiting task-irrelevant noise and focusing attention on detecting task-relevant targets or useful retrieval cues, which could improve actual cognitive performance.
NASA Astrophysics Data System (ADS)
Kankare, Ville; Vauhkonen, Jari; Tanhuanpää, Topi; Holopainen, Markus; Vastaranta, Mikko; Joensuu, Marianna; Krooks, Anssi; Hyyppä, Juha; Hyyppä, Hannu; Alho, Petteri; Viitala, Risto
2014-11-01
Detailed information about timber assortments and diameter distributions is required in forest management. Forest owners can make better decisions concerning the timing of timber sales and forest companies can utilize more detailed information to optimize their wood supply chain from forest to factory. The objective here was to compare the accuracies of high-density laser scanning techniques for the estimation of tree-level diameter distribution and timber assortments. We also introduce a method that utilizes a combination of airborne and terrestrial laser scanning in timber assortment estimation. The study was conducted in Evo, Finland. Harvester measurements were used as a reference for 144 trees within a single clear-cut stand. The results showed that accurate tree-level timber assortments and diameter distributions can be obtained, using terrestrial laser scanning (TLS) or a combination of TLS and airborne laser scanning (ALS). Saw log volumes were estimated with higher accuracy than pulpwood volumes. The saw log volumes were estimated with relative root-mean-squared errors of 17.5% and 16.8% with TLS and a combination of TLS and ALS, respectively. The respective accuracies for pulpwood were 60.1% and 59.3%. The differences in the bucking method used also caused some large errors. In addition, tree quality factors highly affected the bucking accuracy, especially with pulpwood volume.
Shimamune, Satoru; Jitsumori, Masako
1999-01-01
In a computer-assisted sentence completion task, the effects of grammar instruction and fluency training on learning the use of the definite and indefinite articles of English were examined. Forty-eight native Japanese-speaking students were assigned to four groups: with grammar/accuracy (G/A), without grammar/accuracy (N/A), with grammar/fluency (G/F), and without grammar/fluency (N/F). In the G/A and N/A groups, training continued until performance reached 100% accuracy (accuracy criterion). In the G/F and N/F groups, training continued until 100% accuracy was reached and the correct responses were made at a high speed (fluency criterion). Grammar instruction was given to participants in the G/A and G/F groups but not to those in the N/A and N/F groups. Generalization to new sentences was tested immediately after reaching the required criterion. High levels of generalization occurred, regardless of the type of mastery criterion and whether the grammar instruction was given. Retention tests were conducted 4, 6, and 8 weeks after training. Fluency training effectively improved retention of the performance attained without the grammar instruction. This effect was diminished when grammar instruction was given during training. Learning grammatical rules was not necessary for the generalized use of appropriate definite and indefinite articles or for the maintenance of the performance attained through fluency training. PMID:22477154
NASA Astrophysics Data System (ADS)
Green, K. N.; van Alstine, R. L.
This paper presents the current performance levels of the SDG-5 gyro, a high performance two-axis dynamically tuned gyro, and the DRIRU II redundant inertial reference unit relating to stabilization and pointing applications. Also presented is a discussion of a product improvement program aimed at further noise reductions to meet the demanding requirements of future space defense applications.
Development of advanced high-temperature heat flux sensors
NASA Technical Reports Server (NTRS)
Atkinson, W. H.; Strange, R. R.
1982-01-01
Various configurations of high-temperature heat flux sensors were studied to determine their suitability for use in experimental combustor liners of advanced aircraft gas turbine engines. It was determined that embedded thermocouple sensors, laminated sensors, and Gardon gauge sensors were the most viable candidates. Sensors of all three types were fabricated, calibrated, and endurance tested. All three types of sensors met the fabricability, survivability, and accuracy requirements established for their application.
2017-01-09
2017 Distribution A – Approved for public release; Distribution Unlimited. PA Clearance 17030
Introduction
• Filtering schemes offer a less ... dissipative alternative to the standard artificial dissipation operators when applied to high-order spatial/temporal schemes
• Limiting Fact: Filters impart ... systems require a preconditioned dual-time framework to be solved efficiently
• Limiting Fact: Filtering cannot be applied only at the physical-time
High-accuracy calculations of the rotation-vibration spectrum of H3+
NASA Astrophysics Data System (ADS)
Tennyson, Jonathan; Polyansky, Oleg L.; Zobov, Nikolai F.; Alijah, Alexander; Császár, Attila G.
2017-12-01
Calculation of the rotation-vibration spectrum of H3+, as well as of its deuterated isotopologues, with near-spectroscopic accuracy requires the development of sophisticated theoretical models, methods, and codes. The present paper reviews the state-of-the-art in these fields. Computation of rovibrational states on a given potential energy surface (PES) has now become standard for triatomic molecules, at least up to intermediate energies, due to developments achieved by the present authors and others. However, highly accurate Born-Oppenheimer energies leading to highly accurate PESs are not accessible even for this two-electron system using conventional electronic structure procedures (e.g. configuration-interaction or coupled-cluster techniques with extrapolation to the complete (atom-centered Gaussian) basis set limit). For this purpose, highly specialized techniques must be used, e.g. those employing explicitly correlated Gaussians and nonlinear parameter optimizations. It has also become evident that a very dense grid of ab initio points is required to obtain reliable representations of the computed points extending from the minimum to the asymptotic limits. Furthermore, adiabatic, relativistic, and quantum electrodynamic correction terms need to be considered to achieve near-spectroscopic accuracy during calculation of the rotation-vibration spectrum of H3+. The remaining and most intractable problem is then the treatment of the effects of non-adiabatic coupling on the rovibrational energies, which, in the worst cases, may lead to corrections on the order of several cm-1. A promising way of handling this difficulty is the further development of effective, motion- or even coordinate-dependent, masses and mass surfaces. Finally, the unresolved challenge of how to describe and elucidate the experimental pre-dissociation spectra of H3+ and its isotopologues is discussed.
AVHRR channel selection for land cover classification
Maxwell, S.K.; Hoffer, R.M.; Chapman, P.L.
2002-01-01
Mapping land cover of large regions often requires processing of satellite images collected from several time periods at many spectral wavelength channels. However, manipulating and processing large amounts of image data increases the complexity and time, and hence the cost, that it takes to produce a land cover map. Very few studies have evaluated the importance of individual Advanced Very High Resolution Radiometer (AVHRR) channels for discriminating cover types, especially the thermal channels (channels 3, 4 and 5). Studies rarely perform a multi-year analysis to determine the impact of inter-annual variability on the classification results. We evaluated 5 years of AVHRR data using combinations of the original AVHRR spectral channels (1-5) to determine which channels are most important for cover type discrimination, yet stabilize inter-annual variability. Particular attention was placed on the channels in the thermal portion of the spectrum. Fourteen cover types over the entire state of Colorado were evaluated using a supervised classification approach on all two-, three-, four- and five-channel combinations for seven AVHRR biweekly composite datasets covering the entire growing season for each of 5 years. Results show that all three of the major portions of the electromagnetic spectrum represented by the AVHRR sensor are required to discriminate cover types effectively and stabilize inter-annual variability. Of the two-channel combinations, channels 1 (red visible) and 2 (near-infrared) had, by far, the highest average overall accuracy (72.2%), yet the inter-annual classification accuracies were highly variable. Including a thermal channel (channel 4) significantly increased the average overall classification accuracy by 5.5% and stabilized inter-annual variability. Each of the thermal channels gave similar classification accuracies; however, because of the problems in consistently interpreting channel 3 data, either channel 4 or 5 was found to be a more appropriate choice. Substituting the thermal channel with a single elevation layer resulted in equivalent classification accuracies and inter-annual variability.
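As a rough illustration of the study's exhaustive channel-subset search, the sketch below scores every two- to five-channel combination. The data and the nearest-centroid classifier are hypothetical stand-ins for the study's AVHRR composites and supervised classifier; only the search structure mirrors the design.

```python
# Sketch: exhaustive evaluation of AVHRR channel subsets.
# Assumptions: toy synthetic data; nearest-centroid classifier stands in
# for the paper's supervised classification approach.
from itertools import combinations
import random

random.seed(0)
CHANNELS = [1, 2, 3, 4, 5]            # AVHRR channels: red, NIR, 3 thermal
N_CLASSES, N_SAMPLES = 3, 60

# Hypothetical training data: per-class mean spectra plus Gaussian noise.
means = {c: [random.uniform(0, 1) for _ in CHANNELS] for c in range(N_CLASSES)}
data = [(c, [m + random.gauss(0, 0.15) for m in means[c]])
        for c in range(N_CLASSES) for _ in range(N_SAMPLES)]

def accuracy(subset):
    """Resubstitution accuracy using only the chosen channels."""
    idx = [CHANNELS.index(ch) for ch in subset]
    cents = {c: [sum(x[i] for lab, x in data if lab == c) / N_SAMPLES
                 for i in idx] for c in range(N_CLASSES)}
    hits = 0
    for lab, x in data:
        pred = min(cents, key=lambda c: sum((x[i] - cents[c][j]) ** 2
                                            for j, i in enumerate(idx)))
        hits += (pred == lab)
    return hits / len(data)

# Score every 2- to 5-channel combination, as in the study design.
scores = {s: accuracy(s) for k in range(2, 6)
          for s in combinations(CHANNELS, k)}
best = max(scores, key=scores.get)
print(best, round(scores[best], 3))
```

With five channels there are 26 such subsets; the study additionally averaged the scores over biweekly composites and years.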
NASA Astrophysics Data System (ADS)
Cavigelli, Lukas; Bernath, Dominic; Magno, Michele; Benini, Luca
2016-10-01
Detecting and classifying targets in video streams from surveillance cameras is a cumbersome, error-prone and expensive task. Often, the incurred costs are prohibitive for real-time monitoring. This leads to data being stored locally or transmitted to a central storage site for post-incident examination. The required communication links and archiving of the video data are still expensive, and this setup excludes preemptive actions to respond to imminent threats. An effective way to overcome these limitations is to build a smart camera that analyzes the data on-site, close to the sensor, and transmits alerts when relevant video sequences are detected. Deep neural networks (DNNs) have come to outperform humans in visual classification tasks and are also performing exceptionally well on other computer vision tasks. The concept of DNNs and Convolutional Networks (ConvNets) can easily be extended to make use of higher-dimensional input data such as multispectral data. We explore this opportunity in terms of achievable accuracy and required computational effort. To analyze the precision of DNNs for scene labeling in an urban surveillance scenario, we have created a dataset with 8 classes obtained in a field experiment. We combine an RGB camera with a 25-channel VIS-NIR snapshot sensor to assess the potential of multispectral image data for target classification. We evaluate several new DNNs, showing that the spectral information fused together with the RGB frames can be used to improve the accuracy of the system or to achieve similar accuracy with a 3x smaller computational effort. We achieve a very high per-pixel accuracy of 99.1%. Even for scarcely occurring, but particularly interesting, classes such as cars, 75% of the pixels are labeled correctly, with errors occurring only around the borders of the objects. This high accuracy was obtained with a training set of only 30 labeled images, paving the way for fast adaptation to various application scenarios.
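Per-pixel and per-class accuracies of the kind reported above can be computed as in this minimal sketch (toy label maps, not the paper's evaluation code; `per_class_accuracy` is an illustrative helper name):

```python
# Sketch: overall per-pixel accuracy and per-class accuracy for scene
# labeling. Assumption: labels are flattened into 1-D sequences.
def per_class_accuracy(truth, pred, n_classes):
    total = {c: 0 for c in range(n_classes)}
    correct = {c: 0 for c in range(n_classes)}
    for t, p in zip(truth, pred):
        total[t] += 1
        correct[t] += (t == p)
    overall = sum(correct.values()) / len(truth)
    per_class = {c: correct[c] / total[c] for c in range(n_classes) if total[c]}
    return overall, per_class

# Hypothetical 8-pixel ground truth and prediction over 3 classes.
truth = [0, 0, 1, 1, 2, 2, 2, 1]
pred  = [0, 0, 1, 2, 2, 2, 2, 1]
overall, per_class = per_class_accuracy(truth, pred, 3)
print(overall, per_class)
```

A rare class (like the cars in the study) can score well below the overall figure, which is why the paper reports both.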
Accuracy analysis for triangulation and tracking based on time-multiplexed structured light.
Wagner, Benjamin; Stüber, Patrick; Wissel, Tobias; Bruder, Ralf; Schweikard, Achim; Ernst, Floris
2014-08-01
The authors' research group is currently developing a new optical head tracking system for intracranial radiosurgery. This tracking system utilizes infrared laser light to measure features of the soft tissue on the patient's forehead. These features are intended to offer highly accurate registration with respect to the rigid skull structure by means of compensating for the soft tissue. In this context, the system also has to be able to quickly generate accurate reconstructions of the skin surface. For this purpose, the authors have developed a laser scanning device which uses time-multiplexed structured light to triangulate surface points. The accuracy of the authors' laser scanning device is analyzed and compared for different triangulation methods, given by the Linear-Eigen method and a nonlinear least squares method. Since Microsoft's Kinect camera represents an alternative for fast surface reconstruction, the authors' results are also compared to the triangulation accuracy of the Kinect device. Moreover, the authors' laser scanning device was used for tracking of a rigid object to determine how this process is influenced by the remaining triangulation errors. For this experiment, the scanning device was mounted to the end-effector of a robot to be able to calculate a ground truth for the tracking. The analysis of the triangulation accuracy of the authors' laser scanning device revealed a root-mean-square (RMS) error of 0.16 mm. In comparison, the analysis of the triangulation accuracy of the Kinect device revealed an RMS error of 0.89 mm. It turned out that the remaining triangulation errors cause only small inaccuracies in the tracking of a rigid object: the tracking accuracy was given by an RMS translational error of 0.33 mm and an RMS rotational error of 0.12°. This paper shows that time-multiplexed structured light can be used to generate highly accurate reconstructions of surfaces. Furthermore, the reconstructed point sets can be used for high-accuracy tracking of objects, meeting the strict requirements of intracranial radiosurgery.
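The least-squares idea behind such triangulation can be sketched with the classical midpoint method, which finds the point minimizing the summed squared distance to a set of rays (an illustrative stand-in, not the authors' Linear-Eigen or nonlinear implementations):

```python
# Sketch: least-squares triangulation of a 3D point from several rays
# (midpoint method). Assumption: rays given as origin + direction.
import numpy as np

def triangulate(origins, directions):
    """Solve for the point minimizing summed squared distance to all rays."""
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for o, d in zip(origins, directions):
        d = d / np.linalg.norm(d)
        P = np.eye(3) - np.outer(d, d)   # projector orthogonal to the ray
        A += P
        b += P @ o
    return np.linalg.solve(A, b)

# Two rays that intersect exactly at (1, 2, 3): recover the point.
origins = [np.array([0.0, 2.0, 3.0]), np.array([1.0, 0.0, 3.0])]
directions = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])]
p = triangulate(origins, directions)
print(np.round(p, 6))  # → [1. 2. 3.]
```

With noisy, nearly parallel rays the normal matrix becomes ill-conditioned, which is one motivation for the nonlinear refinement the paper also evaluates.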
High Order Schemes in Bats-R-US for Faster and More Accurate Predictions
NASA Astrophysics Data System (ADS)
Chen, Y.; Toth, G.; Gombosi, T. I.
2014-12-01
BATS-R-US is a widely used global magnetohydrodynamics model that originally employed second-order accurate TVD schemes combined with block-based Adaptive Mesh Refinement (AMR) to achieve high resolution in the regions of interest. In recent years we have implemented fifth-order accurate finite difference schemes, CWENO5 and MP5, for uniform Cartesian grids. Now the high-order schemes have been extended to generalized coordinates, including spherical grids, and also to the non-uniform AMR grids, including dynamic regridding. We present numerical tests that verify the preservation of the free-stream solution and high-order accuracy, as well as robust oscillation-free behavior near discontinuities. We apply the new high-order accurate schemes to both heliospheric and magnetospheric simulations and show that they are robust and can achieve the same accuracy as the second-order scheme with far fewer computational resources. This is especially important for space weather prediction, which requires faster than real time code execution.
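The kind of order-verification test mentioned above can be sketched with a classical fourth-order central stencil; the CWENO5/MP5 reconstructions themselves are not reproduced here, but the principle of measuring the convergence rate under grid refinement is the same:

```python
# Sketch: verifying a finite-difference scheme's order of accuracy by grid
# refinement. Assumption: a textbook fourth-order central stencil stands in
# for the model's fifth-order reconstructions.
import math

def d1_fourth_order(f, x, h):
    """Fourth-order central approximation of f'(x)."""
    return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

f, df = math.sin, math.cos
x = 1.0
errs = [abs(d1_fourth_order(f, x, h) - df(x)) for h in (0.1, 0.05, 0.025)]
orders = [math.log(errs[i] / errs[i + 1], 2) for i in range(2)]
print([round(o, 2) for o in orders])  # ≈ [4.0, 4.0] for a 4th-order scheme
```

Halving h should divide the error by about 2^p for a p-th order scheme, which is how the paper's free-stream and smooth-flow tests confirm the designed accuracy.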
Tagliafico, Alberto Stefano; Bignotti, Bianca; Rossi, Federica; Signori, Alessio; Sormani, Maria Pia; Valdora, Francesca; Calabrese, Massimo; Houssami, Nehmat
2016-08-01
To estimate sensitivity and specificity of CESM for breast cancer diagnosis. Systematic review and meta-analysis of the accuracy of CESM in finding breast cancer in highly selected women. We estimated summary receiver operating characteristic curves, sensitivity and specificity according to quality criteria with QUADAS-2. Six hundred four studies were retrieved; 8 of these, reporting on 920 patients with 994 lesions, were eligible for inclusion. Estimated sensitivity from all studies was 0.98 (95% CI: 0.96-1.00). Specificity was estimated from six studies reporting raw data: 0.58 (95% CI: 0.38-0.77). The majority of studies were scored as at high risk of bias due to the very selected populations. CESM has a high sensitivity but very low specificity. The source studies were based on highly selected case series and prone to selection bias. High-quality studies are required to assess the accuracy of CESM in unselected cases. Copyright © 2016 Elsevier Ltd. All rights reserved.
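For readers unfamiliar with the accuracy measures, here is a minimal sketch of sensitivity and specificity with normal-approximation 95% confidence intervals; the 2x2 counts are hypothetical, not the source studies' data:

```python
# Sketch: sensitivity and specificity with normal-approximation 95% CIs
# from a 2x2 table. Assumption: hypothetical counts for illustration only.
import math

def proportion_ci(k, n, z=1.96):
    p = k / n
    half = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half), min(1.0, p + half)

tp, fn, tn, fp = 98, 2, 58, 42           # hypothetical case-series counts
sens = proportion_ci(tp, tp + fn)        # sensitivity = TP / (TP + FN)
spec = proportion_ci(tn, tn + fp)        # specificity = TN / (TN + FP)
print([round(v, 2) for v in sens], [round(v, 2) for v in spec])
```

Meta-analyses such as this one pool these per-study proportions with bivariate or summary-ROC models rather than the simple normal approximation shown here.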
NASA Technical Reports Server (NTRS)
Luthcke, Scott; Rowlands, David; Lemoine, Frank; Zelensky, Nikita; Beckley, Brian; Klosko, Steve; Chinn, Doug
2006-01-01
Although satellite altimetry has been around for thirty years, the last fifteen, beginning with the launch of TOPEX/Poseidon (TP), have yielded an abundance of significant results, including: monitoring of ENSO events, detection of internal tides, determination of accurate global tides, unambiguous delineation of Rossby waves and their propagation characteristics, accurate determination of geostrophic currents, and a multi-decadal time series of mean sea level trend and dynamic ocean topography variability. While the high level of accuracy being achieved is a result of both instrument maturity and the quality of models and correction algorithms applied to the data, improving the quality of the Climate Data Records produced from altimetry is highly dependent on concurrent progress being made in fields such as orbit determination. The precision orbits form the reference frame from which the radar altimeter observations are made. Therefore, the accuracy of the altimetric mapping is limited to a great extent by the accuracy to which a satellite orbit can be computed. The TP mission represents the first time that the radial component of an altimeter orbit was routinely computed with an accuracy of 2 cm. Recently it has been demonstrated that it is possible to compute the radial component of Jason orbits with an accuracy of better than 1 cm. Additionally, still further improvements in TP orbits are being achieved with new techniques and algorithms largely developed from combined Jason and TP data analysis. While these recent POD achievements are impressive, the new accuracies are now revealing subtle systematic orbit errors that manifest as both intra- and inter-annual ocean topography errors. Additionally, the construction of inter-decadal time series of climate data records requires the removal of systematic differences across multiple missions.
Current and future efforts must focus on the understanding and reduction of these errors in order to generate a complete and consistent time series of improved orbits across multiple missions and decades required for the most stringent climate-related research. This presentation discusses the POD progress and achievements made over nearly three decades, and presents the future challenges, goals and their impact on altimetric derived ocean sciences.
Testing and evaluation of tactical electro-optical sensors
NASA Astrophysics Data System (ADS)
Middlebrook, Christopher T.; Smith, John G.
2002-07-01
As integrated electro-optical sensor payloads (multi-sensors) comprised of infrared imagers, visible imagers, and lasers advance in performance, the tests and testing methods must also advance in order to fully evaluate them. Future operational requirements will require integrated sensor payloads to perform missions at longer ranges and with increased targeting accuracy. To meet these requirements, sensors will need advanced imaging algorithms, advanced tracking capability, high-powered lasers, and high-resolution imagers. To meet the U.S. Navy's testing requirements for such multi-sensors, the test and evaluation group in the Night Vision and Chemical Biological Warfare Department at NAVSEA Crane is developing automated testing methods and improved tests to evaluate imaging algorithms, and is procuring advanced testing hardware to measure high-resolution imagers and the line-of-sight stabilization of targeting systems. This paper addresses: descriptions of the multi-sensor payloads tested, testing methods used and under development, and the different types of testing hardware and specific payload tests that are being developed and used at NAVSEA Crane.
Brigham, John C.; Aquino, Wilkins; Aguilo, Miguel A.; Diamessis, Peter J.
2010-01-01
An approach for efficient and accurate finite element analysis of harmonically excited soft solids using high-order spectral finite elements is presented and evaluated. The Helmholtz-type equations used to model such systems suffer from additional numerical error known as pollution when the excitation frequency becomes high relative to the stiffness (i.e. at high wave numbers), which is the case, for example, for soft tissues subject to ultrasound excitations. The use of high-order polynomial elements allows for a reduction in this pollution error, but requires additional consideration to counteract Runge's phenomenon and/or poor linear system conditioning, which has led to the use of spectral element approaches. This work examines in detail the computational benefits and practical applicability of high-order spectral elements for such problems. The spectral elements examined are tensor product elements (i.e. quad or brick elements) of high-order Lagrangian polynomials with non-uniformly distributed Gauss-Lobatto-Legendre nodal points. A shear plane wave example is presented to show the dependence of the accuracy and computational expense of high-order elements on wave number. Then, a convergence study for a viscoelastic acoustic-structure interaction finite element model of an actual ultrasound-driven vibroacoustic experiment is shown. The number of degrees of freedom required for a given accuracy level was found to consistently decrease with increasing element order. However, the computationally optimal element order was found to strongly depend on the wave number. PMID:21461402
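The non-uniform Gauss-Lobatto-Legendre nodes mentioned above can be generated as follows (a generic construction, not the authors' code): they are the endpoints ±1 plus the roots of the derivative of the Legendre polynomial of the element order.

```python
# Sketch: Gauss-Lobatto-Legendre nodal points for a high-order spectral
# element. Assumption: generic construction on the reference interval [-1, 1].
import numpy as np
from numpy.polynomial import legendre

def gll_nodes(order):
    """Endpoints +/-1 plus the roots of P_order'(x)."""
    coeffs = np.zeros(order + 1)
    coeffs[-1] = 1.0                       # Legendre series for P_order
    interior = np.sort(legendre.legroots(legendre.legder(coeffs)))
    return np.concatenate(([-1.0], interior, [1.0]))

nodes = gll_nodes(4)                       # 5 non-uniform nodes for order 4
print(np.round(nodes, 6))
```

Clustering the nodes toward the element edges in this way is what suppresses Runge's phenomenon for high-order Lagrangian interpolation.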
Contributions for the next generation of 3D metal printing machines
NASA Astrophysics Data System (ADS)
Pereira, M.; Thombansen, U.
2015-03-01
3D metal printing processes are key technologies for new manufacturing requirements: personalization and customization demand small-lot production with high design complexity and high flexibility. The main challenges for these processes are increasing printing volumes while maintaining the relative accuracy level and reducing the global manufacturing time. Through a review of current technologies and of solutions proposed in global patents, new design solutions for 3D metal printing machines can be suggested. This paper surveys current technologies and trends in SLM and suggests some design approaches to overcome these challenges. As the SLM process is based on laser scanning, an increase in printing volume requires moving the scanner over the work surface with motion systems if printing accuracy is to be kept constant. This approach, however, does not reduce manufacturing time, as a single laser source remains responsible for building the entire work piece. Given the technology limits of galvo-based laser scanning systems, the most obvious solution consists in using multiple beam delivery systems in series, in parallel, or both. Another concern is the weight of large work pieces. A new powder recoater can control the layer thickness and uniformity and eliminate or diminish fumes. To improve global accuracy, a pair of high-frequency piezoelectric actuators can help position the laser beam. Implementing these suggestions can improve SLM productivity; to do so, research is needed in areas related to design, control, software, and process fundamentals.
Maas, E T; Juch, J N S; Ostelo, R W J G; Groeneweg, J G; Kallewaard, J W; Koes, B W; Verhagen, A P; Huygen, F J P M; van Tulder, M W
2017-03-01
Patient history and physical examination are frequently used procedures to diagnose chronic low back pain (CLBP) originating from the facet joints, although the diagnostic accuracy is controversial. The aim of this systematic review is to determine the diagnostic accuracy of patient history and/or physical examination to identify CLBP originating from the facet joints using diagnostic blocks as reference standard. We searched MEDLINE, EMBASE, CINAHL, Web of Science and the Cochrane Collaboration database from inception until June 2016. Two review authors independently selected studies for inclusion, extracted data and assessed the risk of bias. We calculated sensitivity and specificity values, with 95% confidence intervals (95% CI). Twelve studies were included, in which 129 combinations of index tests and reference standards were presented. Most of these index tests have only been evaluated in single studies with a high risk of bias. Four studies evaluated the diagnostic accuracy of the Revel's criteria combination. Because of the clinical heterogeneity, results were not pooled. The published sensitivities ranged from 0.11 (95% CI 0.02-0.29) to 1.00 (95% CI 0.75-1.00), and the specificities ranged from 0.66 (95% CI 0.46-0.82) to 0.91 (95% CI 0.83-0.96). Due to clinical heterogeneity, the evidence for the diagnostic accuracy of patient history and/or physical examination to identify facet joint pain is inconclusive. Patient history and physical examination cannot be used to limit the need for a diagnostic block. The validity of the diagnostic facet joint block should be studied, and high-quality studies are required to confirm the results of single studies. © 2016 European Pain Federation - EFIC®.
A new method to obtain ground control points based on SRTM data
NASA Astrophysics Data System (ADS)
Wang, Pu; An, Wei; Deng, Xin-pu; Zhang, Xi
2013-09-01
GCPs are widely used in remote-sensing image registration and geometric correction. Normally, DRG and DOM products are the major data sources from which GCPs are extracted, but high-accuracy DRG and DOM products are usually costly to obtain, and the free ones come without any accuracy guarantee. To balance cost and accuracy, this paper proposes a method for extracting GCPs from SRTM data. The method consists of artificial assistance, binarization, data resampling and reshaping. Artificial assistance identifies which parts of the SRTM data can serve as GCPs, such as islands or sharp coastlines. A binarization algorithm then retains the shape information of the region while excluding everything else. The binary data are resampled to the resolution required by the specific application. Finally, the data are reshaped according to the satellite imaging geometry to obtain usable GCPs. The proposed method has three advantages. First, it is easy to implement, and unlike DRG or DOM data, which are expensive, SRTM data are freely accessible without restrictions. Second, SRTM data have a producer-guaranteed accuracy of about 90 m, so the GCPs derived from them are also of high quality. Finally, since SRTM data cover nearly all land surfaces between latitudes -60° and +60°, GCPs produced by the method can cover most important regions of the world. The method is suited to meteorological satellite imagery and similar applications with relatively low accuracy requirements. Extensive simulation tests show the method to be convenient and effective.
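The binarization and resampling steps can be sketched as follows; a toy elevation array stands in for a real SRTM tile (which is 1201x1201 samples at roughly 90 m posting), and the function names are illustrative:

```python
# Sketch of the proposed GCP pipeline: binarize an SRTM tile into
# land/water and resample the mask to a coarser grid.
# Assumption: toy 4x4 elevation array; thresholds and names are illustrative.
import numpy as np

def binarize(elev, sea_level=0.0):
    """1 = land, 0 = water; only the coastline shape is retained."""
    return (elev > sea_level).astype(np.uint8)

def resample(mask, factor):
    """Block-average, then re-threshold to the target resolution."""
    h, w = mask.shape
    blocks = mask[:h - h % factor, :w - w % factor]
    blocks = blocks.reshape(h // factor, factor, w // factor, factor)
    return (blocks.mean(axis=(1, 3)) >= 0.5).astype(np.uint8)

elev = np.array([[-5, -2, 3,  8],
                 [-4,  1, 6, 12],
                 [-3, -1, 4,  9],
                 [-6, -2, 2,  7]], dtype=float)   # a toy coastline edge
mask = binarize(elev)
coarse = resample(mask, 2)
print(mask)
print(coarse)
```

The reshaping step (mapping the mask into the target satellite's imaging geometry) depends on the sensor model and is not sketched here.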
Teachers' Judgement Accuracy Concerning CEFR Levels of Prospective University Students
ERIC Educational Resources Information Center
Fleckenstein, Johanna; Leucht, Michael; Köller, Olaf
2018-01-01
Most English-medium programs at European universities require prospective students to take standardised tests for English as a foreign language (EFL) to be admitted. However, there are contexts in which individual teachers' judgements serve the same function, thus having high-stakes consequences for the higher education entrance of their students.…
Exploring Proficiency-Based vs. Performance-Based Items with Elicited Imitation Assessment
ERIC Educational Resources Information Center
Cox, Troy L.; Bown, Jennifer; Burdis, Jacob
2015-01-01
This study investigates the effect of proficiency- vs. performance-based elicited imitation (EI) assessment. EI requires test-takers to repeat sentences in the target language. The accuracy at which test-takers are able to repeat sentences highly correlates with test-takers' language proficiency. However, in EI, the factors that render an item…
Accuracy of airspeed measurements and flight calibration procedures
NASA Technical Reports Server (NTRS)
Huston, Wilber B
1948-01-01
The sources of error that may enter into the measurement of airspeed by pitot-static methods are reviewed in detail together with methods of flight calibration of airspeed installations. Special attention is given to the problem of accurate measurements of airspeed under conditions of high speed and maneuverability required of military airplanes. (author)
A research of a high precision multichannel data acquisition system
NASA Astrophysics Data System (ADS)
Zhong, Ling-na; Tang, Xiao-ping; Yan, Wei
2013-08-01
The output signals of the focusing system in lithography are analog. To convert these analog signals into digital ones, which are more flexible and stable to process, a suitable data acquisition system is required; the resolution of data acquisition, to some extent, determines the accuracy of focusing. In this article, we first compared the performance of the various analog-to-digital converters (ADCs) currently available on the market. Combining the specific requirements (sampling frequency, conversion accuracy, number of channels, etc.) with the characteristics (polarization, amplitude range, etc.) of the analog signals, we selected the ADC model to be used as the core chip in our hardware design. On this basis, we chose the other chips needed in the hardware circuit to match the ADC, yielding the overall hardware design. The data acquisition system was validated through experiments, which demonstrated that it can effectively realize high-resolution conversion of the multi-channel analog signals and give accurate focusing information in lithography.
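Two textbook figures of merit that commonly enter such ADC comparisons can be computed as below; these are generic formulas, not specifications from the article, which does not disclose its chosen part:

```python
# Sketch: quick figures of merit when short-listing an ADC.
# Assumption: ideal converter; real parts are limited by noise and ENOB.
def lsb_size(vref, bits):
    """Voltage step of one least-significant bit."""
    return vref / (2 ** bits)

def ideal_snr_db(bits):
    """Ideal quantization-limited SNR for an N-bit ADC (6.02N + 1.76 dB)."""
    return 6.02 * bits + 1.76

# e.g. a hypothetical 16-bit converter with a 5 V reference:
print(round(lsb_size(5.0, 16) * 1e6, 2), "uV per LSB")
print(round(ideal_snr_db(16), 2), "dB ideal SNR")
```

Matching the LSB size against the amplitude range and noise floor of the focusing signals is the kind of trade-off the chip selection described above involves.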
Setford, Steven; Smith, Antony; McColl, David; Grady, Mike; Koria, Krisna; Cameron, Hilary
2015-01-01
Assess laboratory and in-clinic performance of the OneTouch Select(®) Plus test system against ISO 15197:2013 standard for measurement of blood glucose. System performance assessed in laboratory against key patient, environmental and pharmacologic factors. User performance was assessed in clinic by system-naïve lay-users. Healthcare professionals assessed system accuracy on diabetes subjects in clinic. The system demonstrated high levels of performance, meeting ISO 15197:2013 requirements in laboratory testing (precision, linearity, hematocrit, temperature, humidity and altitude). System performance was tested against 28 interferents, with an adverse interfering effect only being recorded for pralidoxime iodide. Clinic user performance results fulfilled ISO 15197:2013 accuracy criteria. Subjects agreed that the color range indicator clearly showed if they were low, in-range or high and helped them better understand glucose results. The system evaluated is accurate and meets all ISO 15197:2013 requirements as per the tests described. The color range indicator helped subjects understand glucose results and supports patients in following healthcare professional recommendations on glucose targets.
Optical integration of SPO mirror modules in the ATHENA telescope
NASA Astrophysics Data System (ADS)
Valsecchi, G.; Marioni, F.; Bianucci, G.; Zocchi, F. E.; Gallieni, D.; Parodi, G.; Ottolini, M.; Collon, M.; Civitani, M.; Pareschi, G.; Spiga, D.; Bavdaz, M.; Wille, E.
2017-08-01
ATHENA (Advanced Telescope for High-ENergy Astrophysics) is the next high-energy astrophysical mission selected by the European Space Agency for launch in 2028. The X-ray telescope consists of 1062 silicon pore optics mirror modules with a target angular resolution of 5 arcsec. Each module must be integrated on a 3 m structure with an accuracy of 1.5 arcsec for alignment and assembly. This industrial and scientific team is developing the alignment and integration process of the SPO mirror modules based on ultra-violet imaging at the 12 m focal plane. This technique promises to meet the accuracy requirement while, at the same time, allowing arbitrary integration sequence and mirror module exchangeability. Moreover, it enables monitoring the telescope point spread function during the planned 3-year integration phase.
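A back-of-the-envelope check of what the 1.5 arcsec integration budget corresponds to at the 12 m focal plane can be done with a small-angle conversion (an illustrative estimate, not the project's formal error budget):

```python
# Sketch: converting an angular alignment error to a linear displacement
# at the focal plane. Assumption: small-angle approximation.
import math

ARCSEC_TO_RAD = math.pi / (180 * 3600)

def focal_plane_shift_um(angle_arcsec, focal_length_m):
    """Linear shift at the focal plane for a given angular error."""
    return angle_arcsec * ARCSEC_TO_RAD * focal_length_m * 1e6

# ATHENA's quoted 1.5 arcsec budget at the 12 m focal length:
print(round(focal_plane_shift_um(1.5, 12.0), 1), "micrometres")
```

That is, the 1.5 arcsec requirement corresponds to keeping each module's image centroid within roughly 90 micrometres at the ultra-violet focal-plane detector.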
Zhang, Le; Lawson, Ken; Yeung, Bernice; Wypych, Jette
2015-01-06
A purity method based on capillary zone electrophoresis (CZE) has been developed for the separation of isoforms of a highly glycosylated protein. The separation was found to be driven by the number of sialic acids attached to each isoform. The method has been characterized using orthogonal assays and shown to have excellent specificity, precision and accuracy. We have demonstrated the CZE method is a useful in-process assay to support cell culture and purification development of this glycoprotein. Compared to isoelectric focusing (IEF), the CZE method provides more quantitative results and higher sample throughput with excellent accuracy, qualities that are required for process development. In addition, the CZE method has been applied in the stability testing of purified glycoprotein samples.
NASA Technical Reports Server (NTRS)
Mareboyana, Manohar; Le Moigne-Stewart, Jacqueline; Bennett, Jerome
2016-01-01
In this paper, we demonstrate a simple algorithm that projects low-resolution (LR) images differing in subpixel shifts onto a high-resolution (HR), also called super-resolution (SR), grid. The algorithm is very effective in accuracy as well as time efficiency. A number of spatial interpolation techniques used in the projection, such as nearest neighbor, inverse-distance-weighted averages, and Radial Basis Functions (RBF), yield comparable results. For best accuracy, reconstructing an SR image by a factor of two requires four LR images differing in four independent subpixel shifts. The algorithm has two steps: (i) registration of the low-resolution images, and (ii) shifting the low-resolution images to align with the reference image and projecting them onto the high-resolution grid, based on the shifts of each low-resolution image, using different interpolation techniques. Experiments are conducted by simulating low-resolution images through subpixel shifting and subsampling of an original high-resolution image, and then reconstructing the high-resolution image from the simulated low-resolution images. Reconstruction accuracy is compared using the mean-squared-error measure between the original high-resolution image and the reconstructed image. The algorithm was tested on remote sensing images and found to outperform previously proposed techniques such as the Iterative Back Projection (IBP), Maximum Likelihood (ML), and Maximum a posteriori (MAP) algorithms. The algorithm is robust and is not overly sensitive to registration inaccuracies.
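The projection step can be illustrated in one dimension, where a factor-of-two grid needs two shifted LR signals (in 2-D it needs four, as stated above). Integer HR-grid shifts and simple averaging stand in for the paper's 2-D registration and interpolation choices:

```python
# Sketch of the shift-and-project step: place samples from registered LR
# images onto a 2x HR grid. Assumptions: 1-D toy signal, shifts already
# known in HR-grid units, simple averaging at collisions.
import numpy as np

def project_to_hr(lr_images, shifts, factor=2):
    """Place each LR sample at its subpixel position on the HR grid."""
    n = len(lr_images[0]) * factor
    acc = np.zeros(n)
    cnt = np.zeros(n)
    for img, s in zip(lr_images, shifts):
        for i, v in enumerate(img):
            pos = i * factor + s           # s in HR-grid units (0..factor-1)
            acc[pos] += v
            cnt[pos] += 1
    cnt[cnt == 0] = 1                      # leave empty cells at zero
    return acc / cnt

hr_truth = np.arange(8, dtype=float)       # hypothetical HR signal
lr0, lr1 = hr_truth[0::2], hr_truth[1::2]  # two LR images, shifted by 1 HR px
sr = project_to_hr([lr0, lr1], shifts=[0, 1])
print(sr)  # → [0. 1. 2. 3. 4. 5. 6. 7.]
```

With non-integer shifts, the scattered samples would instead be interpolated onto the HR grid by nearest-neighbor, inverse-distance, or RBF weighting, as the abstract lists.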
High-Resolution Surface Reconstruction from Imagery for Close Range Cultural Heritage Applications
NASA Astrophysics Data System (ADS)
Wenzel, K.; Abdel-Wahab, M.; Cefalu, A.; Fritsch, D.
2012-07-01
The recording of high resolution point clouds with sub-mm resolution is a demanding and cost-intensive task, especially with current equipment such as handheld laser scanners. We present an image-based approach in which techniques of image matching and dense surface reconstruction are combined with a compact and affordable rig of off-the-shelf industry cameras. Such cameras provide high spatial resolution with low radiometric noise, which enables a one-shot solution and thus efficient data acquisition while satisfying high accuracy requirements. However, the largest drawback of image-based solutions is often the acquisition of surfaces with low texture, where the image matching process might fail. Thus, an additional structured light projector is employed, represented here by the pseudo-random pattern projector of the Microsoft Kinect. Its strong infrared laser projects speckles of different sizes. By using dense image matching techniques on the acquired images, a 3D point can be derived for almost every pixel. The use of multiple cameras enables the acquisition of a high resolution point cloud with high accuracy for each shot. For the proposed system, up to 3.5 million 3D points with sub-mm accuracy can be derived per shot. The registration of multiple shots is performed by Structure and Motion reconstruction techniques, where feature points are used to derive the camera positions and rotations automatically without initial information.
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar
2017-01-01
Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions. Although for certain complex geometries the use of high-aspect-ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where simplex elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in using such high-aspect-ratio simplex elements is formulated using two different approaches and presented here. Drawing insights from the analysis, a potential solution to avoid that pitfall is also provided as part of this work. Furthermore, numerical simulations of practical viscous problems involving high-Reynolds-number flows showcase how the gradient evaluation procedures of the CESE framework can be used to produce accurate and stable results on such high-aspect-ratio simplex meshes.
The utility of low-density genotyping for imputation in the Thoroughbred horse
2014-01-01
Background Despite the dramatic reduction in the cost of high-density genotyping that has occurred over the last decade, it remains one of the limiting factors for obtaining the large datasets required for genomic studies of disease in the horse. In this study, we investigated the potential for low-density genotyping and subsequent imputation to address this problem. Results Using the haplotype phasing and imputation program BEAGLE, it is possible to impute genotypes from low- to high-density (50K) in the Thoroughbred horse with reasonable to high accuracy. Analysis of the sources of variation in imputation accuracy revealed dependence both on the minor allele frequency of the single nucleotide polymorphisms (SNPs) being imputed and on the underlying linkage disequilibrium structure. Whereas equidistant spacing of the SNPs on the low-density panel worked well, optimising SNP selection to increase their minor allele frequency was advantageous, even when the panel was subsequently used in a population of different geographical origin. Replacing base pair position with linkage disequilibrium map distance reduced the variation in imputation accuracy across SNPs. Whereas a 1K SNP panel was generally sufficient to ensure that more than 80% of genotypes were correctly imputed, other studies suggest that a 2K to 3K panel is more efficient to minimize the subsequent loss of accuracy in genomic prediction analyses. The relationship between accuracy and genotyping costs for the different low-density panels suggests that a 2K SNP panel would represent good value for money. Conclusions Low-density genotyping with a 2K SNP panel followed by imputation provides a compromise between cost and accuracy that could promote more widespread genotyping, and hence the use of genomic information, in horses.
In addition to offering a low cost alternative to high-density genotyping, imputation provides a means to combine datasets from different genotyping platforms, which is becoming necessary since researchers are starting to use the recently developed equine 70K SNP chip. However, more work is needed to evaluate the impact of between-breed differences on imputation accuracy. PMID:24495673
Mapping Gnss Restricted Environments with a Drone Tandem and Indirect Position Control
NASA Astrophysics Data System (ADS)
Cledat, E.; Cucci, D. A.
2017-08-01
The problem of autonomously mapping highly cluttered environments, such as urban and natural canyons, is intractable with current UAV technology. The reason lies in the absence or unreliability of GNSS signals due to partial sky occlusion or multi-path effects. High quality carrier-phase observations are also required in efficient mapping paradigms, such as Assisted Aerial Triangulation, to achieve high ground accuracy without the need for dense networks of ground control points. In this work we consider a drone tandem in which the first drone flies outside the canyon, where the GNSS constellation is ideal, visually tracks the second drone and provides indirect position control for it. This enables both autonomous guidance and accurate mapping of GNSS-restricted environments without the need for ground control points. We address the technical feasibility of this concept based on preliminary real-world experiments in comparable conditions, and we perform a mapping accuracy prediction based on a simulation scenario.
Kim, Heejun; Bian, Jiantao; Mostafa, Javed; Jonnalagadda, Siddhartha; Del Fiol, Guilherme
2016-01-01
Motivation: Clinicians need up-to-date evidence from high quality clinical trials to support clinical decisions. However, applying evidence from the primary literature requires significant effort. Objective: To examine the feasibility of automatically extracting key clinical trial information from ClinicalTrials.gov. Methods: We assessed the coverage of ClinicalTrials.gov for high quality clinical studies that are indexed in PubMed. Using 140 random ClinicalTrials.gov records, we developed and tested rules for the automatic extraction of key information. Results: The rate of high quality clinical trial registration in ClinicalTrials.gov increased from 0.2% in 2005 to 17% in 2015. Trials reporting results increased from 3% in 2005 to 19% in 2015. The accuracy of the automatic extraction algorithm for 10 trial attributes was 90% on average. Future research is needed to improve the algorithm accuracy and to design information displays to optimally present trial information to clinicians.
Misawa, Masashi; Kudo, Shin-Ei; Mori, Yuichi; Takeda, Kenichi; Maeda, Yasuharu; Kataoka, Shinichi; Nakamura, Hiroki; Kudo, Toyoki; Wakamura, Kunihiko; Hayashi, Takemasa; Katagiri, Atsushi; Baba, Toshiyuki; Ishida, Fumio; Inoue, Haruhiro; Nimura, Yukitaka; Oda, Msahiro; Mori, Kensaku
2017-05-01
Real-time characterization of colorectal lesions during colonoscopy is important for reducing medical costs, given that a pathological diagnosis can be omitted if the accuracy of the diagnostic modality is sufficiently high. However, it is sometimes difficult for community-based gastroenterologists to achieve the required level of diagnostic accuracy. In this regard, we developed a computer-aided diagnosis (CAD) system based on endocytoscopy (EC) to evaluate cellular, glandular, and vessel structure atypia in vivo. The purpose of this study was to compare the diagnostic ability and efficacy of this CAD system with the performances of human expert and trainee endoscopists. We developed a CAD system based on EC with narrow-band imaging that allowed microvascular evaluation without dye (ECV-CAD). The CAD algorithm was programmed based on texture analysis and provided a two-class diagnosis of neoplastic or non-neoplastic, with probabilities. We validated the diagnostic ability of the ECV-CAD system using 173 randomly selected EC images (49 non-neoplasms, 124 neoplasms). The images were evaluated by the CAD and by four expert endoscopists and three trainees. The diagnostic accuracies for distinguishing between neoplasms and non-neoplasms were calculated. ECV-CAD had higher overall diagnostic accuracy than the trainees (87.8% vs 63.4%; [Formula: see text]), but similar to that of the experts (87.8% vs 84.2%; [Formula: see text]). With regard to high-confidence cases, the overall accuracy of ECV-CAD was also higher than that of the trainees (93.5% vs 71.7%; [Formula: see text]) and comparable to that of the experts (93.5% vs 90.8%; [Formula: see text]). ECV-CAD showed better diagnostic accuracy than trainee endoscopists, comparable to that of experts. ECV-CAD could thus be a powerful decision-making tool for less-experienced endoscopists.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lieu, Richard
A hierarchy of statistics of increasing sophistication and accuracy is proposed to exploit an interesting and fundamental arithmetic structure in the photon bunching noise of incoherent light of large photon occupation number, with the purpose of suppressing the noise and rendering a more reliable and unbiased measurement of the light intensity. The method does not require any new hardware; rather, it operates at the software level with the help of high-precision computers to reprocess the intensity time series of the incident light to create a new series with smaller bunching noise coherence length. The ultimate accuracy improvement of this method of flux measurement is limited by the timing resolution of the detector and the photon occupation number of the beam (the higher the photon number, the better the performance). The principal application is accuracy improvement in the signal-limited bolometric flux measurement of a radio source.
An effective algorithm for calculating the Chandrasekhar function
NASA Astrophysics Data System (ADS)
Jablonski, A.
2012-08-01
Numerical values of the Chandrasekhar function are needed with high accuracy in evaluations of theoretical models describing electron transport in condensed matter. An algorithm for such calculations should be as fast as possible and also accurate; e.g., an accuracy of 10 decimal digits is needed for some applications. Two of the integral representations of the Chandrasekhar function are prospective for constructing such an algorithm, but suitable transformations are needed to obtain a rapidly converging quadrature. A mixed algorithm is proposed in which the Chandrasekhar function is calculated from two algorithms, depending on the value of one of the arguments. Catalogue identifier: AEMC_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEMC_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 567 No. of bytes in distributed program, including test data, etc.: 4444 Distribution format: tar.gz Programming language: Fortran 90 Computer: Any computer with a Fortran 90 compiler Operating system: Linux, Windows 7, Windows XP RAM: 0.6 Mb Classification: 2.4, 7.2 Nature of problem: An attempt has been made to develop a subroutine that calculates the Chandrasekhar function with high accuracy, of at least 10 decimal places. Simultaneously, this subroutine should be very fast. Both requirements stem from the theory of electron transport in condensed matter. Solution method: Two algorithms were developed, each based on a different integral representation of the Chandrasekhar function. The final algorithm is obtained by combining these two algorithms and selecting the ranges of the argument ω in which each performs fastest.
Restrictions: Two input parameters of the Chandrasekhar function, x and ω (notation used in the code), are restricted to the range 0⩽x⩽1 and 0⩽ω⩽1, which is sufficient in numerous applications. Unusual features: The program uses the Romberg quadrature for integration. This quadrature is applicable to integrands that satisfy several requirements (the integrand does not vary rapidly and does not change sign in the integration interval; furthermore, the integrand is finite at the endpoints). Consequently, the analyzed integrands were transformed so that these requirements were satisfied. In effect, one can conveniently control the accuracy of integration. Although the desired fractional accuracy was set at 10^-10, the obtained accuracy of the Chandrasekhar function was much higher, typically 13 decimal places. Running time: Between 0.7 and 5 milliseconds for one pair of arguments of the Chandrasekhar function.
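For readers who want a quick reference implementation, the Chandrasekhar H-function for isotropic scattering can also be computed by fixed-point iteration on its standard nonlinear integral identity. The Python sketch below is unrelated to the Fortran 90 code distributed with the paper, and is far slower and less accurate than the mixed algorithm it describes; it only illustrates the underlying function:

```python
import numpy as np

def chandrasekhar_H(x, omega, n_nodes=64, tol=1e-12, max_iter=1000):
    """Chandrasekhar H-function for isotropic scattering, by fixed-point
    iteration on the exact identity
        1/H(x) = sqrt(1 - omega) + (omega/2) * int_0^1 mu*H(mu)/(x + mu) dmu.
    A simple reference scheme, not the paper's mixed Romberg algorithm."""
    # Gauss-Legendre nodes/weights mapped from [-1, 1] to [0, 1]
    nodes, weights = np.polynomial.legendre.leggauss(n_nodes)
    mu = 0.5 * (nodes + 1.0)
    w = 0.5 * weights
    h = np.ones_like(mu)                      # initial guess H = 1
    for _ in range(max_iter):
        k = w * mu * h                        # quadrature kernel values
        integral = (omega / 2.0) * (k[None, :] / (mu[:, None] + mu[None, :])).sum(axis=1)
        h_new = 1.0 / (np.sqrt(1.0 - omega) + integral)
        if np.max(np.abs(h_new - h)) < tol:
            h = h_new
            break
        h = h_new
    # evaluate at the requested argument using the converged grid values
    integral_x = (omega / 2.0) * np.sum(w * mu * h / (x + mu))
    return 1.0 / (np.sqrt(1.0 - omega) + integral_x)
```

The iteration converges for ω < 1; H(x, 0) = 1 exactly, and H increases with both arguments, which gives quick sanity checks.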
Design concepts and cost studies for magnetic suspension and balance systems. [wind tunnel apparatus
NASA Technical Reports Server (NTRS)
Bloom, H. L.
1982-01-01
The application of superconducting magnets for suspension and balance of wind tunnel models was studied. Conceptual designs are presented for magnetic suspension and balance system (MSBS) configurations compatible with three high Reynolds number cases representing specified combinations of test conditions and model sizes. In general, the concepts met the initially specified performance requirements, such as duty cycle, force and moment levels, and model angular displacement and positioning accuracy, with nominal design requirements for support subsystems. Other performance requirements, such as forced model sinusoidal oscillations and control force magnitude and frequency, were modified so as to alleviate the magnitude of magnet, power, and cryogenic design requirements.
NASA Astrophysics Data System (ADS)
Malekabadi, Ali; Paoloni, Claudio
2016-09-01
A microfabrication process based on UV LIGA (German acronym for lithography, electroplating and molding) is proposed for the fabrication of relatively high aspect ratio sub-terahertz (100-1000 GHz) metal waveguides, to be used as a slow wave structure in sub-THz vacuum electron devices. The high accuracy and tight tolerances required to properly support frequencies in the sub-THz range can only be achieved by a stable process with full parameter control. The proposed process, based on SU-8 photoresist, has been developed to satisfy high planar surface requirements for metal sub-THz waveguides. It will be demonstrated that, for a given thickness, it is more effective to stack a number of layers of SU-8 with lower thickness rather than to use a single thick layer obtained at a lower spin rate. The multiple layer approach provides the planarity and the surface quality required for electroforming of ground planes or assembly surfaces and for assuring low ohmic losses of waveguides. A systematic procedure is provided to calculate soft-bake and post-bake times to produce highly homogeneous SU-8 multiple-layer coatings as molds for very high quality metal waveguides. A double corrugated waveguide designed for a 0.3 THz operating frequency, to be used in vacuum electron devices, was fabricated as a test structure. The proposed process based on UV LIGA will enable low cost production of high accuracy sub-THz 3D waveguides. This is fundamental for producing a new generation of affordable sub-THz vacuum electron devices, to fill the technological gap that still prevents a wide diffusion of numerous applications based on THz radiation.
Design of PID temperature control system based on STM32
NASA Astrophysics Data System (ADS)
Zhang, Jianxin; Li, Hailin; Ma, Kai; Xue, Liang; Han, Bianhua; Dong, Yuemeng; Tan, Yue; Gu, Chengru
2018-03-01
A rapid, high-accuracy temperature control system was designed using a proportional-integral-derivative (PID) control algorithm with an STM32 as the micro-controller unit (MCU). The temperature control system can be applied in fields with high requirements on the response speed and accuracy of temperature control. The temperature acquisition circuit adopted a Pt1000 resistance thermometer as the temperature sensor. Through this acquisition circuit, the measured temperature signal could be converted into a voltage signal and transmitted to the MCU. A TLP521-1 photoelectric coupler was matched with a BD237 power transistor to drive the thermoelectric cooler (TEC) in the FTA951 module. The effective electric power of the TEC was controlled by pulse width modulation (PWM) signals generated by the MCU. The PWM signal parameters were adjusted in real time by the PID algorithm according to the difference between the measured temperature and the set temperature. An upper computer was used to input the set temperature and monitor the system running state via a serial port. The application experiment results show that the temperature control system features a simple structure, rapid response, good stability and high temperature control accuracy, with an error of less than ±0.5°C.
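The control loop described, a PID computation that turns the temperature error into a PWM duty cycle, can be illustrated with a minimal discrete sketch. This is Python rather than the STM32 firmware, and the gains, sample time and 0..1 duty range are illustrative assumptions:

```python
class PID:
    """Minimal discrete PID controller emitting a clamped PWM duty cycle.
    Gains, sample time and the 0..1 duty range are illustrative assumptions,
    not the parameters of the STM32 system described above."""

    def __init__(self, kp, ki, kd, dt, out_min=0.0, out_max=1.0):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        """One control step: returns the PWM duty cycle in [out_min, out_max]."""
        error = setpoint - measured
        self.integral += error * self.dt
        # no derivative kick on the first sample
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / self.dt
        self.prev_error = error
        duty = self.kp * error + self.ki * self.integral + self.kd * derivative
        # clamp to the PWM range (a real controller would also add anti-windup)
        return max(self.out_min, min(self.out_max, duty))
```

On a microcontroller the same `update` logic would run in a timer interrupt at the sample period `dt`, with the returned duty written to the PWM compare register.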
Validation of a Low-Thrust Mission Design Tool Using Operational Navigation Software
NASA Technical Reports Server (NTRS)
Englander, Jacob A.; Knittel, Jeremy M.; Williams, Ken; Stanbridge, Dale; Ellison, Donald H.
2017-01-01
Design of flight trajectories for missions employing solar electric propulsion requires a suitably high-fidelity design tool. In this work, the Evolutionary Mission Trajectory Generator (EMTG) is presented as a medium-high fidelity design tool that is suitable for mission proposals. EMTG is validated against the high-heritage deep-space navigation tool MIRAGE, demonstrating both the accuracy of EMTG's model and an operational mission design and navigation procedure using both tools. The validation is performed using a benchmark mission to the Jupiter Trojans.
Automatic Fault Characterization via Abnormality-Enhanced Classification
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bronevetsky, G; Laguna, I; de Supinski, B R
Enterprise and high-performance computing systems are growing extremely large and complex, employing hundreds to hundreds of thousands of processors and software/hardware stacks built by many people across many organizations. As the growing scale of these machines increases the frequency of faults, system complexity makes these faults difficult to detect and to diagnose. Current system management techniques, which focus primarily on efficient data access and query mechanisms, require system administrators to examine the behavior of various system services manually. Growing system complexity is making this manual process unmanageable: administrators require more effective management tools that can detect faults and help to identify their root causes. System administrators need timely notification when a fault is manifested that includes the type of fault, the time period in which it occurred and the processor on which it originated. Statistical modeling approaches can accurately characterize system behavior. However, the complex effects of system faults make these tools difficult to apply effectively. This paper investigates the application of classification and clustering algorithms to fault detection and characterization. We show experimentally that naively applying these methods achieves poor accuracy. Further, we design novel techniques that combine classification algorithms with information on the abnormality of application behavior to improve detection and characterization accuracy. Our experiments demonstrate that these techniques can detect and characterize faults with 65% accuracy, compared to just 5% accuracy for naive approaches.
NASA Technical Reports Server (NTRS)
Kanning, G.; Cicolani, L. S.; Schmidt, S. F.
1983-01-01
Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
Modeling of profilometry with laser focus sensors
NASA Astrophysics Data System (ADS)
Bischoff, Jörg; Manske, Eberhard; Baitinger, Henner
2011-05-01
Metrology is of paramount importance in submicron patterning. In particular, line width and overlay have to be measured very accurately. Appropriate metrology techniques are scanning electron microscopy and optical scatterometry. The latter is non-invasive, highly accurate and enables optical cross sections of layer stacks, but it requires periodic patterns. Scanning laser focus sensors are a viable alternative enabling the measurement of non-periodic features. Severe limitations are imposed by the diffraction limit, which determines the edge location accuracy. It will be shown that the accuracy can be greatly improved by means of rigorous modeling. To this end, a fully vectorial 2.5-dimensional model has been developed based on rigorous Maxwell solvers and combined with models for the scanning and various autofocus principles. The simulations are compared with experimental results. Moreover, the simulations are directly utilized to improve the edge location accuracy.
Modifying high-order aeroelastic math model of a jet transport using maximum likelihood estimation
NASA Technical Reports Server (NTRS)
Anissipour, Amir A.; Benson, Russell A.
1989-01-01
The design of control laws to damp flexible structural modes requires accurate math models. Unlike the design of control laws for rigid body motion (e.g., where robust control is used to compensate for modeling inaccuracies), structural mode damping usually employs narrow-band notch filters. In order to obtain the required accuracy in the math model, a maximum likelihood estimation technique is employed to improve the math model using flight data. Presented here are all phases of this methodology: (1) pre-flight analysis (i.e., optimal input signal design for flight test, sensor location determination, model reduction technique, etc.), (2) data collection and preprocessing, and (3) post-flight analysis (i.e., estimation technique and model verification). In addition, a discussion is presented of the software tools used and of the need for future study in this field.
Validation of enhanced kinect sensor based motion capturing for gait assessment
Müller, Björn; Ilg, Winfried; Giese, Martin A.
2017-01-01
Optical motion capturing systems are expensive and require substantial dedicated space to be set up. On the other hand, they provide unsurpassed accuracy and reliability. In many situations, however, flexibility is required and the motion capturing system can only be placed temporarily. The Microsoft Kinect v2 sensor is comparatively cheap and, with respect to gait analysis, promising results have been published. We here present a motion capturing system that is easy to set up, flexible with respect to the sensor locations and delivers high accuracy in gait parameters comparable to a gold standard motion capturing system (VICON). Further, we demonstrate that sensor setups which track the person from one side only are less accurate and should be replaced by two-sided setups. With respect to commonly analyzed gait parameters, especially step width, our system shows higher agreement with the VICON system than previous reports. PMID:28410413
Deep-space navigation applications of improved ground-based optical astrometry
NASA Technical Reports Server (NTRS)
Null, G. W.; Owen, W. M., Jr.; Synnott, S. P.
1992-01-01
Improvements in ground-based optical astrometry will eventually be required for navigation of interplanetary spacecraft when these spacecraft communicate at optical wavelengths. Although such spacecraft may be some years off, preliminary versions of the astrometric technology can also be used to obtain navigational improvements for the Galileo and Cassini missions. This article describes a technology-development and observational program to accomplish this, including a cooperative effort with U.S. Naval Observatory Flagstaff Station. For Galileo, Earth-based astrometry of Jupiter's Galilean satellites may improve their ephemeris accuracy by a factor of 3 to 6. This would reduce the requirements for onboard optical navigation pictures, so that more of the data transmission capability (currently limited by high-gain antenna deployment problems) can be used for science data. Also, observations of European Space Agency (ESA) Hipparcos stars with asteroid 243 Ida may provide significantly improved navigation accuracy for a planned August 1993 Galileo spacecraft encounter.
A Smart High Accuracy Silicon Piezoresistive Pressure Sensor Temperature Compensation System
Zhou, Guanwu; Zhao, Yulong; Guo, Fangfang; Xu, Wenju
2014-01-01
Theoretical analysis in this paper indicates that the accuracy of a silicon piezoresistive pressure sensor is mainly affected by thermal drift, and varies nonlinearly with the temperature. Here, a smart temperature compensation system to reduce its effect on accuracy is proposed. Firstly, an effective conditioning circuit for signal processing and data acquisition is designed. The hardware to implement the system is fabricated. Then, a program is developed in LabVIEW which incorporates an extreme learning machine (ELM) as the calibration algorithm for the pressure drift. The implementation of the algorithm was ported to a micro-control unit (MCU) after calibration on the computer. Practical pressure measurement experiments are carried out to verify the system's performance. The temperature compensation is solved in the interval from −40 to 85 °C. The compensated sensor is aimed at providing pressure measurement in oil-gas pipelines. Compared with other algorithms, ELM achieves higher accuracy and is more suitable for batch compensation because of its higher generalization and faster learning speed. The accuracy, linearity, zero temperature coefficient and sensitivity temperature coefficient of the tested sensor are 2.57% FS, 2.49% FS, 8.1 × 10−5/°C and 29.5 × 10−5/°C before compensation, and are improved to 0.13% FS, 0.15% FS, 1.17 × 10−5/°C and 2.1 × 10−5/°C, respectively, after compensation. The experimental results demonstrate that the proposed system is valid for the temperature compensation and high accuracy requirements of the sensor. PMID:25006998
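The ELM used for calibration has a particularly simple training procedure: the hidden-layer weights are drawn at random and only the output weights are solved for, in closed form, which is what makes its learning so fast. A generic sketch follows (illustrative, not the paper's calibration code; the network size and tanh activation are assumptions):

```python
import numpy as np

def elm_fit(X, y, n_hidden=50, seed=0):
    """Train an extreme learning machine: draw the hidden-layer weights at
    random, then solve for the output weights by linear least squares."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random hidden biases
    H = np.tanh(X @ W + b)                        # hidden-layer outputs
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # closed-form output weights
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta
```

In a compensation setting, X would hold the raw sensor output together with the measured temperature, and y the reference pressure; because training reduces to one least-squares solve, recalibrating a whole batch of sensors is cheap.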
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ellens, N; Farahani, K
2015-06-15
Purpose: MRI-guided focused ultrasound (MRgFUS) has many potential and realized applications including controlled heating and localized drug delivery. The development of many of these applications requires extensive preclinical work, much of it in small animal models. The goal of this study is to characterize the spatial targeting accuracy and reproducibility of a preclinical high field MRgFUS system for thermal ablation and drug delivery applications. Methods: The RK300 (FUS Instruments, Toronto, Canada) is a motorized, 2-axis FUS positioning system suitable for small bore (72 mm), high-field MRI systems. The accuracy of the system was assessed in three ways. First, the precision of the system was assessed by sonicating regular grids of 5 mm squares on polystyrene plates and comparing the resulting focal dimples to the intended pattern, thereby assessing the reproducibility and precision of the motion control alone. Second, the targeting accuracy was assessed by imaging a polystyrene plate with randomly drilled holes and replicating the hole pattern by sonicating the observed hole locations on intact polystyrene plates and comparing the results. Third, the practically realizable accuracy and precision were assessed by comparing the locations of transcranial, FUS-induced blood-brain-barrier disruption (BBBD) (observed through gadolinium enhancement) to the intended targets in a retrospective analysis of animals sonicated for other experiments. Results: The evenly-spaced grids indicated that the precision was 0.11 +/− 0.05 mm. When image guidance was included by targeting random locations, the accuracy was 0.5 +/− 0.2 mm. The effective accuracy in the four rodent brains assessed was 0.8 +/− 0.6 mm. In all cases, the error appeared normally distributed (p<0.05) in both orthogonal axes, though the left/right error was systematically greater than the superior/inferior error.
Conclusions: The targeting accuracy of this device is sub-millimeter, suitable for many preclinical applications including focused drug delivery and thermal therapy. Funding support provided by Philips Healthcare.
Accurate Radiometry from Space: An Essential Tool for Climate Studies
NASA Technical Reports Server (NTRS)
Fox, Nigel; Kaiser-Weiss, Andrea; Schmutz, Werner; Thome, Kurtis; Young, Dave; Wielicki, Bruce; Winkler, Rainer; Woolliams, Emma
2011-01-01
The Earth's climate is undoubtedly changing; however, the time scale, consequences and causal attribution remain the subject of significant debate and uncertainty. Detection of subtle indicators against a background of natural variability requires measurements over a time base of decades. This places severe demands on the instrumentation used, requiring measurements of sufficient accuracy and sensitivity to allow reliable judgements to be made decades apart. The International System of Units (SI) and the network of National Metrology Institutes were developed to address such requirements. However, ensuring and maintaining SI traceability of sufficient accuracy in instruments orbiting the Earth presents a significant new challenge to the metrology community. This paper highlights some key measurands and applications driving the uncertainty demands of the climate community in the solar reflective domain, e.g. solar irradiances and reflectances/radiances of the Earth. It discusses how meeting these uncertainties would facilitate significant improvement in the forecasting abilities of climate models. After discussing the current state of the art, it describes a new satellite mission, called TRUTHS, which enables, for the first time, high-accuracy SI traceability to be established in orbit. The direct use of a primary standard and replication of the terrestrial traceability chain extends the SI into space, in effect realizing a 'metrology laboratory in space'. Keywords: climate change; Earth observation; satellites; radiometry; solar irradiance
Turner, Karly M.; Peak, James; Burne, Thomas H. J.
2016-01-01
Neuropsychiatric research has utilized cognitive testing in rodents to improve our understanding of cognitive deficits and for preclinical drug development. However, more sophisticated cognitive tasks have not been as widely exploited due to low throughput and the extensive training time required. We developed a modified signal detection task (SDT) based on the growing body of literature aimed at improving cognitive testing in rodents. This study directly compares performance on the modified SDT with a traditional test for measuring attention, the 5-choice serial reaction time task (5CSRTT). Adult male Sprague-Dawley rats were trained on either the 5CSRTT or the SDT. Briefly, the 5CSRTT required rodents to pay attention to a spatial array of five apertures and respond with a nose poke when an aperture was illuminated. The SDT required the rat to attend to a light panel and respond either left or right to indicate the presence of a signal. In addition, modifications were made to the reward delivery, timing, control of body positioning, and the self-initiation of trials. It was found that less training time was required for the SDT, with both sessions to criteria and daily session duration significantly reduced. Rats performed with a high level of accuracy (>87%) on both tasks; however, omissions were far more frequent on the 5CSRTT. The signal duration was reduced on both tasks as a manipulation of task difficulty relevant to attention, and a similar pattern of decreasing accuracy was observed on both tasks. These results demonstrate some of the advantages of the SDT over the traditional 5CSRTT: higher throughput with reduced training time, fewer omission responses, and controlled body position at stimulus onset. In addition, rats performing the SDT showed comparably high levels of accuracy. These results highlight the differences and similarities between the 5CSRTT and a modified SDT as tools for assessing attention in preclinical animal models.
PMID:26834597
Calibration method for a large-scale structured light measurement system.
Wang, Peng; Wang, Jianmei; Xu, Jing; Guan, Yong; Zhang, Guanglie; Chen, Ken
2017-05-10
The structured light method is an effective non-contact measurement approach. The calibration greatly affects the measurement precision of structured light systems. To construct a large-scale structured light system with high accuracy, a large-scale and precise calibration gauge is always required, which leads to an increased cost. To this end, in this paper, a calibration method with a planar mirror is proposed to reduce the calibration gauge size and cost. An out-of-focus camera calibration method is also proposed to overcome the defocusing problem caused by the shortened distance during the calibration procedure. The experimental results verify the accuracy of the proposed calibration method.
Iterative Correction of Reference Nucleotides (iCORN) using second generation sequencing technology.
Otto, Thomas D; Sanders, Mandy; Berriman, Matthew; Newbold, Chris
2010-07-15
The accuracy of reference genomes is important for downstream analysis but a low error rate requires expensive manual interrogation of the sequence. Here, we describe a novel algorithm (Iterative Correction of Reference Nucleotides) that iteratively aligns deep coverage of short sequencing reads to correct errors in reference genome sequences and evaluate their accuracy. Using Plasmodium falciparum (81% A + T content) as an extreme example, we show that the algorithm is highly accurate and corrects over 2000 errors in the reference sequence. We give examples of its application to numerous other eukaryotic and prokaryotic genomes and suggest additional applications. The software is available at http://icorn.sourceforge.net
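The core idea — iteratively align reads to the reference, correct positions where the read consensus disagrees, and repeat until no further changes occur — can be sketched in a few lines. This is a toy illustration only: the alignment here is a naive best-offset placement, and all names are invented rather than taken from the iCORN tool itself.

```python
# Toy sketch of iterative reference correction in the spirit of iCORN:
# each pass compares every reference base against the majority base in the
# read pileup, corrects discrepancies, then re-aligns and repeats until
# convergence. Real iCORN uses a proper read mapper; the aligner below is
# a naive best-offset placement for illustration.
from collections import Counter

def pileup(reference, reads):
    """Place each read at its best-matching offset and count bases per position."""
    counts = [Counter() for _ in reference]
    for read in reads:
        offsets = range(len(reference) - len(read) + 1)
        best = min(offsets, key=lambda o: sum(a != b for a, b in
                                              zip(read, reference[o:o + len(read)])))
        for i, base in enumerate(read):
            counts[best + i][base] += 1
    return counts

def icorn_like(reference, reads, max_iters=10):
    ref = list(reference)
    for _ in range(max_iters):
        changed = False
        for i, c in enumerate(pileup(ref, reads)):
            if c:
                consensus = c.most_common(1)[0][0]
                if consensus != ref[i]:
                    ref[i] = consensus
                    changed = True
        if not changed:  # converged: reference agrees with the read consensus
            break
    return "".join(ref)

# A reference with one error at position 4 ('G' should be 'T'):
reads = ["ACGTT", "CGTTA", "GTTAC", "TTACG"]
print(icorn_like("ACGTGACG", reads))  # -> ACGTTACG
```

The iterative structure matters because correcting one error can change where subsequent reads align, exposing further errors on the next pass.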
Freckmann, Guido; Baumstark, Annette; Schmid, Christina; Pleus, Stefan; Link, Manuela; Haug, Cornelia
2014-02-01
Systems for self-monitoring of blood glucose (SMBG) have to provide accurate and reproducible blood glucose (BG) values in order to ensure adequate therapeutic decisions by people with diabetes. Twelve SMBG systems were compared in a standardized manner under controlled laboratory conditions: nine systems were available on the German market and were purchased from a local pharmacy, and three systems were obtained from the manufacturer (two systems were available on the U.S. market, and one system was not yet introduced to the German market). System accuracy was evaluated following DIN EN ISO (International Organization for Standardization) 15197:2003. In addition, measurement reproducibility was assessed following a modified TNO (Netherlands Organization for Applied Scientific Research) procedure. Comparison measurements were performed with either the glucose oxidase method (YSI 2300 STAT Plus™ glucose analyzer; YSI Life Sciences, Yellow Springs, OH) or the hexokinase method (cobas(®) c111; Roche Diagnostics GmbH, Mannheim, Germany) according to the manufacturer's measurement procedure. The 12 evaluated systems showed between 71.5% and 100% of the measurement results within the required system accuracy limits. Ten systems, with the evaluated test strip lot, fulfilled the minimum accuracy requirements specified by DIN EN ISO 15197:2003. In addition, accuracy limits of the recently published revision ISO 15197:2013 were applied and showed between 54.5% and 100% of the systems' measurement results within the required accuracy limits. Regarding measurement reproducibility, each of the 12 tested systems met the applied performance criteria. In summary, 83% of the systems, with the evaluated test strip lot, fulfilled the minimum system accuracy requirements of DIN EN ISO 15197:2003. Each of the tested systems showed acceptable measurement reproducibility. In order to ensure sufficient measurement quality of each distributed test strip lot, regular evaluations are required.
A Dynamic Precision Evaluation Method for the Star Sensor in the Stellar-Inertial Navigation System.
Lu, Jiazhen; Lei, Chaohua; Yang, Yanqiang
2017-06-28
Integrating the advantages of an INS (inertial navigation system) and the star sensor, the stellar-inertial navigation system has been used for a wide variety of applications. The star sensor is a high-precision attitude measurement instrument; therefore, determining how to validate its accuracy is critical to guaranteeing its practical precision. The dynamic precision evaluation of the star sensor is more difficult than a static precision evaluation because of dynamic reference values and other impacts. This paper proposes a dynamic precision verification method for the star sensor, with the aid of an inertial navigation device, to realize real-time attitude accuracy measurement. Based on the gold-standard reference generated by the star simulator, the altitude and azimuth angle errors of the star sensor are calculated as evaluation criteria. To diminish the impact of factors such as sensor drift and device errors, the innovative aspect of this method is to employ the static accuracy for comparison. If the dynamic results are as good as the static results, which have accuracy comparable to the single star sensor's precision, the practical precision of the star sensor is sufficiently high to meet the requirements of the system specification. The experiments demonstrate the feasibility and effectiveness of the proposed method.
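The evaluation criterion described above — altitude and azimuth angle errors between a reference and the sensed direction — can be sketched directly. This is a hedged illustration: the frames, conventions, and error magnitudes are assumed, not taken from the paper.

```python
# Sketch of the altitude/azimuth error criterion: given reference boresight
# directions (e.g., from a star simulator) and the star sensor's measured
# directions, compute altitude (elevation) and azimuth for each and take the
# differences as the error series. Coordinate conventions are assumptions.
import math

def alt_az(v):
    """Altitude and azimuth (degrees) of a direction vector; z is 'up'."""
    x, y, z = v
    r = math.sqrt(x * x + y * y + z * z)
    return math.degrees(math.asin(z / r)), math.degrees(math.atan2(y, x))

def pointing_errors(reference_vecs, measured_vecs):
    errs = []
    for ref_v, meas_v in zip(reference_vecs, measured_vecs):
        alt_r, az_r = alt_az(ref_v)
        alt_m, az_m = alt_az(meas_v)
        errs.append((alt_m - alt_r, az_m - az_r))
    return errs

ref = [(1.0, 0.0, 0.0)]
meas = [(1.0, 0.0005, 0.0005)]  # a small simulated pointing offset
errs = pointing_errors(ref, meas)
print(errs)  # roughly 0.029 degrees of error in each angle
```

In an actual evaluation the reference series would come from the star simulator or the inertial reference, and statistics (mean, standard deviation) over the error series would form the precision estimate.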
NASA Astrophysics Data System (ADS)
Blaser, S.; Nebiker, S.; Cavegn, S.
2017-05-01
Image-based mobile mapping systems enable the efficient acquisition of georeferenced image sequences, which can later be exploited in cloud-based 3D geoinformation services. In order to provide a 360° coverage with accurate 3D measuring capabilities, we present a novel 360° stereo panoramic camera configuration. By using two 360° panorama cameras tilted forward and backward in combination with conventional forward and backward looking stereo camera systems, we achieve a full 360° multi-stereo coverage. We furthermore developed a fully operational new mobile mapping system based on our proposed approach, which fulfils our high accuracy requirements. We successfully implemented a rigorous sensor and system calibration procedure, which allows calibrating all stereo systems with a superior accuracy compared to that of previous work. Our study delivered absolute 3D point accuracies in the range of 4 to 6 cm and relative accuracies of 3D distances in the range of 1 to 3 cm. These results were achieved in a challenging urban area. Furthermore, we automatically reconstructed a 3D city model of our study area by employing all captured and georeferenced mobile mapping imagery. The result is a highly detailed and almost complete 3D city model of the street environment.
Motion-sensor fusion-based gesture recognition and its VLSI architecture design for mobile devices
NASA Astrophysics Data System (ADS)
Zhu, Wenping; Liu, Leibo; Yin, Shouyi; Hu, Siqi; Tang, Eugene Y.; Wei, Shaojun
2014-05-01
With the rapid proliferation of smartphones and tablets, various embedded sensors are incorporated into these platforms to enable multimodal human-computer interfaces. Gesture recognition, as an intuitive interaction approach, has been extensively explored in the mobile computing community. However, most gesture recognition implementations to date are user-dependent and rely only on the accelerometer. In order to achieve competitive accuracy, users are required to hold the devices in a predefined manner during the operation. In this paper, a high-accuracy human gesture recognition system is proposed based on multiple motion sensor fusion. Furthermore, to reduce the energy overhead resulting from frequent sensor sampling and data processing, a highly energy-efficient VLSI architecture implemented on a Xilinx Virtex-5 FPGA board is also proposed. Compared with the pure software implementation, an approximately 45-fold speed-up is achieved while operating at 20 MHz. The experiments show that the average accuracy for 10 gestures reaches 93.98% for the user-independent case and 96.14% for the user-dependent case when subjects hold the device randomly while completing the specified gestures. Although a few percent lower than the best conventional result, this accuracy remains competitive and acceptable for practical usage. Most importantly, the proposed system allows users to hold the device randomly while performing the predefined gestures, which substantially enhances the user experience.
NASA Technical Reports Server (NTRS)
Ohring, G.; Wielicki, B.; Spencer, R.; Emery, B.; Datla, R.
2004-01-01
Measuring the small changes associated with long-term global climate change from space is a daunting task. To address these problems and recommend directions for improvements in satellite instrument calibration, some 75 scientists, including researchers who develop and analyze long-term data sets from satellites, experts in the field of satellite instrument calibration, and physicists working on state-of-the-art calibration sources and standards, met on November 12-14, 2002, to discuss the issues. The workshop defined the absolute accuracies and long-term stabilities of global climate data sets that are needed to detect expected trends, translated these data set accuracies and stabilities to required satellite instrument accuracies and stabilities, and evaluated the ability of current observing systems to meet these requirements. The workshop's recommendations include a set of basic axioms or overarching principles that must guide high quality climate observations in general, and a roadmap for improving satellite instrument characterization, calibration, inter-calibration, and associated activities to meet the challenge of measuring global climate change. It is also recommended that a follow-up workshop be conducted to discuss implementation of the roadmap developed at this workshop.
NASA Astrophysics Data System (ADS)
Buchari, M. A.; Mardiyanto, S.; Hendradjaya, B.
2018-03-01
Finding software defects as early as possible is the purpose of research on software defect prediction. Software defect prediction should not only state the existence of defects, but also provide a prioritized list of which modules require more intensive testing, so that test resources can be allocated efficiently. Learning to rank is one approach that can provide defect-module ranking data for the purposes of software testing. In this study, we propose a meta-heuristic chaotic Gaussian particle swarm optimization algorithm to improve the accuracy of the learning-to-rank approach to software defect prediction. We used 11 public benchmark data sets as experimental data. Our overall results demonstrate that the prediction models constructed using chaotic Gaussian particle swarm optimization achieve better accuracy on 5 data sets, tie on 5 data sets, and perform worse on 1 data set. Thus, we conclude that the application of chaotic Gaussian particle swarm optimization to the learning-to-rank approach can improve the accuracy of defect-module ranking on data sets that have high-dimensional features.
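A common construction for a "chaotic Gaussian" PSO replaces the uniform random coefficients of standard PSO with a logistic chaotic map and adds Gaussian perturbations to help particles escape local optima. The paper's exact formulation may well differ; the sketch below, minimizing a simple sphere function, only illustrates that general idea, and all parameter values are assumptions.

```python
# Minimal sketch of a chaotic Gaussian PSO: the r1/r2 coefficients of the
# standard velocity update are drawn from a logistic chaotic map instead of
# U(0,1), and a small Gaussian jitter is added to each velocity component.
import random

def chaotic_gaussian_pso(f, dim, n_particles=20, iters=200, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    z = 0.7  # state of the logistic chaotic map
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                z = 4.0 * z * (1.0 - z)  # logistic map in place of U(0,1)
                r1 = z
                z = 4.0 * z * (1.0 - z)
                r2 = z
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d])
                             + rng.gauss(0.0, 0.01))  # Gaussian jitter
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest_val[i], pbest[i] = val, pos[i][:]
                if val < gbest_val:
                    gbest_val, gbest = val, pos[i][:]
    return gbest, gbest_val

best, best_val = chaotic_gaussian_pso(lambda x: sum(v * v for v in x), dim=3)
print(best_val)  # typically very small for the sphere function
```

In the learning-to-rank setting, `f` would instead score a ranking model's parameters against a rank-quality metric on the training data rather than a test function.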
Shah, Sohil Atul
2017-01-01
Clustering is a fundamental procedure in the analysis of scientific data. It is used ubiquitously across the sciences. Despite decades of research, existing clustering algorithms have limited effectiveness in high dimensions and often require tuning parameters for different domains and datasets. We present a clustering algorithm that achieves high accuracy across multiple domains and scales efficiently to high dimensions and large datasets. The presented algorithm optimizes a smooth continuous objective, which is based on robust statistics and allows heavily mixed clusters to be untangled. The continuous nature of the objective also allows clustering to be integrated as a module in end-to-end feature learning pipelines. We demonstrate this by extending the algorithm to perform joint clustering and dimensionality reduction by efficiently optimizing a continuous global objective. The presented approach is evaluated on large datasets of faces, hand-written digits, objects, newswire articles, sensor readings from the Space Shuttle, and protein expression levels. Our method achieves high accuracy across all datasets, outperforming the best prior algorithm by a factor of 3 in average rank. PMID:28851838
A fuzzy pattern matching method based on graph kernel for lithography hotspot detection
NASA Astrophysics Data System (ADS)
Nitta, Izumi; Kanazawa, Yuzi; Ishida, Tsutomu; Banno, Koji
2017-03-01
In advanced technology nodes, lithography hotspot detection has become one of the most significant issues in design for manufacturability. Recently, machine-learning-based lithography hotspot detection has been widely investigated, but it involves a trade-off between detection accuracy and false alarms. To apply machine-learning-based techniques to the physical verification phase, designers need to minimize undetected hotspots to avoid yield degradation. They also need a ranking of known patterns similar to a detected hotspot in order to prioritize the layout patterns to be corrected. To achieve high detection accuracy and to prioritize detected hotspots, we propose a novel lithography hotspot detection method using Delaunay triangulation and graph-kernel-based machine learning. Delaunay triangulation extracts features of hotspot patterns in which polygons are located irregularly and close to one another, and the graph kernel expresses the inner structure of graphs. Additionally, our method provides a similarity measure between two patterns and creates a list of training patterns similar to a detected hotspot. Experimental results on the ICCAD 2012 benchmarks show that our method achieves high accuracy with an allowable range of false alarms. We also show the ranking of known patterns similar to a detected hotspot.
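To make the "graph kernel" ingredient concrete, here is a sketch of one standard choice, the Weisfeiler-Lehman subtree kernel, applied to tiny labeled graphs. The paper's specific kernel and its Delaunay-derived node labels are not reproduced; this only shows how a kernel turns graph structure into a pairwise similarity score that a ranking or an SVM-style classifier can consume.

```python
# Weisfeiler-Lehman subtree kernel sketch: repeatedly refine node labels by
# hashing each label together with the sorted labels of its neighbors, and
# accumulate the dot product of label histograms across refinement rounds.
from collections import Counter

def wl_kernel(g1, g2, labels1, labels2, iterations=2):
    """Similarity of two adjacency-list graphs with labeled nodes."""
    def refine(graph, labels):
        return {v: (labels[v], tuple(sorted(labels[u] for u in graph[v])))
                for v in graph}
    k = 0
    l1, l2 = dict(labels1), dict(labels2)
    for _ in range(iterations + 1):
        h1, h2 = Counter(l1.values()), Counter(l2.values())
        k += sum(h1[lbl] * h2[lbl] for lbl in h1)  # histogram dot product
        l1, l2 = refine(g1, l1), refine(g2, l2)
    return k

# Two tiny graphs: a triangle and a path, all nodes labeled 'a'.
tri = {0: [1, 2], 1: [0, 2], 2: [0, 1]}
path = {0: [1], 1: [0, 2], 2: [1]}
lbl = {0: 'a', 1: 'a', 2: 'a'}
# A graph is more similar to itself than to a structurally different graph:
print(wl_kernel(tri, tri, lbl, lbl), wl_kernel(tri, path, lbl, lbl))  # 27 12
```

In a hotspot-detection pipeline, the node labels would carry geometric features (for instance, quantities derived from the Delaunay triangles of the layout polygons), so the kernel scores both topology and geometry.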
NASA Technical Reports Server (NTRS)
Comber, Brian; Glazer, Stuart
2012-01-01
The James Webb Space Telescope (JWST) is an upcoming flagship observatory mission scheduled to be launched in 2018. Three of the four science instruments are passively cooled to their operational temperature range of 36K to 40K, and the fourth instrument is actively cooled to its operational temperature of approximately 6K. The requirement for multiple thermal zones results in the instruments being thermally connected to five external radiators via individual high-purity aluminum heat straps. Thermal-vacuum and thermal balance testing of the flight instruments at the Integrated Science Instrument Module (ISIM) element level will take place within a newly constructed shroud cooled by gaseous helium inside Goddard Space Flight Center's (GSFC) Space Environment Simulator (SES). The flight external radiators are not available during ISIM-level thermal-vacuum/thermal-balance testing, so they will be replaced in test with stable and adjustable thermal boundaries with identical physical interfaces to the flight radiators. Those boundaries are provided by specially designed test hardware which also measures the heat flow within each of the five heat straps to an accuracy of better than 2 mW, which is less than 5% of the minimum predicted heat flow values. Measurement of the heat loads to this accuracy is essential to ISIM thermal model correlation, since thermal models are more accurately correlated when temperature data are supplemented by accurate knowledge of heat flows. It also provides direct verification by test of several high-level thermal requirements. Devices that measure heat flow in this manner have historically been referred to as "Q-meters".
Perhaps the most important feature of the design of the JWST Q-meters is that it does not depend on the absolute accuracy of its temperature sensors, but rather on knowledge of the precise heater power required to maintain a constant temperature difference between sensors on two stages; a table of this power is empirically developed during a calibration campaign in a small chamber at GSFC. This paper provides a brief review of the Q-meter design, and discusses the Q-meter calibration procedure, including calibration chamber modifications and accommodations, the handling of differing conditions between calibration and usage, the calibration process itself, and the results of the tests used to determine whether the calibration is successful.
Haworth, Annette; Kearvell, Rachel; Greer, Peter B; Hooton, Ben; Denham, James W; Lamb, David; Duchesne, Gillian; Murray, Judy; Joseph, David
2009-03-01
A multi-centre clinical trial for prostate cancer patients provided an opportunity to introduce conformal radiotherapy with dose escalation. To verify adequate treatment accuracy prior to patient recruitment, centres submitted details of a set-up accuracy study (SUAS). We report the results of the SUAS, the variation in clinical practice and the strategies used to help centres improve treatment accuracy. The SUAS required each of the 24 participating centres to collect data on at least 10 pelvic patients imaged on a minimum of 20 occasions. Software was provided for data collection and analysis. Support to centres was provided through educational lectures, the trial quality assurance team and an information booklet. Only two centres had recently carried out a SUAS prior to the trial opening. Systematic errors were generally smaller than those previously reported in the literature. The questionnaire identified many differences in patient set-up protocols. As a result of participating in this QA activity more than 65% of centres improved their treatment delivery accuracy. Conducting a pre-trial SUAS has led to improvement in treatment delivery accuracy in many centres. Treatment techniques and set-up accuracy varied greatly, demonstrating a need to ensure an on-going awareness for such studies in future trials and with the introduction of dose escalation or new technologies.
Fast and Accurate Simulation of the Cray XMT Multithreaded Supercomputer
DOE Office of Scientific and Technical Information (OSTI.GOV)
Villa, Oreste; Tumeo, Antonino; Secchi, Simone
Irregular applications, such as data mining and analysis or graph-based computations, show unpredictable memory/network access patterns and control structures. Highly multithreaded architectures with large processor counts, like the Cray MTA-1, MTA-2 and XMT, appear to address their requirements better than commodity clusters. However, the research on highly multithreaded systems is currently limited by the lack of adequate architectural simulation infrastructures due to issues such as size of the machines, memory footprint, simulation speed, accuracy and customization. At the same time, shared-memory multiprocessors (SMPs) with multi-core processors have become an attractive platform to simulate large scale machines. In this paper, we introduce a cycle-level simulator of the highly multithreaded Cray XMT supercomputer. The simulator runs unmodified XMT applications. We discuss how we tackled the challenges posed by its development, detailing the techniques introduced to make the simulation as fast as possible while maintaining a high accuracy. By mapping XMT processors (ThreadStorm with 128 hardware threads) to host computing cores, the simulation speed remains constant as the number of simulated processors increases, up to the number of available host cores. The simulator supports zero-overhead switching among different accuracy levels at run-time and includes a network model that takes into account contention. On a modern 48-core SMP host, our infrastructure simulates a large set of irregular applications 500 to 2000 times slower than real time when compared to a 128-processor XMT, while remaining within 10% of accuracy. Emulation is only 25 to 200 times slower than real time.
Stroke maximizing and high efficient hysteresis hybrid modeling for a rhombic piezoelectric actuator
NASA Astrophysics Data System (ADS)
Shao, Shubao; Xu, Minglong; Zhang, Shuwen; Xie, Shilin
2016-06-01
The rhombic piezoelectric actuator (RPA), which employs a rhombic mechanism to amplify the small stroke of a PZT stack, has been widely used in many micro-positioning machineries due to its remarkable properties such as high displacement resolution and compact structure. In order to achieve a large actuation range along with high accuracy, stroke maximization and compensation for the hysteresis are two concerns in the use of the RPA. However, existing maximization methods based on theoretical models can hardly predict the maximum stroke of the RPA accurately because of approximation errors caused by the simplifications that must be made in the analysis. Moreover, despite the high hysteresis modeling accuracy of the Preisach model, its modeling procedure is tedious and time-consuming since a large set of experimental data is required to determine the model parameters. In our research, to improve the accuracy of the theoretical model of the RPA, approximation theory is employed, in which the approximation errors can be compensated by two dimensionless coefficients. To simplify the hysteresis modeling procedure, a hybrid modeling method is proposed in which the parameters of the Preisach model can be identified from only a small set of experimental data by combining a discrete Preisach model (DPM) with a particle swarm optimization (PSO) algorithm. The proposed hybrid modeling method not only models the hysteresis with considerable accuracy but also significantly simplifies the modeling procedure. Finally, the inversion of the hysteresis is introduced to compensate for the hysteresis non-linearity of the RPA, and consequently a pseudo-linear system can be obtained.
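The discrete Preisach model at the heart of the hybrid method can be sketched compactly: the output is a weighted sum of two-state relay hysterons on a discretized half-plane alpha >= beta. In the paper the weights are identified from a small data set via PSO; the uniform weights below are an assumption used only to demonstrate the hysteresis behavior.

```python
# Minimal discrete Preisach model (DPM) sketch: each hysteron (alpha, beta)
# switches up when the input rises to alpha and down when it falls to beta;
# between those thresholds it keeps its state, which is what produces
# history dependence. Weights here are uniform purely for illustration.
def make_dpm(n=20, lo=0.0, hi=1.0):
    step = (hi - lo) / n
    hysterons = [{"alpha": lo + i * step, "beta": lo + j * step, "state": -1}
                 for i in range(n + 1) for j in range(i + 1)]
    weight = 1.0 / len(hysterons)

    def output(u):
        total = 0.0
        for h in hysterons:
            if u >= h["alpha"]:
                h["state"] = 1
            elif u <= h["beta"]:
                h["state"] = -1
            total += weight * h["state"]
        return total
    return output

dpm = make_dpm()
up = [dpm(u / 100) for u in range(0, 101)]         # ascending branch 0 -> 1
down = [dpm(u / 100) for u in range(100, -1, -1)]  # descending branch 1 -> 0
# At the same input u = 0.5 the two branches disagree: that gap is hysteresis.
print(up[50], down[50])  # approx -0.429 and 0.437
```

Identification would then amount to choosing the hysteron weights (instead of the uniform `weight`) so the model output matches measured actuator displacement, which is the optimization problem the paper hands to PSO.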
NASA Technical Reports Server (NTRS)
Rock, Stephen M.; LeMaster, Edward A.
2001-01-01
Pseudolites can extend the availability of GPS-type positioning systems to a wide range of applications not possible with satellite-only GPS. One such application is Mars exploration, where the centimeter-level accuracy and high repeatability of CDGPS would make it attractive for rover positioning during autonomous exploration, sample collection, and habitat construction if it were available. Pseudolites distributed on the surface would allow multiple rovers and/or astronauts to share a common navigational reference. This would help enable cooperation for complicated science tasks, reducing the need for instructions from Earth and increasing the likelihood of mission success. Conventional GPS pseudolite arrays require that the devices be pre-calibrated through a survey of their locations, typically to sub-centimeter accuracy. This is a problematic task for robots on the surface of another planet. By using the GPS signals that the pseudolites broadcast, however, it is possible to have the array self-survey its own relative locations, creating a Self-Calibrating Pseudolite Array (SCPA). This requires the use of GPS transceivers instead of standard pseudolites. Surveying can be done either at carrier- or code-phase levels. An overview of SCPA capabilities, system requirements, and self-calibration algorithms is presented in another work. The Aerospace Robotics Laboratory at Stanford has developed a fully operational prototype SCPA. The array is able to determine the range between any two transceivers with either code- or carrier-phase accuracy, and uses this inter-transceiver ranging to determine the array geometry. This paper presents results from field tests conducted at Stanford University demonstrating the accuracy of inter-transceiver ranging and its viability and utility for array localization, and shows how transceiver motion may be utilized to refine the array estimate by accurately determining carrier-phase integers and line biases.
It also summarizes the overall system requirements and architecture, and describes the hardware and software used in the prototype system.
Marciano, Michael A; Adelman, Jonathan D
2017-03-01
The deconvolution of DNA mixtures remains one of the most critical challenges in the field of forensic DNA analysis. In addition, of all the data features required to perform such deconvolution, the number of contributors in the sample is widely considered the most important, and, if incorrectly chosen, the most likely to negatively influence the mixture interpretation of a DNA profile. Unfortunately, most current approaches to mixture deconvolution require the assumption that the number of contributors is known by the analyst, an assumption that can prove to be especially faulty when faced with increasingly complex mixtures of 3 or more contributors. In this study, we propose a probabilistic approach for estimating the number of contributors in a DNA mixture that leverages the strengths of machine learning. To assess this approach, we compare classification performances of six machine learning algorithms and evaluate the model from the top-performing algorithm against the current state of the art in the field of contributor number classification. Overall results show over 98% accuracy in identifying the number of contributors in a DNA mixture of up to 4 contributors. Comparative results showed 3-person mixtures had a classification accuracy improvement of over 6% compared to the current best-in-field methodology, and that 4-person mixtures had a classification accuracy improvement of over 20%. The Probabilistic Assessment for Contributor Estimation (PACE) also accomplishes classification of mixtures of up to 4 contributors in less than 1s using a standard laptop or desktop computer. Considering the high classification accuracy rates, as well as the significant time commitment required by the current state of the art model versus seconds required by a machine learning-derived model, the approach described herein provides a promising means of estimating the number of contributors and, subsequently, will lead to improved DNA mixture interpretation. 
Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
ERIC Educational Resources Information Center
Foorman, Barbara R.; Kershaw, Sarah; Petscher, Yaacov
2013-01-01
Florida requires that students who do not meet grade-level reading proficiency standards on the end-of-year state assessment (Florida Comprehensive Assessment Test, FCAT) receive intensive reading intervention. With the stakes so high, teachers and principals are interested in using screening or diagnostic assessments to identify students with a…
Suppression of Systematic Errors of Electronic Distance Meters for Measurement of Short Distances
Braun, Jaroslav; Štroner, Martin; Urban, Rudolf; Dvořáček, Filip
2015-01-01
In modern industrial geodesy, high demands are placed on the final accuracy, with expectations currently falling below 1 mm. The measurement methodology and surveying instruments used have to be adjusted to meet these stringent requirements, especially the total stations as the most often used instruments. A standard deviation of the measured distance is the accuracy parameter, commonly between 1 and 2 mm. This parameter is often discussed in conjunction with the determination of the real accuracy of measurements at very short distances (5–50 m) because it is generally known that this accuracy cannot be increased by simply repeating the measurement because a considerable part of the error is systematic. This article describes the detailed testing of electronic distance meters to determine the absolute size of their systematic errors, their stability over time, their repeatability and the real accuracy of their distance measurement. Twenty instruments (total stations) have been tested, and more than 60,000 distances in total were measured to determine the accuracy and precision parameters of the distance meters. Based on the experiments’ results, calibration procedures were designed, including a special correction function for each instrument, whose usage reduces the standard deviation of the measurement of distance by at least 50%. PMID:26258777
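A hedged sketch of the kind of per-instrument correction such a calibration can yield: phase-measurement distance meters commonly exhibit a short-periodic (cyclic) systematic error, so residuals against a reference baseline can be fit with sine/cosine terms of an assumed period and subtracted from subsequent measurements. The period, amplitudes, and noise level below are invented for illustration and are not the paper's calibration data.

```python
# Fit a cyclic correction r(d) = a*sin(2*pi*d/U) + b*cos(2*pi*d/U) to EDM
# residuals by closed-form least squares, then subtract it from measurements.
# Period U and all simulated magnitudes are assumptions for this sketch.
import math
import random

def fit_cyclic_correction(distances, residuals_mm, period_m):
    s = [math.sin(2 * math.pi * d / period_m) for d in distances]
    c = [math.cos(2 * math.pi * d / period_m) for d in distances]
    # normal equations for the 2-parameter linear fit
    ss = sum(x * x for x in s)
    cc = sum(x * x for x in c)
    sc = sum(x * y for x, y in zip(s, c))
    sr = sum(x * r for x, r in zip(s, residuals_mm))
    cr = sum(x * r for x, r in zip(c, residuals_mm))
    det = ss * cc - sc * sc
    a, b = (sr * cc - cr * sc) / det, (cr * ss - sr * sc) / det
    return lambda d: (a * math.sin(2 * math.pi * d / period_m)
                      + b * math.cos(2 * math.pi * d / period_m))

# Simulated calibration: a 0.8 mm cyclic error with 1.5 m period plus noise.
rng = random.Random(0)
dists = [5 + 0.37 * k for k in range(120)]
resid = [0.8 * math.sin(2 * math.pi * d / 1.5) + rng.gauss(0, 0.2) for d in dists]
corr = fit_cyclic_correction(dists, resid, period_m=1.5)
corrected = [r - corr(d) for d, r in zip(dists, resid)]
std = lambda xs: (sum((x - sum(xs) / len(xs)) ** 2 for x in xs) / len(xs)) ** 0.5
print(std(resid), std(corrected))  # corrected spread is much smaller
```

On this simulated data the correction removes most of the systematic part, leaving roughly the random-noise floor, which mirrors the at-least-50% standard-deviation reduction the paper reports for its per-instrument correction functions.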
Small Body Landing Accuracy Using In-Situ Navigation
NASA Technical Reports Server (NTRS)
Bhaskaran, Shyam; Nandi, Sumita; Broschart, Stephen; Wallace, Mark; Olson, Corwin; Cangahuala, L. Alberto
2011-01-01
Spacecraft landings on small bodies (asteroids and comets) can require target accuracies too stringent to be met using ground-based navigation alone, especially if specific landing site requirements must be met for safety or to meet science goals. In-situ optical observations coupled with onboard navigation processing can meet the tighter accuracy requirements to enable such missions. Recent developments in deep space navigation capability include a self-contained autonomous navigation system (used in flight on three missions) and a landmark tracking system (used experimentally on the Japanese Hayabusa mission). The merging of these two technologies forms a methodology to perform autonomous onboard navigation around small bodies. This paper presents an overview of these systems, as well as the results from Monte Carlo studies to quantify the achievable landing accuracies using these methods. Sensitivity of the results to variations in spacecraft maneuver execution error, attitude control accuracy, and unmodeled forces is examined. Cases for two bodies, a small asteroid and a mid-size comet, are presented.
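The Monte Carlo structure of such a study can be sketched in toy form: sample maneuver execution errors, propagate each sample through a descent model, and summarize the resulting landing dispersion. Everything below — the dynamics, scales, and error model — is invented for illustration; the real analysis propagates errors through a full spacecraft and small-body dynamics simulation.

```python
# Toy Monte Carlo landing-dispersion sketch: sample a maneuver magnitude
# error and a pointing error, convert them into a lateral miss over a fixed
# descent time, and report a high-percentile miss distance. All numbers are
# illustrative assumptions, not mission values.
import math
import random

def landing_dispersion(n_trials=2000, descent_time_s=600.0,
                       exec_err_frac=0.01, pointing_err_rad=0.005, seed=42):
    rng = random.Random(seed)
    v_nominal = 0.5  # commanded descent maneuver magnitude, m/s (assumed)
    misses = []
    for _ in range(n_trials):
        dv = v_nominal * (1 + rng.gauss(0, exec_err_frac))  # magnitude error
        theta = rng.gauss(0, pointing_err_rad)              # pointing error
        # lateral miss: cross-track velocity component over the descent
        misses.append(abs(dv * math.sin(theta)) * descent_time_s)
    misses.sort()
    return misses[int(0.99 * n_trials)]  # 99th-percentile miss distance, m

p99 = landing_dispersion()
print(p99)  # meters-level dispersion under these assumed error magnitudes
```

Sensitivity studies like those in the paper then amount to sweeping `exec_err_frac`, `pointing_err_rad`, and added unmodeled-force terms and observing how the dispersion statistic responds.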
NASA Technical Reports Server (NTRS)
Chang, Sin-Chung; Chang, Chau-Lyan; Venkatachari, Balaji Shankar
2017-01-01
Traditionally, high-aspect-ratio triangular/tetrahedral meshes are avoided by CFD researchers in the vicinity of a solid wall, as they are known to reduce the accuracy of gradient computations in those regions and also cause numerical instability. Although for certain complex geometries the use of high-aspect-ratio triangular/tetrahedral elements in the vicinity of a solid wall can be replaced by quadrilateral/prismatic elements, the ability to use triangular/tetrahedral elements in such regions without any degradation in accuracy can be beneficial from a mesh generation point of view. The benefits also carry over to numerical frameworks such as the space-time conservation element and solution element (CESE) method, where triangular/tetrahedral elements are the mandatory building blocks. With the requirements of the CESE method in mind, a rigorous mathematical framework that clearly identifies the reason behind the difficulties in the use of such high-aspect-ratio triangular/tetrahedral elements is presented here. As will be shown, it turns out that the degree of accuracy deterioration of a gradient computation involving a triangular element hinges on the value of its shape factor Gamma = sin^2(alpha_1) + sin^2(alpha_2) + sin^2(alpha_3), where alpha_1, alpha_2 and alpha_3 are the internal angles of the element. In fact, it is shown that the degree of accuracy deterioration increases monotonically as the value of Gamma decreases monotonically from its maximal value 9/4 (attained by an equilateral triangle only) to a value much less than 1 (associated with a highly obtuse triangle). By taking advantage of the fact that a high-aspect-ratio triangle is not necessarily highly obtuse, and in fact can have a shape factor whose value is close to the maximal value 9/4, a potential solution to avoid accuracy deterioration of gradient computation associated with a high-aspect-ratio triangular grid is given.
A brief discussion of the extension of the current mathematical framework to the tetrahedral-grid case, along with some practical results of this extension, is also provided. Furthermore, through numerical simulations of practical viscous problems involving high-Reynolds-number flows, the effectiveness of the gradient-evaluation procedures within the CESE framework (which have their basis in the analysis presented here) in producing accurate and stable results on such high-aspect-ratio meshes is showcased.
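As a small illustration of the shape factor just defined (a sketch, not from the paper; the triangle geometries are arbitrary), the following computes Gamma for an equilateral triangle, a highly obtuse sliver, and a thin right triangle:

```python
import math

def shape_factor(a, b, c):
    """Shape factor Gamma = sin^2(a1) + sin^2(a2) + sin^2(a3)
    of a triangle given its three vertices as 2-D tuples."""
    def angle(p, q, r):
        # interior angle at vertex p
        v1 = (q[0] - p[0], q[1] - p[1])
        v2 = (r[0] - p[0], r[1] - p[1])
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        return math.acos(dot / (math.hypot(*v1) * math.hypot(*v2)))
    angles = [angle(a, b, c), angle(b, a, c), angle(c, a, b)]
    return sum(math.sin(t) ** 2 for t in angles)

# Equilateral triangle: Gamma attains its maximum 9/4.
eq = shape_factor((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))
# Highly obtuse sliver: Gamma far below 1.
sliver = shape_factor((0, 0), (1, 0), (0.5, 0.001))
# Thin right triangle: high aspect ratio, yet Gamma stays well above 1.
thin = shape_factor((0, 0), (1, 0), (0, 0.001))
print(eq, sliver, thin)
```

This reproduces the abstract's point: a high-aspect-ratio triangle is not necessarily obtuse, and its shape factor can remain far from the problematic small values.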
Flight dynamics facility operational orbit determination support for the ocean topography experiment
NASA Technical Reports Server (NTRS)
Bolvin, D. T.; Schanzle, A. F.; Samii, M. V.; Doll, C. E.
1991-01-01
The Ocean Topography Experiment (TOPEX/POSEIDON) mission is designed to determine the topography of the Earth's sea surface over a 3-year period, beginning with launch in June 1992. The Goddard Space Flight Center Flight Dynamics Facility has the capability to operationally receive and process Tracking and Data Relay Satellite System (TDRSS) tracking data. Because these data will be used to support orbit determination (OD) aspects of the TOPEX mission, the Flight Dynamics Facility was designated to perform TOPEX operational OD. The scientific data require stringent OD accuracy in navigating the TOPEX spacecraft. The OD accuracy requirements fall into two categories: (1) on-orbit free flight and (2) maneuver. The maneuver OD accuracy requirements are of two types: premaneuver planning and postmaneuver evaluation. Analysis using the Orbit Determination Error Analysis System (ODEAS) covariance software has shown that, during the first postlaunch mission phase of the TOPEX mission, some postmaneuver evaluation OD accuracy requirements cannot be met. ODEAS results also show that the most difficult requirements to meet are those that determine the change in the components of velocity for postmaneuver evaluation.
L-Band Transmit/Receive Module for Phase-Stable Array Antennas
NASA Technical Reports Server (NTRS)
Andricos, Constantine; Edelstein, Wendy; Krimskiy, Vladimir
2008-01-01
Interferometric synthetic aperture radar (InSAR) has been shown to provide very sensitive measurements of surface deformation and displacement on the order of 1 cm. Future systematic measurements of surface deformation will require this capability over very large areas (300 km) from space. To achieve these required accuracies, these spaceborne sensors must exhibit low temporal decorrelation and be temporally stable systems. An L-band (24-cm wavelength) InSAR instrument using an electronically steerable radar antenna is suited to meet these needs. In order to achieve the 1-cm displacement accuracy, the phased-array antenna requires phase-stable transmit/receive (T/R) modules. The T/R module operates at L-band (1.24 GHz) and has less than 1-deg absolute phase stability and less than 0.1-dB absolute amplitude stability over temperature. The T/R module is also high power (30 W) and power efficient (60-percent overall efficiency). The design is currently implemented using discrete components and surface-mount technology. The basic T/R module architecture is augmented with a calibration loop to compensate for temperature variations, component variations, and path-loss variations as a function of beam settings. The calibration circuit consists of an amplitude and phase detector, and other control circuitry, that compares the measured gain and phase to a reference signal and uses the result to control a precision analog phase shifter and analog attenuator. An architecture was developed to allow the module to be bidirectional, operating in both transmit and receive modes. The architecture also includes a power detector used to hold the transmitter output power constant to within 0.1 dB. The use of a simple, stable, low-cost, and high-accuracy gain and phase detector made by Analog Devices (AD8302), combined with a very-high-efficiency T/R module, is novel.
While a self-calibrating T/R module capability has been sought for years, a practical and cost-effective solution has never been demonstrated. By adding the calibration loop to an existing high-efficiency T/R module, there is a demonstrated order-of-magnitude improvement in the amplitude and phase stability.
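The calibration-loop idea described above (detector compares measured gain/phase against a reference, and a controller trims the attenuator and phase shifter) can be sketched as a toy feedback loop; the numbers and proportional-control law here are illustrative assumptions, not the module's actual circuit:

```python
# Toy closed-loop gain/phase calibration: the detector output is the
# difference between the module's gain/phase drift and the current
# attenuator / phase-shifter settings; the controller nulls that error.

def calibrate(gain_err_db, phase_err_deg, step=0.5, iters=50):
    """Iteratively drive gain and phase errors toward zero.
    Returns the residual (gain_dB, phase_deg) after the loop settles."""
    att, ps = 0.0, 0.0          # attenuator (dB) and phase-shifter (deg) settings
    for _ in range(iters):
        meas_gain = gain_err_db - att      # detector output vs. reference
        meas_phase = phase_err_deg - ps
        att += step * meas_gain            # proportional correction
        ps += step * meas_phase
    return gain_err_db - att, phase_err_deg - ps

# e.g. 1.2 dB gain drift and 8 deg phase drift over temperature
g, p = calibrate(1.2, 8.0)
print(abs(g) < 0.1, abs(p) < 1.0)   # within the stated 0.1-dB / 1-deg targets
```

Each pass shrinks the residual by the factor (1 - step), so the loop settles well inside the stability targets after a few iterations.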
Esquinas, Pedro L; Uribe, Carlos F; Gonzalez, M; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O; Celler, Anna
2017-07-20
The main applications of 188Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize 188Re therapies, the accurate determination of the radiation dose delivered to tumors and organs at risk is required. Single-photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188Re images. In this study, we performed a series of phantom experiments aiming to investigate the accuracy of activity quantification for 188Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in air, non-radioactive water (cold water) and water with activity (hot water). The ordered-subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy-window (TEW) scatter, and resolution recovery) was used. For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to this object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte-Carlo simulations of a commercial gamma-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy (errors <10%) was achieved for the entire phantom, the hot-background activity concentration and for objects in cold background segmented with a 1% threshold.
However, the accuracy of activity quantification for objects segmented with 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte-Carlo simulations confirmed that TEW-scatter correction applied to 188 Re, although practical, yields only approximate estimates of the true scatter.
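The TEW correction evaluated above has a standard closed form: counts in two narrow windows flanking the photopeak estimate, by trapezoidal interpolation, the scatter lying under the peak. A minimal sketch (all counts and window widths below are hypothetical, not from the study):

```python
def tew_scatter(c_lower, c_upper, w_lower, w_upper, w_peak):
    """Triple-energy-window scatter estimate for a photopeak window:
    the count rates per keV in the two flanking sub-windows are averaged
    (trapezoid) and scaled to the photopeak window width."""
    return (c_lower / w_lower + c_upper / w_upper) * w_peak / 2.0

# Hypothetical pixel: a photopeak window 31 keV wide around the 155-keV
# 188Re emission, with 6-keV sub-windows holding 120 and 90 counts.
scatter = tew_scatter(120, 90, 6.0, 6.0, 31.0)
primary = 2000 - scatter     # e.g. 2000 raw counts in the photopeak window
print(round(scatter, 1))
```

The correction is applied pixel-by-pixel (or projection-by-projection) before or during reconstruction; as the abstract notes, it is practical but only approximate.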
Effects of changes in size, speed and distance on the perception of curved 3D trajectories
Zhang, Junjun; Braunstein, Myron L.; Andersen, George J.
2012-01-01
Previous research on the perception of 3D object motion has considered time to collision, time to passage, collision detection and judgments of speed and direction of motion, but has not directly studied the perception of the overall shape of the motion path. We examined the perception of the magnitude of curvature and sign of curvature of the motion path for objects moving at eye level in a horizontal plane parallel to the line of sight. We considered two sources of information for the perception of motion trajectories: changes in angular size and changes in angular speed. Three experiments examined judgments of relative curvature for objects moving at different distances. At the closest distance studied, accuracy was high with size information alone but near chance with speed information alone. At the greatest distance, accuracy with size information alone decreased sharply but accuracy for displays with both size and speed information remained high. We found similar results in two experiments with judgments of sign of curvature. Accuracy was higher for displays with both size and speed information than with size information alone, even when the speed information was based on parallel projections and was not informative about sign of curvature. For both magnitude of curvature and sign of curvature judgments, information indicating that the trajectory was curved increased accuracy, even when this information was not directly relevant to the required judgment. PMID:23007204
Ion beam figuring of silicon aspheres
NASA Astrophysics Data System (ADS)
Demmler, Marcel; Zeuner, Michael; Luca, Alfonz; Dunger, Thoralf; Rost, Dirk; Kiontke, Sven; Krüger, Marcus
2011-03-01
Silicon lenses are widely used for infrared applications. Especially for portable devices, the size and weight of the optical system are very important factors. The use of aspherical silicon lenses instead of spherical silicon lenses results in a significant reduction of weight and size. The manufacture of silicon lenses is more challenging than the manufacture of standard glass lenses. Typically, conventional methods like diamond turning, grinding and polishing are used. However, due to the high hardness of silicon, diamond turning is very difficult and requires a lot of experience. To achieve surfaces of a high quality, a polishing step is mandatory within the manufacturing process. Nevertheless, the required surface form accuracy cannot be achieved with conventional polishing methods because of the unpredictable behavior of the polishing tools, which leads to an unstable removal rate. To overcome these disadvantages, a method called ion beam figuring can be used to manufacture silicon lenses with high surface form accuracy. The general advantage of the ion beam figuring technology is a contactless polishing process without any aging effects of the tool. This yields excellent stability of the removal rate without any mechanical surface damage. The underlying physical process, called sputtering, can be applied to any material and is therefore also applicable to other materials of high hardness, such as SiC and WC. The process is realized with the commercially available ion beam figuring system IonScan 3D. During the process, the substrate is moved in front of a focused broad ion beam. The local milling rate is controlled via a modulated velocity profile, which is calculated specifically for each surface topology in order to mill the material at the associated positions down to the target geometry. The authors present aspherical silicon lenses with very high surface form accuracies compared to conventionally manufactured lenses.
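The velocity-modulation idea can be sketched in a few lines: with a beam of known removal rate, the dwell time at each raster position is proportional to the depth to be milled there, so the scan velocity is inversely proportional to the local figure error. This is only the basic principle under assumed numbers, not the IonScan 3D algorithm (which also deconvolves the beam footprint):

```python
# Minimal dwell-time / velocity-profile sketch for figure correction.
removal_rate = 50.0        # nm/s at the beam footprint (assumed constant)
step = 0.1                 # mm between raster positions
error_map = [5.0, 20.0, 80.0, 20.0, 5.0]   # nm of material to remove

dwell = [e / removal_rate for e in error_map]    # s spent at each position
velocity = [step / t for t in dwell]             # mm/s scan speed profile
print([round(v, 2) for v in velocity])           # slowest over the deepest error
```

The scan slows down where more material must be removed and speeds up elsewhere, which is how a constant-current beam produces a position-dependent removal depth.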
Detecting atrial fibrillation by deep convolutional neural networks.
Xia, Yong; Wulan, Naren; Wang, Kuanquan; Zhang, Henggui
2018-02-01
Atrial fibrillation (AF) is the most common cardiac arrhythmia. The incidence of AF increases with age, causing high risks of stroke and increased morbidity and mortality. Efficient and accurate diagnosis of AF based on the ECG is valuable in clinical settings and remains challenging. In this paper, we proposed a novel method with high reliability and accuracy for AF detection via deep learning. The short-time Fourier transform (STFT) and stationary wavelet transform (SWT) were used to analyze ECG segments to obtain two-dimensional (2-D) matrix input suitable for deep convolutional neural networks. Then, two different deep convolutional neural network models corresponding to STFT output and SWT output were developed. Our new method did not require detection of P or R peaks, nor feature design for classification, in contrast to existing algorithms. Finally, the performances of the two models were evaluated and compared with those of existing algorithms. Our proposed method demonstrated favorable performance on ECG segments as short as 5 s. The deep convolutional neural network using input generated by STFT presented a sensitivity of 98.34%, specificity of 98.24% and accuracy of 98.29%. For the deep convolutional neural network using input generated by SWT, a sensitivity of 98.79%, specificity of 97.87% and accuracy of 98.63% were achieved. The proposed method using deep convolutional neural networks shows high sensitivity, specificity and accuracy, and, therefore, is a valuable tool for AF detection. Copyright © 2017 Elsevier Ltd. All rights reserved.
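The STFT-to-2-D-matrix step described above can be sketched directly: frame the segment, window each frame, and take the magnitude FFT of each frame, yielding a time-frequency image a CNN can consume. This is a generic sketch assuming NumPy; the window length, hop, and the 300-Hz sampling rate are illustrative, not the paper's exact settings, and the test waveform is a stand-in sinusoid rather than real ECG:

```python
import numpy as np

def stft_matrix(x, win_len=64, hop=32):
    """Frame the signal, apply a Hann window, and take the magnitude FFT
    of each frame, giving a 2-D (freq bins x time frames) matrix."""
    win = np.hanning(win_len)
    frames = [x[i:i + win_len] * win
              for i in range(0, len(x) - win_len + 1, hop)]
    return np.abs(np.fft.rfft(frames, axis=1)).T

fs = 300                              # Hz, a common single-lead ECG rate
t = np.arange(0, 5, 1 / fs)           # a 5-s segment, as in the paper
x = np.sin(2 * np.pi * 8 * t)         # stand-in waveform, not real ECG
S = stft_matrix(x)
print(S.shape)                        # 2-D time-frequency matrix for the CNN
```

With a 64-sample window, each frame contributes 33 one-sided frequency bins, and the hop of 32 samples sets the number of time columns.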
Weissberger, Gali H.; Strong, Jessica V.; Stefanidis, Kayla B.; Summers, Mathew J.; Bondi, Mark W.; Stricker, Nikki H.
2018-01-01
With an increasing focus on biomarkers in dementia research, illustrating the role of neuropsychological assessment in detecting mild cognitive impairment (MCI) and Alzheimer’s dementia (AD) is important. This systematic review and meta-analysis, conducted in accordance with PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) standards, summarizes the sensitivity and specificity of memory measures in individuals with MCI and AD. Both meta-analytic and qualitative examination of AD versus healthy control (HC) studies (n = 47) revealed generally high sensitivity and specificity (≥ 80% for AD comparisons) for measures of immediate (sensitivity = 87%, specificity = 88%) and delayed memory (sensitivity = 89%, specificity = 89%), especially those involving word-list recall. Examination of MCI versus HC studies (n = 38) revealed generally lower diagnostic accuracy for both immediate (sensitivity = 72%, specificity = 81%) and delayed memory (sensitivity = 75%, specificity = 81%). Measures that differentiated AD from other conditions (n = 10 studies) yielded mixed results, with generally high sensitivity in the context of low or variable specificity. Results confirm that memory measures have high diagnostic accuracy for identification of AD, are promising but require further refinement for identification of MCI, and provide support for ongoing investigation of neuropsychological assessment as a cognitive biomarker of preclinical AD. Emphasizing diagnostic test accuracy statistics over null hypothesis testing in future studies will promote the ongoing use of neuropsychological tests as Alzheimer’s disease research and clinical criteria increasingly rely upon cerebrospinal fluid (CSF) and neuroimaging biomarkers. PMID:28940127
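The pooled sensitivity and specificity figures above reduce to simple ratios on a 2x2 confusion table; a minimal sketch (the counts below are hypothetical, chosen only to reproduce the pooled delayed-memory values):

```python
def diagnostic_accuracy(tp, fn, tn, fp):
    """Sensitivity and specificity from a 2x2 confusion table:
    sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts for a delayed-recall cutoff in an AD vs HC sample:
sens, spec = diagnostic_accuracy(tp=89, fn=11, tn=89, fp=11)
print(sens, spec)
```

Reporting these test-accuracy statistics directly, as the review recommends, makes measures comparable across studies in a way null-hypothesis p-values do not.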
Frozen section pathology for decision making in parotid surgery.
Olsen, Kerry D; Moore, Eric J; Lewis, Jean E
2013-12-01
For parotid lesions, the high accuracy and utility of intraoperative frozen section (FS) pathology, compared with permanent section pathology, facilitates intraoperative decision making about the extent of surgery required. To demonstrate the accuracy and utility of FS pathology of parotid lesions as one factor in intraoperative decision making. Retrospective review of patients undergoing parotidectomy at a tertiary care center. Evaluation of the accuracy of FS pathology for parotid surgery by comparing FS pathology results with those of permanent section. Documented changes from FS to permanent section in 1339 parotidectomy pathology reports conducted from January 1, 2000, through December 31, 2009, included 693 benign and 268 primary and metastatic malignant tumors. Changes in diagnosis were found from benign to malignant (n = 11) and malignant to benign (n = 2). Sensitivity and specificity of a malignant diagnosis were 98.5% and 99.0%, respectively. Other changes were for lymphoma vs inflammation or lymphoma typing (n = 89) and for confirmation of or change in tumor type for benign (n = 36) or malignant (n = 69) tumors. No case changed from low- to high-grade malignant tumor. Only 4 cases that changed from FS to permanent section would have affected intraoperative decision making. Three patients underwent additional surgery 2 to 3 weeks later. Overall, only 1 patient was overtreated (lymphoma initially deemed carcinoma). Frozen section pathology for parotid lesions has high accuracy and utility in intraoperative decision making, facilitating timely complete procedures.
Loce, R P; Jodoin, R E
1990-09-10
Using the tools of Fourier analysis, a sampling requirement is derived that assures that sufficient information is contained within the samples of a distribution to accurately calculate geometric moments of that distribution. The derivation follows the standard textbook derivation of the Whittaker-Shannon sampling theorem, which is used for reconstruction, but further insight leads to a coarser minimum sampling interval for moment determination. The need for fewer samples to determine moments agrees with intuition, since less information should be required to determine a characteristic of a distribution compared with that required to construct the distribution. A formula for calculation of the moments from these samples is also derived. A numerical analysis is performed to quantify the accuracy of the calculated first moment for practical nonideal sampling conditions. The theory is applied to a high-speed laser beam position detector, which uses the normalized first moment to measure raster line positional accuracy in a laser printer. The effects of the laser irradiance profile, sampling aperture, number of samples acquired, quantization, and noise are taken into account.
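The normalized first moment used by such a position detector is just a discrete centroid over the samples. A minimal sketch (a generic centroid estimator under an assumed Gaussian beam profile, not the paper's derived formula or sampling bound):

```python
import math

def first_moment_from_samples(f, lo, hi, n):
    """Normalized first geometric moment of a distribution estimated
    from n equally spaced samples: m1 = sum(x*f(x)) / sum(f(x))."""
    dx = (hi - lo) / (n - 1)
    xs = [lo + i * dx for i in range(n)]
    fs = [f(x) for x in xs]
    return sum(x * v for x, v in zip(xs, fs)) / sum(fs)

# Gaussian beam irradiance profile centred at x0: for a band-limited-enough
# profile, even coarse sampling recovers the centroid x0 very accurately.
x0, sigma = 0.3, 1.0
gauss = lambda x: math.exp(-((x - x0) ** 2) / (2 * sigma ** 2))
m1 = first_moment_from_samples(gauss, -8, 8, 33)   # samples 0.5 units apart
print(round(m1, 6))
```

The sample spacing here (half the beam's standard deviation) is far coarser than reconstruction would demand, yet the centroid error is negligible, which is the paper's central observation.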
A High-Order Direct Solver for Helmholtz Equations with Neumann Boundary Conditions
NASA Technical Reports Server (NTRS)
Sun, Xian-He; Zhuang, Yu
1997-01-01
In this study, a compact finite-difference discretization is first developed for Helmholtz equations on rectangular domains. Special treatments are then introduced for Neumann and Neumann-Dirichlet boundary conditions to achieve accuracy and separability. Finally, a Fast Fourier Transform (FFT) based technique is used to yield a fast direct solver. Analytical and experimental results show this newly proposed solver is comparable to the conventional second-order elliptic solver when accuracy is not a primary concern, and is significantly faster than the conventional solver if a highly accurate solution is required. In addition, this newly proposed fourth-order Helmholtz solver is parallel in nature. It is readily available for parallel and distributed computers. The compact scheme introduced in this study is likely extendible for sixth-order accurate algorithms and for more general elliptic equations.
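The separability that makes such solvers fast can be shown in one dimension: expand in the eigenbasis of the second-derivative operator, divide each mode by its eigenvalue, and transform back. This sketch (assuming NumPy) uses Dirichlet conditions and a dense sine transform for clarity; it illustrates only the spirit of the method, not the paper's 2-D fourth-order compact scheme or Neumann treatment:

```python
import numpy as np

# Solve u'' + k^2 u = f on (0, pi) with u(0) = u(pi) = 0 by sine expansion;
# an FFT-based DST would make the two transforms O(n log n).
n, k2 = 127, 2.0                         # interior points, Helmholtz constant
x = np.linspace(0, np.pi, n + 2)[1:-1]
f = (k2 - 9.0) * np.sin(3 * x)           # manufactured RHS; exact u = sin(3x)

j = np.arange(1, n + 1)
S = np.sin(np.outer(j, x))               # discrete sine basis as a matrix
fhat = (2.0 / (n + 1)) * S @ f           # forward sine transform
uhat = fhat / (k2 - j.astype(float) ** 2)  # divide by each mode's eigenvalue
u = S.T @ uhat                           # inverse transform

err = np.max(np.abs(u - np.sin(3 * x)))
print(err < 1e-8)
```

Because the boundary treatment keeps the operator diagonal in this basis, the whole solve is two transforms plus a pointwise division, which is exactly what the FFT accelerates.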
Iconic memory requires attention
Persuh, Marjan; Genzer, Boris; Melara, Robert D.
2012-01-01
Two experiments investigated whether attention plays a role in iconic memory, employing either a change detection paradigm (Experiment 1) or a partial-report paradigm (Experiment 2). In each experiment, attention was taxed during initial display presentation, focusing the manipulation on consolidation of information into iconic memory, prior to transfer into working memory. Observers were able to maintain high levels of performance (accuracy of change detection or categorization) even when concurrently performing an easy visual search task (low load). However, when the concurrent search was made difficult (high load), observers' performance dropped to almost chance levels, while search accuracy held at single-task levels. The effects of attentional load remained the same across paradigms. The results suggest that, without attention, participants consolidate in iconic memory only gross representations of the visual scene, information too impoverished for successful detection of perceptual change or categorization of features. PMID:22586389
Achieving behavioral control with millisecond resolution in a high-level programming environment.
Asaad, Wael F; Eskandar, Emad N
2008-08-30
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the 1 ms time-scale that is relevant for the alignment of behavioral and neural events.
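The first step toward trusting a high-level environment with millisecond timing is to measure its loop jitter directly. A minimal sketch (a generic busy-wait frame loop using Python's high-resolution clock; the authors' actual measures for handling temporal error are platform-specific):

```python
import time

target = 0.001                       # 1-ms nominal frame interval
stamps = []
next_t = time.perf_counter()
for _ in range(200):
    next_t += target
    while time.perf_counter() < next_t:   # busy-wait: tighter than sleep()
        pass
    stamps.append(time.perf_counter())

intervals = [b - a for a, b in zip(stamps, stamps[1:])]
worst = max(abs(i - target) for i in intervals)   # worst-case timing error
print(len(intervals), worst)
```

Logging the worst-case deviation, rather than the mean, is what matters for aligning behavioral events with neural recordings.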
Bayesian network modelling of upper gastrointestinal bleeding
NASA Astrophysics Data System (ADS)
Aisha, Nazziwa; Shohaimi, Shamarina; Adam, Mohd Bakri
2013-09-01
Bayesian networks are graphical probabilistic models that represent causal and other relationships between domain variables. In the context of medical decision making, these models have been explored to help in medical diagnosis and prognosis. In this paper, we discuss the Bayesian network formalism in building medical support systems and we learn a tree-augmented naive Bayes network (TAN) from gastrointestinal bleeding (GIB) data. The accuracy of the TAN in classifying the source of gastrointestinal bleeding into upper or lower source is obtained. The TAN achieves a high classification accuracy of 86% and an area under the curve of 92%. A sensitivity analysis of the model shows relatively high levels of entropy reduction for color of the stool, history of gastrointestinal bleeding, consistency and the ratio of blood urea nitrogen to creatinine. The TAN facilitates the identification of the source of GIB and requires further validation.
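The probabilistic classification a TAN performs can be illustrated with its simpler relative, naive Bayes, on made-up categorical records (the features and data below are hypothetical; a TAN additionally allows each feature one extra parent, which this sketch omits):

```python
from collections import defaultdict

def train(records):
    """records: list of (features_dict, label). Count class priors and
    per-class feature-value occurrences for later smoothed lookup."""
    prior, cond = defaultdict(int), defaultdict(int)
    for feats, label in records:
        prior[label] += 1
        for name, value in feats.items():
            cond[(label, name, value)] += 1
    return prior, cond

def predict(prior, cond, feats):
    """Pick the label maximizing P(label) * prod P(feature | label),
    with add-one (Laplace) smoothing for unseen feature values."""
    total = sum(prior.values())
    best, best_p = None, -1.0
    for label, n in prior.items():
        p = n / total
        for name, value in feats.items():
            p *= (cond[(label, name, value)] + 1) / (n + 2)
        if p > best_p:
            best, best_p = label, p
    return best

data = [({"melena": 1, "hematochezia": 0}, "upper"),
        ({"melena": 1, "hematochezia": 0}, "upper"),
        ({"melena": 0, "hematochezia": 1}, "lower"),
        ({"melena": 0, "hematochezia": 1}, "lower")]
prior, cond = train(data)
print(predict(prior, cond, {"melena": 1, "hematochezia": 0}))
```

Classification accuracy and area under the ROC curve, the two figures reported above, are then computed by scoring such predictions against held-out labels.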
Brückner, Hans-Peter; Spindeldreier, Christian; Blume, Holger
2013-01-01
A common approach for high-accuracy sensor fusion based on 9D inertial measurement unit data is Kalman filtering. State-of-the-art floating-point filter algorithms differ in their computational complexity; nevertheless, real-time operation on a low-power microcontroller at high sampling rates is not possible. This work presents algorithmic modifications to reduce the computational demands of a two-step minimum-order Kalman filter. Furthermore, the required bit-width of a fixed-point filter version is explored. For evaluation, real-world data captured using an Xsens MTx inertial sensor is used. Changes in computational latency and orientation estimation accuracy due to the proposed algorithmic modifications and fixed-point number representation are evaluated in detail on a variety of processing platforms, enabling on-board processing on wearable sensor platforms.
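The bit-width question can be made concrete with a scalar Kalman-style update run side by side in floating point and in Q16 fixed point (16 fractional bits). All numbers and the one-dimensional model are illustrative, not the paper's two-step filter:

```python
FRAC = 16                        # Q16 fixed point: 16 fractional bits
def to_fx(v):   return int(round(v * (1 << FRAC)))
def from_fx(v): return v / (1 << FRAC)
def mul_fx(a, b): return (a * b) >> FRAC

def kalman_float(z_seq, r=0.04, q=0.001):
    x, p = 0.0, 1.0              # state estimate and its variance
    for z in z_seq:
        p += q                   # predict
        k = p / (p + r)          # Kalman gain
        x += k * (z - x)         # update with measurement z
        p *= (1.0 - k)
    return x

def kalman_fixed(z_seq, r=0.04, q=0.001):
    x, p = to_fx(0.0), to_fx(1.0)
    r, q, one = to_fx(r), to_fx(q), to_fx(1.0)
    for z in z_seq:
        z = to_fx(z)
        p += q
        k = (p << FRAC) // (p + r)    # fixed-point divide for the gain
        x += mul_fx(k, z - x)
        p = mul_fx(p, one - k)
    return from_fx(x)

zs = [0.5, 0.48, 0.52, 0.49, 0.51, 0.50] * 10
xf, xq = kalman_float(zs), kalman_fixed(zs)
print(abs(xf - xq))              # Q16 tracks the float filter closely here
```

Sweeping FRAC down reveals where quantization noise starts to dominate estimation error, which is the kind of trade-off the bit-width exploration above quantifies for orientation filtering.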
Climate Benchmark Missions: CLARREO
NASA Technical Reports Server (NTRS)
Wielicki, Bruce A.; Young, David F.
2010-01-01
CLARREO (Climate Absolute Radiance and Refractivity Observatory) is one of the four Tier 1 missions recommended by the recent NRC decadal survey report on Earth Science and Applications from Space (NRC, 2007). The CLARREO mission addresses the need to rigorously observe climate change on decade time scales and to use decadal change observations as the most critical method to determine the accuracy of climate change projections such as those used in the Fourth Assessment Report of the Intergovernmental Panel on Climate Change (IPCC AR4). A rigorously known accuracy of both decadal change observations as well as climate projections is critical in order to enable sound policy decisions. The CLARREO mission accomplishes this critical objective through highly accurate and SI traceable decadal change observations sensitive to many of the key uncertainties in climate radiative forcings, responses, and feedbacks that in turn drive uncertainty in current climate model projections. The same uncertainties also lead to uncertainty in attribution of climate change to anthropogenic forcing. The CLARREO breakthrough in decadal climate change observations is to achieve the required levels of accuracy and traceability to SI standards for a set of observations sensitive to a wide range of key decadal change variables. These accuracy levels are determined both by the projected decadal changes as well as by the background natural variability that such signals must be detected against. The accuracy for decadal change traceability to SI standards includes uncertainties of calibration, sampling, and analysis methods. Unlike most other missions, all of the CLARREO requirements are judged not by instantaneous accuracy, but instead by accuracy in large time/space scale average decadal changes. 
Given the focus on decadal climate change, the NRC Decadal Survey concluded that the single most critical issue for decadal change observations was their lack of accuracy and low confidence in observing the small but critical climate change signals. CLARREO is the recommended attack on this challenge, and builds on the last decade of climate observation advances in the Earth Observing System as well as metrological advances at NIST (National Institute of Standards and Technology) and other standards laboratories.
Neville, R S; Stonham, T J; Glover, R J
2000-01-01
In this article we present a methodology that partially pre-calculates the weight updates of the backpropagation learning regime and obtains high-accuracy function mapping. The paper shows how to implement neural units in a digital formulation which enables the weights to be quantised to 8 bits and the activations to 9 bits. A novel methodology is introduced to enable the accuracy of sigma-pi units to be increased by expanding their internal state space. We also introduce a novel means of implementing bit-streams in ring memories instead of utilising shift registers. The investigation utilises digital "higher-order" sigma-pi nodes and studies continuous-input RAM-based sigma-pi units. The units are trained with the backpropagation learning regime to learn functions to a high accuracy. The neural model is the sigma-pi unit, which can be implemented in digital microelectronic technology. The ability to perform tasks that require the input of real-valued information is one of the central requirements of any cognitive system that utilises artificial neural network methodologies. In this article we present recent research which investigates a technique that can be used for mapping accurate real-valued functions to RAM-nets. One of our goals was to achieve accuracies of better than 1% for target output functions in the range Y ∈ [0,1]; this is equivalent to an average mean square error (MSE) over all training vectors of 0.0001, or an error modulus of 0.01. We present a development of the sigma-pi node which enables the provision of high-accuracy outputs. The sigma-pi neural model was initially developed by Gurney (Learning in nets of structured hypercubes. PhD Thesis, Department of Electrical Engineering, Brunel University, Middlesex, UK, 1989; available as Technical Memo CN/R/144). Gurney's neuron model, the Time Integration Node (TIN), utilises an activation that was derived from a bit-stream.
In this article we present a new methodology for storing sigma-pi node's activations as single values which are averages. In the course of the article we state what we define as a real number; how we represent real numbers and input of continuous values in our neural system. We show how to utilise the bounded quantised site-values (weights) of sigma-pi nodes to make training of these neurocomputing systems simple, using pre-calculated look-up tables to train the nets. In order to meet our accuracy goal, we introduce a means of increasing the bandwidth capability of sigma-pi units by expanding their internal state-space. In our implementation we utilise bit-streams when we calculate the real-valued outputs of the net. To simplify the hardware implementation of bit-streams we present a method of mapping them to RAM-based hardware using 'ring memories'. Finally, we study the sigma-pi units' ability to generalise once they are trained to map real-valued, high accuracy, continuous functions. We use sigma-pi units as they have been shown to have shorter training times than their analogue counterparts and can also overcome some of the drawbacks of semi-linear units (Gurney, 1992. Neural Networks, 5, 289-303).
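A quick sanity check on the quoted bit-widths: storing activations in 9 bits quantizes [0,1] into 2^9 - 1 = 511 steps, and uniform-quantization theory puts the resulting MSE near step^2/12, far below the 0.0001 target. A sketch (the smooth target function is arbitrary, not one of the paper's benchmarks):

```python
BITS = 9
LEVELS = (1 << BITS) - 1          # 511 quantization steps over [0, 1]

def quantize(y):
    """Round y in [0,1] to the nearest 9-bit activation level."""
    return round(y * LEVELS) / LEVELS

xs = [i / 1000.0 for i in range(1001)]
target = [0.5 + 0.5 * (x ** 2) for x in xs]       # any smooth map into [0,1]
quant = [quantize(y) for y in target]
mse = sum((a - b) ** 2 for a, b in zip(target, quant)) / len(xs)
print(mse < 0.0001)               # 9-bit quantization noise is far below target
```

So the representation itself leaves ample headroom; as the article argues, meeting the 1% goal is then a matter of training and of expanding the units' internal state space.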
Sackmann, Eric K; Majlof, Lars; Hahn-Windgassen, Annett; Eaton, Brent; Bandzava, Temo; Daulton, Jay; Vandenbroucke, Arne; Mock, Matthew; Stearns, Richard G; Hinkson, Stephen; Datwani, Sammy S
2016-02-01
Acoustic liquid handling uses high-frequency acoustic signals that are focused on the surface of a fluid to eject droplets with high accuracy and precision for various life science applications. Here we present a multiwell source plate, the Echo Qualified Reservoir (ER), which can acoustically transfer over 2.5 mL of fluid per well in 25-nL increments using an Echo 525 liquid handler. We demonstrate two Labcyte technologies, Dynamic Fluid Analysis (DFA) methods and a high-voltage (HV) grid, that are required to maintain accurate and precise fluid transfers from the ER at this volume scale. DFA methods were employed to dynamically assess the energy requirements of the fluid and adjust the acoustic ejection parameters to maintain a constant-velocity droplet. Furthermore, we demonstrate that the HV grid enhances droplet velocity and coalescence at the destination plate. These technologies enabled 5-µL per destination well transfers to a 384-well plate, with accuracy and precision values better than 4%. Last, we used the ER and Echo 525 liquid handler to perform a quantitative polymerase chain reaction (qPCR) assay to demonstrate an application that benefits from the flexibility and larger volume capabilities of the ER. © 2015 Society for Laboratory Automation and Screening.
Zhou, Hong; Liu, Jing; Xu, Jing-Juan; Zhang, Shu-Sheng; Chen, Hong-Yuan
2018-03-21
Modern optical detection technology plays a critical role in current clinical detection due to its high sensitivity and accuracy. However, higher requirements, such as extremely high detection sensitivity, have been put forward by the clinical need for early detection and diagnosis of malignant tumors, which is significant for tumor therapy. The technology of isothermal amplification with nucleic acids opens up avenues for meeting this requirement. Recent reports have shown that nucleic acid amplification-assisted modern optical sensing interfaces have achieved satisfactory sensitivity and accuracy, high speed and specificity. Compared with isothermal amplification technology designed to work entirely in a solution system, solid biosensing interfaces demonstrate better performance in stability and sensitivity due to their ease of separation from the reaction mixture and the better signal transduction on these optical nano-biosensing interfaces. Also, the flexibility and designability in the construction of these nano-biosensing interfaces provide a promising research topic for the ultrasensitive detection of cancer. In this review, we describe the construction of the burgeoning number of optical nano-biosensing interfaces assisted by a nucleic acid amplification strategy, and provide insightful views on: (1) approaches to the smart fabrication of an optical nano-biosensing interface, (2) biosensing mechanisms via the nucleic acid amplification method, and (3) the newest strategies and future perspectives.
High resolution frequency analysis techniques with application to the redshift experiment
NASA Technical Reports Server (NTRS)
Decher, R.; Teuber, D.
1975-01-01
High resolution frequency analysis methods, with application to the gravitational probe redshift experiment, are discussed. For this experiment a resolution of .00001 Hz is required to measure a slowly varying, low frequency signal of approximately 1 Hz. Major building blocks include fast Fourier transform, discrete Fourier transform, Lagrange interpolation, golden section search, and adaptive matched filter technique. Accuracy, resolution, and computer effort of these methods are investigated, including test runs on an IBM 360/65 computer.
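The resolution-versus-record-length trade-off behind this record can be illustrated with a small sketch (not the authors' IBM 360 implementation): the intrinsic FFT resolution is 1/T, and zero-padding interpolates the spectrum so a peak can be located well below one bin.

```python
import numpy as np

def estimate_frequency(signal, fs, pad_factor=64):
    """Locate a spectral peak with sub-bin resolution by zero-padding the FFT.
    The intrinsic resolution is fs/n (= 1/T); zero-padding interpolates the
    spectrum so the peak can be read off on a much finer grid."""
    n = len(signal)
    spectrum = np.fft.rfft(signal * np.hanning(n), n * pad_factor)
    peak = np.argmax(np.abs(spectrum))
    return peak * fs / (n * pad_factor)

# a slowly varying ~1 Hz signal sampled for 200 s at 10 Hz (illustrative values)
fs, f_true = 10.0, 1.2345
t = np.arange(0, 200, 1 / fs)
f_hat = estimate_frequency(np.sin(2 * np.pi * f_true * t), fs)
```

Reaching the 0.00001 Hz resolution quoted above requires a correspondingly long record (T on the order of the reciprocal resolution) plus refinements such as the interpolation and matched-filter techniques the report lists.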
Absolute metrology for space interferometers
NASA Astrophysics Data System (ADS)
Salvadé, Yves; Courteville, Alain; Dändliker, René
2017-11-01
A crucial issue for space-based interferometers is the laser metrology system that monitors optical path differences with very high accuracy. Although classical high-resolution laser interferometers using a single wavelength are well developed, this type of incremental interferometer has a severe drawback: any interruption of the interferometer signal results in the loss of the zero reference, which requires a new calibration starting at zero optical path difference. In this paper we propose an absolute metrology system based on multiple-wavelength interferometry.
Computational simulation and aerodynamic sensitivity analysis of film-cooled turbines
NASA Astrophysics Data System (ADS)
Massa, Luca
A computational tool is developed for the time accurate sensitivity analysis of the stage performance of hot gas, unsteady turbine components. An existing turbomachinery internal flow solver is adapted to the high temperature environment typical of the hot section of jet engines. A real gas model and film cooling capabilities are successfully incorporated in the software. The modifications to the existing algorithm are described; both the theoretical model and the numerical implementation are validated. The accuracy of the code in evaluating turbine stage performance is tested using a turbine geometry typical of the last stage of aeronautical jet engines. The results of the performance analysis show that the predictions differ from the experimental data by less than 3%. A reliable grid generator, applicable to the domain discretization of the internal flow field of axial flow turbines, is developed. A sensitivity analysis capability is added to the flow solver, rendering it able to accurately evaluate the derivatives of the time varying output functions. The complex Taylor's series expansion (CTSE) technique is reviewed, and two formulations of it are used to demonstrate the accuracy and time dependency of the differentiation process. The results are compared with finite difference (FD) approximations. The CTSE is more accurate than the FD, but less efficient. A "black box" differentiation of the source code, resulting from the automated application of the CTSE, generates high fidelity sensitivity algorithms, but with low computational efficiency and high memory requirements. New formulations of the CTSE are proposed and applied. Selective differentiation of the method for solving the non-linear implicit residual equation leads to sensitivity algorithms with the same accuracy but improved run time. The time dependent sensitivity derivatives are computed in run times comparable to those required by the FD approach.
Caggiano, Michael D; Tinkham, Wade T; Hoffman, Chad; Cheng, Antony S; Hawbaker, Todd J
2016-10-01
The wildland-urban interface (WUI), the area where human development encroaches on undeveloped land, is expanding throughout the western United States, resulting in increased wildfire risk to homes and communities. Although census based mapping efforts have provided insights into the pattern of development and expansion of the WUI at regional and national scales, these approaches do not provide sufficient detail for fine-scale fire and emergency management planning, which requires maps of individual building locations. Although fine-scale maps of the WUI have been developed, they are often limited in their spatial extent, have unknown accuracies and biases, and are costly to update over time. In this paper we assess a semi-automated Object Based Image Analysis (OBIA) approach that utilizes 4-band multispectral National Aerial Image Program (NAIP) imagery for the detection of individual buildings within the WUI. We evaluate this approach by comparing the accuracy and overall quality of extracted buildings to a building footprint control dataset. In addition, we assessed the effects of buffer distance, topographic conditions, and building characteristics on the accuracy and quality of building extraction. The overall accuracy and quality of our approach was positively related to buffer distance, with accuracies ranging from 50 to 95% for buffer distances from 0 to 100 m. Our results also indicate that building detection was sensitive to building size, with smaller outbuildings (footprints less than 75 m²) having detection rates below 80% and larger residential buildings having detection rates above 90%. These findings demonstrate that this approach can successfully identify buildings in the WUI across diverse landscapes, achieving high accuracies at buffer distances appropriate for most fire management applications while overcoming the cost and time constraints associated with traditional approaches.
This study is unique in that it evaluates the ability of an OBIA approach to extract highly detailed data on building locations in a WUI setting.
Positional calibration of an ultrasound image-guided robotic breast biopsy system.
Nelson, Thomas R; Tran, Amy; Fakourfar, Hourieh; Nebeker, Jakob
2012-03-01
Precision biopsy of small lesions is essential in providing high-quality patient diagnosis and management. Localization depends on high-quality imaging. We have developed a dedicated, fully automatic volume breast ultrasound (US) imaging system for early breast cancer detection. This work focuses on development of an image-guided robotic biopsy system that is integrated with the volume breast US system for performing minimally invasive breast biopsies. The objective of this work was to assess the positional accuracy of the robotic system for breast biopsy. We have adapted a compact robotic arm for performing breast biopsy. The arm incorporates a force torque sensor and is modified to accommodate breast biopsy sampling needles mounted on the robot end effector. Volume breast US images are used as input to a targeting algorithm that provides the physician with control of biopsy device guidance and trajectory optimization. In this work, the positional accuracy was evaluated using (1) a light-emitting diode (LED) mounted on the end effector and (2) an LED mounted on the end of a biopsy needle, each of which was imaged for each robot controller position as part of mapping the positional accuracy throughout a volume that would contain the breast. We measured the error in each location and the cumulative error. Robotic device performance over the volume provided mean accuracy ± SD of 0.76 ± 0.13 mm (end effector) and 0.55 ± 0.13 mm (needle sample location), sufficient for a targeting accuracy within ±1 mm, which is suitable for clinical use. Depth positioning error was also small: 0.38 ± 0.03 mm. Reproducibility was excellent, with less than 0.5% variation. Overall accuracy and reproducibility of the compact robotic device were excellent, well within clinical biopsy performance requirements. Volume breast US data provide high-quality input to a biopsy sampling algorithm under physician control.
Robotic devices may provide more precise device placement, assisting physicians with biopsy procedures.
SFOL Pulse: A High Accuracy DME Pulse for Alternative Aircraft Position and Navigation.
Kim, Euiho; Seo, Jiwon
2017-09-22
In the Federal Aviation Administration's (FAA) performance based navigation strategy announced in 2016, the FAA stated that it would retain and expand the Distance Measuring Equipment (DME) infrastructure to ensure resilient aircraft navigation capability during a Global Navigation Satellite System (GNSS) outage. However, the main drawback of the DME as a GNSS backup system is that it requires a significant expansion of the current DME ground infrastructure due to its poor distance measuring accuracy of over 100 m. This paper introduces a method to improve DME distance measuring accuracy by using a new DME pulse shape. The proposed pulse shape was developed using Genetic Algorithms and is less susceptible to multipath effects, so the ranging error is reduced by 36.0-77.3% compared to the Gaussian and Smoothed Concave Polygon DME pulses, depending on the noise environment.
Design of all-weather celestial navigation system
NASA Astrophysics Data System (ADS)
Sun, Hongchi; Mu, Rongjun; Du, Huajun; Wu, Peng
2018-03-01
To realize autonomous navigation in the atmosphere, an all-weather celestial navigation system is designed. The research covers a comentropy (information entropy) based discrimination method and an adaptive navigation algorithm based on the P value. The comentropy discrimination method enables autonomous switching between the two celestial navigation modes, starlight and radio. Finally, an adaptive filtering algorithm based on the P value is proposed, which greatly improves the disturbance rejection capability of the system. The experimental results show that the three-axis attitude accuracy is better than 10″ and that the system can work in all weather. In a perturbed environment, the position accuracy of the integrated navigation system is increased by 20% compared with the traditional method. The system basically meets the requirements of all-weather celestial navigation, offering stability, reliability, high accuracy, and strong anti-interference capability.
Research on Horizontal Accuracy Method of High Spatial Resolution Remotely Sensed Orthophoto Image
NASA Astrophysics Data System (ADS)
Xu, Y. M.; Zhang, J. X.; Yu, F.; Dong, S.
2018-04-01
At present, in the inspection and acceptance of high spatial resolution remotely sensed orthophoto images, horizontal accuracy is mostly tested and evaluated using a set of check points with the same accuracy and reliability. However, such a set is difficult to obtain in areas where field measurement is difficult and high-accuracy reference data are scarce, so it is difficult to test and evaluate the horizontal accuracy of the orthophoto image there. This uncertainty in horizontal accuracy has become a bottleneck for the application of spaceborne high-resolution remote sensing imagery and the expansion of its scope of service. This paper therefore proposes a new method to test the horizontal accuracy of orthophoto images using check points with differing accuracy and reliability, sourced from both high-accuracy reference data and field measurement. The new method solves the horizontal accuracy detection of orthophoto images in difficult areas and provides a basis for supplying reliable orthophoto images to users.
NASA Technical Reports Server (NTRS)
Clegg, R. H.; Scherz, J. P.
1975-01-01
Successful aerial photography depends on aerial cameras providing acceptable photographs within the cost restrictions of the job. For topographic mapping, where ultimate accuracy is required, only large format mapping cameras will suffice. For mapping environmental patterns of vegetation, soils, or water pollution, 9-inch cameras often exceed accuracy and cost requirements, and small formats may be better. In choosing the best camera for environmental mapping, relative capabilities and costs must be understood. This study compares the resolution, photo interpretation potential, metric accuracy, and cost of 9-inch, 70mm, and 35mm cameras for obtaining simultaneous color and color infrared photography for environmental mapping purposes.
Carnley, Mark V.; Fulford, Janice M.; Brooks, Myron H.
2013-01-01
The Level TROLL 100 manufactured by In-Situ Inc. was evaluated by the U.S. Geological Survey (USGS) Hydrologic Instrumentation Facility (HIF) for conformance to the manufacturer’s accuracy specifications for measuring pressure throughout the device’s operating temperature range. The Level TROLL 100 is a submersible, sealed, water-level sensing device with an operating pressure range equivalent to 0 to 30 feet of water over a temperature range of −20 to 50 degrees Celsius (°C). The device met the manufacturer’s stated accuracy specifications for pressure within its temperature-compensated operating range of 0 to 50 °C. The device’s accuracy specifications did not meet established USGS requirements for primary water-stage sensors used in the operation of streamgages, but the Level TROLL 100 may be suitable for other hydrologic data-collection applications. As a note, the Level TROLL 100 is not designed to meet USGS accuracy requirements. Manufacturer accuracy specifications were evaluated, and the procedures followed and the results obtained are described in this report. USGS accuracy requirements are routinely examined and reported when instruments are evaluated at the HIF.
Huang, Chen-Yu; Keall, Paul; Rice, Adam; Colvill, Emma; Ng, Jin Aun; Booth, Jeremy T
2017-09-01
Inter-fraction and intra-fraction motion management methods are increasingly applied clinically and require the development of advanced motion platforms to facilitate testing and quality assurance program development. The aim of this study was to assess the performance of a 5 degrees-of-freedom (DoF) programmable motion platform HexaMotion (ScandiDos, Uppsala, Sweden) towards clinically observed tumor motion range, velocity, acceleration and the accuracy requirements of SABR prescribed in AAPM Task Group 142. Performance specifications for the motion platform were derived from literature regarding the motion characteristics of prostate and lung tumor targets required for real time motion management. The performance of the programmable motion platform was evaluated against (1) maximum range, velocity and acceleration (5 DoF), (2) static position accuracy (5 DoF) and (3) dynamic position accuracy using patient-derived prostate and lung tumor motion traces (3 DoF). Translational motion accuracy was compared against electromagnetic transponder measurements. Rotation was benchmarked with a digital inclinometer. The static accuracy and reproducibility for translation and rotation was <0.1 mm or <0.1°, respectively. The accuracy of reproducing dynamic patient motion was <0.3 mm. The motion platform's range met the need to reproduce clinically relevant translation and rotation ranges and its accuracy met the TG 142 requirements for SABR. The range, velocity and acceleration of the motion platform are sufficient to reproduce lung and prostate tumor motion for motion management. Programmable motion platforms are valuable tools in the investigation, quality assurance and commissioning of motion management systems in radiation oncology.
Short-term Temperature Prediction Using Adaptive Computing on Dynamic Scales
NASA Astrophysics Data System (ADS)
Hu, W.; Cervone, G.; Jha, S.; Balasubramanian, V.; Turilli, M.
2017-12-01
When predicting temperature, there are specific places and times when high accuracy predictions are harder to obtain. For example, not all sub-regions in the domain require the same amount of computing resources to generate an accurate prediction. Plateau areas might require fewer computing resources than mountainous areas because of the steeper temperature gradients in the latter. However, it is difficult to estimate beforehand the optimal allocation of computational resources, because several parameters besides orography play a role in determining forecast accuracy. The allocation of resources to perform simulations can become a bottleneck because it requires human intervention to stop jobs or start new ones. The goal of this project is to design and develop a dynamic approach to generating short-term temperature predictions that automatically determines the required computing resources and the geographic scales of the predictions based on the spatial and temporal uncertainties. The predictions and the prediction quality metrics are computed using the Analog Ensemble (AnEn) method, and the parallelization on high performance computing systems is accomplished using Ensemble Toolkit, one component of the RADICAL-Cybertools family of tools. RADICAL-Cybertools decouple the science needs from the computational capabilities by building an intermediate layer to run general ensemble patterns, regardless of the science. In this research, we show how the ensemble toolkit allows us to generate high resolution temperature forecasts at different spatial and temporal resolutions. The AnEn algorithm is run using NAM analysis and forecast data for the continental United States for a period of 2 years. The AnEn results show that the temperature forecasts perform well according to different probabilistic and deterministic statistical tests.
Structural Efficiency of Composite Struts for Aerospace Applications
NASA Technical Reports Server (NTRS)
Jegley, Dawn C.; Wu, K. Chauncey; McKenney, Martin J.; Oremont, Leonard
2011-01-01
The structural efficiency of carbon-epoxy tapered struts is considered through trade studies, detailed analysis, manufacturing and experimentation. Since some of the lunar lander struts are more highly loaded than struts used in applications such as satellites and telescopes, the primary focus of the effort is on these highly loaded struts. Lunar lander requirements include that the strut has to be tapered on both ends, complicating the design and limiting the manufacturing process. Optimal stacking sequences, geometries, and materials are determined and the sensitivity of the strut weight to each parameter is evaluated. The trade study results indicate that the most efficient carbon-epoxy struts are 30 percent lighter than the most efficient aluminum-lithium struts. Structurally efficient, highly loaded struts were fabricated and loaded in tension and compression to determine if they met the design requirements and to verify the accuracy of the analyses. Experimental evaluation of some of these struts demonstrated that they could meet the greatest Altair loading requirements in both tension and compression. These results could be applied to other vehicles requiring struts with high loading and light weight.
Luo, Shezhou; Chen, Jing M; Wang, Cheng; Xi, Xiaohuan; Zeng, Hongcheng; Peng, Dailiang; Li, Dong
2016-05-30
Vegetation leaf area index (LAI), height, and aboveground biomass are key biophysical parameters. Corn is an important and globally distributed crop, and reliable estimations of these parameters are essential for corn yield forecasting, health monitoring and ecosystem modeling. Light Detection and Ranging (LiDAR) is considered an effective technology for estimating vegetation biophysical parameters. However, the estimation accuracies of these parameters are affected by multiple factors. In this study, we first estimated corn LAI, height and biomass (R2 = 0.80, 0.874 and 0.838, respectively) using the original LiDAR data (7.32 points/m2), and the results showed that LiDAR data could accurately estimate these biophysical parameters. Second, comprehensive research was conducted on the effects of LiDAR point density, sampling size and height threshold on the estimation accuracy of LAI, height and biomass. Our findings indicated that LiDAR point density had an important effect on the estimation accuracy for vegetation biophysical parameters, however, high point density did not always produce highly accurate estimates, and reduced point density could deliver reasonable estimation results. Furthermore, the results showed that sampling size and height threshold were additional key factors that affect the estimation accuracy of biophysical parameters. Therefore, the optimal sampling size and the height threshold should be determined to improve the estimation accuracy of biophysical parameters. Our results also implied that a higher LiDAR point density, larger sampling size and height threshold were required to obtain accurate corn LAI estimation when compared with height and biomass estimations. In general, our results provide valuable guidance for LiDAR data acquisition and estimation of vegetation biophysical parameters using LiDAR data.
High-coherence mid-infrared dual-comb spectroscopy spanning 2.6 to 5.2 μm
NASA Astrophysics Data System (ADS)
Ycas, Gabriel; Giorgetta, Fabrizio R.; Baumann, Esther; Coddington, Ian; Herman, Daniel; Diddams, Scott A.; Newbury, Nathan R.
2018-04-01
Mid-infrared dual-comb spectroscopy has the potential to supplant conventional Fourier-transform spectroscopy in applications requiring high resolution, accuracy, signal-to-noise ratio and speed. Until now, mid-infrared dual-comb spectroscopy has been limited to narrow optical bandwidths or low signal-to-noise ratios. Using digital signal processing and broadband frequency conversion in waveguides, we demonstrate a mid-infrared dual-comb spectrometer covering 2.6 to 5.2 µm with comb-tooth resolution, sub-MHz frequency precision and accuracy, and a spectral signal-to-noise ratio as high as 6,500. As a demonstration, we measure the highly structured, broadband cross-section of propane from 2,840 to 3,040 cm⁻¹, the complex phase/amplitude spectra of carbonyl sulfide from 2,000 to 2,100 cm⁻¹, and of a methane, acetylene and ethane mixture from 2,860 to 3,400 cm⁻¹. The combination of broad bandwidth, comb-mode resolution and high brightness will enable accurate mid-infrared spectroscopy in precision laboratory experiments and non-laboratory applications including open-path atmospheric gas sensing, process monitoring and combustion.
Time-optimized laser micro machining by using a new high dynamic and high precision galvo scanner
NASA Astrophysics Data System (ADS)
Jaeggi, Beat; Neuenschwander, Beat; Zimmermann, Markus; Zecherle, Markus; Boeckler, Ernst W.
2016-03-01
High accuracy, quality and throughput are key factors in laser micro machining. To reach these goals, the ablation process, the machining strategy and the scanning device have to be optimized. The precision is influenced by the accuracy of the galvo scanner and can be further enhanced by synchronizing the movement of the mirrors with the laser pulse train. To maintain a high machining quality, i.e. minimum surface roughness, the pulse-to-pulse distance also has to be optimized. The highest ablation efficiency is obtained by choosing the proper laser peak fluence together with the highest specific removal rate. The throughput can then be enhanced by simultaneously increasing the average power, the repetition rate and the scanning speed so as to preserve the fluence and the pulse-to-pulse distance. A high scanning speed is therefore of essential importance. To guarantee the required accuracy even at high scanning speeds, a new interferometry based encoder technology was used that provides a high quality signal for closed-loop control of the galvo scanner position. The low inertia encoder design enables a very dynamic scanner system, which can be driven to very high line speeds by a specially adapted control solution. We present results with marking speeds up to 25 m/s using an f = 100 mm objective, obtained with the new scanning system and scanner tuning while maintaining a precision of about 5 μm. Furthermore, we show that, especially for short line lengths, the machining time can be minimized by choosing the proper speed, which need not be the maximum one.
Fundamental Techniques for High Photon Energy Stability of a Modern Soft X-ray Beamline
DOE Office of Scientific and Technical Information (OSTI.GOV)
Senba, Yasunori; Kishimoto, Hikaru; Miura, Takanori
2007-01-19
High energy resolution and high energy stability are required for modern soft x-ray beamlines. Attempts at improving the energy stability are presented in this paper. Several measures have been adopted to avoid energy instability. It is clearly observed that an unstable temperature of the support frame of the optical elements results in photon energy instability. A photon energy stability of 10 meV over half a day is achieved by controlling the temperature with an accuracy of 0.01 °C.
High-order ENO schemes applied to two- and three-dimensional compressible flow
NASA Technical Reports Server (NTRS)
Shu, Chi-Wang; Erlebacher, Gordon; Zang, Thomas A.; Whitaker, David; Osher, Stanley
1991-01-01
High order essentially non-oscillatory (ENO) finite difference schemes are applied to the 2-D and 3-D compressible Euler and Navier-Stokes equations. Practical issues, such as vectorization, efficiency of coding, cost comparison with other numerical methods, and accuracy degeneracy effects, are discussed. Numerical examples are provided which are representative of computational problems of current interest in transition and turbulence physics. These require both nonoscillatory shock capturing and high resolution for detailed structures in the smooth regions and demonstrate the advantage of ENO schemes.
Kobler, Jan-Philipp; Schoppe, Michael; Lexow, G Jakob; Rau, Thomas S; Majdani, Omid; Kahrs, Lüder A; Ortmaier, Tobias
2014-11-01
Minimally invasive cochlear implantation is a surgical technique which requires drilling a canal from the mastoid surface toward the basal turn of the cochlea. The choice of an appropriate drilling strategy is hypothesized to have significant influence on the achievable targeting accuracy. Therefore, a method is presented to analyze the contribution of the drilling process and drilling tool to the targeting error isolated from other error sources. The experimental setup to evaluate the borehole accuracy comprises a drill handpiece attached to a linear slide as well as a highly accurate coordinate measuring machine (CMM). Based on the specific requirements of the minimally invasive cochlear access, three drilling strategies, mainly characterized by different drill tools, are derived. The strategies are evaluated by drilling into synthetic temporal bone substitutes containing air-filled cavities to simulate mastoid cells. Deviations from the desired drill trajectories are determined based on measurements using the CMM. Using the experimental setup, a total of 144 holes were drilled for accuracy evaluation. Errors resulting from the drilling process depend on the specific geometry of the tool as well as the angle at which the drill contacts the bone surface. Furthermore, there is a risk of the drill bit deflecting due to synthetic mastoid cells. A single-flute gun drill combined with a pilot drill of the same diameter provided the best results for simulated minimally invasive cochlear implantation, based on an experimental method that may be used for testing further drilling process improvements.
Methods of Laser, Non-Linear, and Fiber Optics in Studying Fundamental Problems of Astrophysics
NASA Astrophysics Data System (ADS)
Kryukov, P. G.
2018-04-01
Precise measurements of Doppler shifts of lines in stellar spectra, which allow the radial velocity to be measured, are an important field of astrophysical studies. A remarkable feature of Doppler spectroscopy is the possibility to reliably measure quite small variations of the radial velocity (its acceleration, in fact) over long periods of time. The influence of a planet on a star is an example of such a variation: under the influence of a planet rotating around it, the star exhibits periodic motion manifested in the Doppler shift of the stellar spectrum. Precise measurements of this shift have made it possible to indirectly discover planets outside the Solar system (exoplanets). An important further challenge is the search for Earth-type exoplanets within the habitable zone. For this purpose, the accuracy of spectral measurements must allow one to determine radial velocity variations at the level of centimeters per second over timespans of about a year. Such measurements over periods of 10-15 years would also serve as a direct method for determining the assumed acceleration of the Universe's expansion. However, the accuracy of spectroscopic measurements required for this exceeds the possibilities of traditional spectroscopy (an iodine cell, spectral lamps). Methods for radically improving astronomical Doppler spectroscopy to attain the required measurement accuracy of Doppler shifts are considered. The issue of precise calibration can be solved by creating a laser optical frequency generator system of exceptionally high accuracy and stability.
Hardware friendly probabilistic spiking neural network with long-term and short-term plasticity.
Hsieh, Hung-Yi; Tang, Kea-Tiong
2013-12-01
This paper proposes a probabilistic spiking neural network (PSNN) with unimodal weight distribution, possessing long- and short-term plasticity. The proposed algorithm is derived from both arithmetic gradient-descent calculation and bioinspired algorithms. The algorithm is benchmarked on the Iris and Wisconsin breast cancer (WBC) data sets. The network features fast convergence and high accuracy: in the experiments, the PSNN took no more than 40 epochs to converge, and the average testing accuracy for the Iris and WBC data is 96.7% and 97.2%, respectively. To test the usefulness of the PSNN in real-world applications, it was also evaluated on odor data collected by our self-developed electronic nose (e-nose). Compared with the algorithm (K-nearest neighbor) that has the highest classification accuracy in the e-nose for the same odor data, the classification accuracy of the PSNN is only 1.3% lower, but the memory requirement can be reduced by at least 40%. All the experiments suggest that the PSNN is hardware friendly. First, it requires only nine-bit weight resolution for training and testing. Second, the PSNN can learn complex data sets with a small number of neurons, which in turn reduces the cost of VLSI implementation. In addition, the algorithm is insensitive to synaptic noise and to the parameter variation induced by VLSI fabrication. Therefore, the algorithm can be implemented in either software or hardware, making it suitable for wider application.
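The nine-bit weight resolution reported above is a fixed-point constraint that can be made concrete with a short sketch (illustrative only; the function name and full-scale value are assumptions, not the paper's hardware mapping):

```python
import numpy as np

def quantize_weights(w, bits=9, w_max=1.0):
    """Round weights onto a signed fixed-point grid of the given bit width.

    Illustrative sketch of what "nine-bit weight resolution" means
    numerically; not the PSNN's actual implementation.
    """
    levels = 2 ** (bits - 1) - 1              # 255 magnitude levels for 9 bits
    q = np.clip(np.round(w / w_max * levels), -levels, levels)
    return q / levels * w_max

w = np.array([0.5, -0.123456, 0.999])
wq = quantize_weights(w)                      # error bounded by w_max / (2 * 255)
```

Training under such a constraint is what makes an algorithm cheap to realize in VLSI: the synaptic memory shrinks and analog non-idealities comparable to the quantization step no longer matter.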
McQueen, Robert Brett; Breton, Marc D; Craig, Joyce; Holmes, Hayden; Whittington, Melanie D; Ott, Markus A; Campbell, Jonathan D
2018-04-01
The objective was to model clinical and economic outcomes of self-monitoring blood glucose (SMBG) devices with varying error ranges and strip prices for type 1 and insulin-treated type 2 diabetes patients in England. We programmed a simulation model that included separate risk and complication estimates by type of diabetes and evidence from in silico modeling validated by the Food and Drug Administration. Changes in SMBG error were associated with changes in hemoglobin A1c (HbA1c) and, separately, with changes in hypoglycemia. Markov cohort simulation estimated clinical and economic outcomes. An SMBG device with 8.4% error and strip price of £0.30 (exceeding the accuracy requirements of International Organization for Standardization [ISO] 15197:2013/EN ISO 15197:2015) was compared to a device with 15% error (accuracy meeting ISO 15197:2013/EN ISO 15197:2015) and price of £0.20. Outcomes were lifetime costs, quality-adjusted life years (QALYs) and incremental cost-effectiveness ratios (ICERs). With SMBG errors associated with changes in HbA1c only, the ICER was £3064 per QALY in type 1 diabetes and £264 668 per QALY in insulin-treated type 2 diabetes for an SMBG device with 8.4% versus 15% error. With SMBG errors associated with hypoglycemic events only, the device exceeding accuracy requirements was cost-saving and more effective in insulin-treated type 1 and type 2 diabetes. Investment in devices with higher strip prices but improved accuracy (less error) appears to be an efficient strategy for insulin-treated diabetes patients at high risk of severe hypoglycemia.
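The ICERs quoted here follow the standard definition, incremental cost divided by incremental QALYs, with cheaper-and-more-effective comparisons reported as cost-saving ("dominant"). A minimal sketch with hypothetical lifetime totals (the study's actual figures come from the Markov cohort model, not from this toy calculation):

```python
def icer(cost_new, cost_ref, qaly_new, qaly_ref):
    """Incremental cost-effectiveness ratio: extra cost per extra QALY."""
    d_cost = cost_new - cost_ref
    d_qaly = qaly_new - qaly_ref
    if d_cost < 0 and d_qaly > 0:
        return "dominant (cost-saving and more effective)"
    return d_cost / d_qaly

# hypothetical lifetime totals for a more accurate vs. reference device
ratio = icer(31_000, 30_000, 10.33, 10.00)    # about 3030 pounds per QALY
```

A strategy is then judged against a willingness-to-pay threshold (e.g. £20 000-£30 000 per QALY in England), which is why the same accuracy gain can look efficient in type 1 diabetes and inefficient in insulin-treated type 2 diabetes.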
Measuring water level in rivers and lakes from lightweight Unmanned Aerial Vehicles
NASA Astrophysics Data System (ADS)
Bandini, Filippo; Jakobsen, Jakob; Olesen, Daniel; Reyna-Gutierrez, Jose Antonio; Bauer-Gottwein, Peter
2017-05-01
The assessment of hydrologic dynamics in rivers, lakes, reservoirs and wetlands requires measurements of water level, its temporal and spatial derivatives, and the extent and dynamics of open water surfaces. Motivated by the declining number of ground-based measurement stations, research efforts have been devoted to the retrieval of these hydraulic properties from spaceborne platforms in the past few decades. However, due to coarse spatial and temporal resolutions, spaceborne missions have several limitations when assessing the water level of terrestrial surface water bodies and determining complex water dynamics. Unmanned Aerial Vehicles (UAVs) can fill the gap between spaceborne and ground-based observations, and provide data with high spatial resolution and dense temporal coverage, with quick turn-around times and flexible payload designs. This study focused on categorizing and testing sensors that comply with the weight constraint of small UAVs (around 1.5 kg) and are capable of measuring the range to the water surface. Subtracting the measured range from the vertical position retrieved by the onboard Global Navigation Satellite System (GNSS) receiver, we can determine the water level (orthometric height). Three different ranging payloads, which consisted of a radar, a sonar and an in-house developed camera-based laser distance sensor (CLDS), have been evaluated in terms of accuracy, precision, maximum ranging distance and beam divergence. After numerous flights, the relative accuracy of the overall system was estimated. A ranging accuracy better than 0.5% of the range and a maximum ranging distance of 60 m were achieved with the radar. The CLDS showed the lowest beam divergence; a narrow field of view is required to avoid contamination of the signal by interfering surroundings. With the GNSS system delivering a relative vertical accuracy better than 3-5 cm, water level can be retrieved with an overall accuracy better than 5-7 cm.
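The retrieval described above is a simple difference of observables; the sketch below spells it out, with an explicit geoid undulation term to convert the GNSS ellipsoidal height into an orthometric height (the function name and the sample numbers are illustrative assumptions):

```python
def water_level(gnss_ellipsoidal_height_m, range_to_water_m, geoid_undulation_m=0.0):
    """Orthometric water level from a UAV ranging payload.

    The onboard GNSS gives the antenna height above the ellipsoid, and the
    radar/sonar/CLDS gives the range straight down to the water surface.
    Subtracting a (here assumed known) geoid undulation converts the
    ellipsoidal result into the orthometric height used in hydrology.
    """
    return gnss_ellipsoidal_height_m - range_to_water_m - geoid_undulation_m

# e.g. antenna at 98.40 m ellipsoidal height, 45.12 m above the water, N = 39.0 m
level = water_level(98.40, 45.12, 39.0)       # about 14.28 m orthometric
```

The error budget follows the same difference: with ranging accurate to 0.5% of range and GNSS vertical accuracy of 3-5 cm, the combined water-level accuracy of 5-7 cm reported above is plausible.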
Use of partial least squares regression to impute SNP genotypes in Italian cattle breeds.
Dimauro, Corrado; Cellesi, Massimo; Gaspa, Giustino; Ajmone-Marsan, Paolo; Steri, Roberto; Marras, Gabriele; Macciotta, Nicolò P P
2013-06-05
The objective of the present study was to test the ability of the partial least squares regression technique to impute genotypes from low-density single nucleotide polymorphism (SNP) panels (i.e., 3K or 7K) to a high-density panel with 50K SNP. No pedigree information was used. Data consisted of 2093 Holstein, 749 Brown Swiss and 479 Simmental bulls genotyped with the Illumina 50K Beadchip. First, a single-breed approach was applied by using only data from Holstein animals. Then, to enlarge the training population, data from the three breeds were combined and a multi-breed analysis was performed. Accuracies of genotypes imputed using the partial least squares regression method were compared with those obtained by using the Beagle software. The impact of genotype imputation on breeding value prediction was evaluated for milk yield, fat content and protein content. In the single-breed approach, the accuracy of imputation using partial least squares regression was around 90 and 94% for the 3K and 7K platforms, respectively; corresponding accuracies obtained with Beagle were around 85% and 90%. Moreover, computing time required by the partial least squares regression method was on average around 10 times lower than computing time required by Beagle. Using the partial least squares regression method in the multi-breed approach resulted in lower imputation accuracies than using single-breed data. The impact of the SNP-genotype imputation on the accuracy of direct genomic breeding values was small. The correlation between estimates of genetic merit obtained by using imputed versus actual genotypes was around 0.96 for the 7K chip. Results of the present work suggested that the partial least squares regression imputation method could be useful to impute SNP genotypes when pedigree information is not available.
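The imputation idea, learning a multivariate linear map from the low-density panel to the missing SNPs on reference animals and rounding predictions to genotype codes 0/1/2, can be sketched on synthetic data. Ridge regression stands in here for partial least squares (both fit regularized linear maps; PLS additionally compresses predictors onto a few latent components, which is what keeps it fast on 50K panels); all data, sizes and names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

# synthetic reference panel: 200 animals, 50 "low-density" SNPs, and 20
# extra SNPs to impute, each loosely linked to the low-density panel
n, p_low, p_high = 200, 50, 20
X = rng.integers(0, 3, size=(n, p_low)).astype(float)      # genotypes 0/1/2
B_true = rng.normal(0.0, 0.08, size=(p_low, p_high))
Y = np.clip(np.round(X @ B_true + 1.0), 0, 2)              # panel to impute

def fit_linear_imputer(X, Y, lam=1.0):
    """Ridge-regularized multivariate map Y ~ X @ B (PLS stand-in)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    B = np.linalg.solve(Xc.T @ Xc + lam * np.eye(X.shape[1]), Xc.T @ Yc)
    return B, X.mean(axis=0), Y.mean(axis=0)

def impute(X_new, B, x_mean, y_mean):
    """Predict and snap back to valid genotype codes."""
    return np.clip(np.round((X_new - x_mean) @ B + y_mean), 0, 2)

B_hat, x_mean, y_mean = fit_linear_imputer(X, Y)
concordance = (impute(X, B_hat, x_mean, y_mean) == Y).mean()
```

In the study, accuracy is instead measured on masked validation genotypes against the true 50K calls, which is the quantity behind the 90-94% figures quoted above.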
Karuppiah Ramachandran, Vignesh Raja; Alblas, Huibert J; Le, Duc V; Meratnia, Nirvana
2018-05-24
In the last decade, seizure prediction systems have gained a lot of attention because of their enormous potential to largely improve the quality of life of epileptic patients. The accuracy of prediction algorithms that detect seizures in real-world applications is largely limited because brain signals are inherently uncertain and affected by various factors, such as environment, age, drug intake, etc., in addition to the internal artefacts that occur during the process of recording the brain signals. To deal with such ambiguity, researchers traditionally use active learning, which selects ambiguous data to be annotated by an expert and updates the classification model dynamically. However, selecting the particular data from a pool of large ambiguous datasets to be labelled by an expert is still a challenging problem. In this paper, we propose an active learning-based prediction framework that aims to improve the accuracy of the prediction with a minimum number of labelled data. The core technique of our framework is employing the Bernoulli-Gaussian Mixture model (BGMM) to determine the feature samples that have the most ambiguity to be annotated by an expert. By doing so, our approach facilitates expert intervention as well as increasing medical reliability. We evaluate seven different classifiers in terms of the classification time and memory required. An active learning framework built on top of the best performing classifier is evaluated in terms of the annotation effort required to achieve a high level of prediction accuracy. The results show that our approach can achieve the same accuracy as a Support Vector Machine (SVM) classifier using only 20% of the labelled data, and also improves the prediction accuracy even under noisy conditions.
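The selection step, choosing which unlabelled samples an expert should annotate, can be illustrated with generic uncertainty sampling over class posteriors. Posterior entropy here stands in for the paper's Bernoulli-Gaussian mixture ambiguity score, and the probabilities are made up:

```python
import numpy as np

def most_ambiguous(probs, k):
    """Return indices of the k samples whose predicted class posteriors
    have the highest entropy -- a generic uncertainty-sampling rule
    standing in for the BGMM-based ambiguity score of the paper."""
    p = np.clip(probs, 1e-12, 1.0)
    entropy = -(p * np.log(p)).sum(axis=1)
    return np.argsort(entropy)[-k:][::-1]     # most ambiguous first

# hypothetical posteriors for 5 unlabelled EEG segments (seizure vs. normal)
probs = np.array([[0.98, 0.02],
                  [0.55, 0.45],
                  [0.80, 0.20],
                  [0.50, 0.50],               # maximally ambiguous
                  [0.10, 0.90]])
query = most_ambiguous(probs, 2)              # these go to the expert
```

Only the queried samples are labelled and fed back into the classifier, which is how the framework reaches SVM-level accuracy with a fifth of the annotation effort.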
75 FR 82323 - Accuracy of Advertising and Notice of Insured Status
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-30
... NATIONAL CREDIT UNION ADMINISTRATION 12 CFR Part 740 RIN 3133-AD83 Accuracy of Advertising and... advertising statement rule. Specifically, insured credit unions will be required to include the statement in... requirements for the official advertising statement in print materials. DATES: Comments must be received on or...
75 FR 57465 - Sunshine Act Meeting; Open Commission Meeting; Thursday, September 23, 2010
Federal Register 2010, 2011, 2012, 2013, 2014
2010-09-21
... WIRELINE COMPETITION. TITLE: Schools and Libraries Universal Service Support Mechanism (CC Docket No. 02-6... PUBLIC SAFETY & HOMELAND SECURITY. TITLE: Wireless E911 Location Accuracy Requirements (PS Docket No. 07... PUBLIC SAFETY & HOMELAND SECURITY. TITLE: Wireless E911 Location Accuracy Requirements (PS Docket No. 07-114...
A mobile robot system for ground servicing operations on the space shuttle
NASA Astrophysics Data System (ADS)
Dowling, K.; Bennett, R.; Blackwell, M.; Graham, T.; Gatrall, S.; O'Toole, R.; Schempf, H.
1992-11-01
A mobile system for space shuttle servicing, the Tessellator, has been configured, designed and is currently being built and integrated. Robot tasks include chemical injection and inspection of the shuttle's thermal protection system. This paper outlines tasks, rationale, and facility requirements for the development of this system. A detailed look at the mobile system and manipulator follows, with a look at mechanics, electronics, and software. Salient features of the mobile robot include omnidirectionality, high reach, and high stiffness and accuracy, with safety and self-reliance integral to all aspects of the design. The robot system is shown to meet task, facility, and NASA requirements in its design, resulting in unprecedented specifications for a mobile-manipulation system.
A mobile robot system for ground servicing operations on the space shuttle
NASA Technical Reports Server (NTRS)
Dowling, K.; Bennett, R.; Blackwell, M.; Graham, T.; Gatrall, S.; O'Toole, R.; Schempf, H.
1992-01-01
A mobile system for space shuttle servicing, the Tessellator, has been configured, designed and is currently being built and integrated. Robot tasks include chemical injection and inspection of the shuttle's thermal protection system. This paper outlines tasks, rationale, and facility requirements for the development of this system. A detailed look at the mobile system and manipulator follows, with a look at mechanics, electronics, and software. Salient features of the mobile robot include omnidirectionality, high reach, and high stiffness and accuracy, with safety and self-reliance integral to all aspects of the design. The robot system is shown to meet task, facility, and NASA requirements in its design, resulting in unprecedented specifications for a mobile-manipulation system.
Application of a territorial-based filtering algorithm in turbomachinery blade design optimization
NASA Astrophysics Data System (ADS)
Bahrami, Salman; Khelghatibana, Maryam; Tribes, Christophe; Yi Lo, Suk; von Fellenberg, Sven; Trépanier, Jean-Yves; Guibault, François
2017-02-01
A territorial-based filtering algorithm (TBFA) is proposed as an integration tool in a multi-level design optimization methodology. The design evaluation burden is split between low- and high-cost levels in order to properly balance the cost and required accuracy in different design stages, based on the characteristics and requirements of the case at hand. TBFA is in charge of connecting those levels by selecting a given number of geometrically different promising solutions from the low-cost level to be evaluated in the high-cost level. Two test case studies, a Francis runner and a transonic fan rotor, have demonstrated the robustness and functionality of TBFA in real industrial optimization problems.
NASA Astrophysics Data System (ADS)
Le Bouteiller, P.; Benjemaa, M.; Métivier, L.; Virieux, J.
2018-03-01
Accurate numerical computation of wave traveltimes in heterogeneous media is of major interest for a large range of applications in seismics, such as phase identification, data windowing, traveltime tomography and seismic imaging. A high level of precision is needed for traveltimes and their derivatives in applications which require quantities such as amplitude or take-off angle. Even more challenging is the anisotropic case, where the general Eikonal equation is a quartic in the derivatives of traveltimes. Despite their efficiency on Cartesian meshes, finite-difference solvers are inappropriate when dealing with unstructured meshes and irregular topographies. Moreover, reaching high orders of accuracy generally requires wide stencils and high additional computational load. To go beyond these limitations, we propose a discontinuous-finite-element-based strategy which has the following advantages: (1) the Hamiltonian formalism is general enough for handling the full anisotropic Eikonal equations; (2) the scheme is suitable for any desired high-order formulation or mixing of orders (p-adaptivity); (3) the solver is explicit whatever Hamiltonian is used (no need to find the roots of the quartic); (4) the use of unstructured meshes provides the flexibility for handling complex boundary geometries such as topographies (h-adaptivity) and radiation boundary conditions for mimicking an infinite medium. The point-source factorization principles are extended to this discontinuous Galerkin formulation. Extensive tests in smooth analytical media demonstrate the high accuracy of the method. Simulations in strongly heterogeneous media illustrate the solver robustness to realistic Earth-sciences-oriented applications.
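As a point of contrast with the high-order discontinuous Galerkin solver advocated here, the kind of first-order Cartesian finite-difference scheme it improves upon (Godunov upwind with fast sweeping, isotropic case only) fits in a few lines. This is a textbook baseline, not the authors' method:

```python
import numpy as np

def fast_sweep_eikonal(speed, h, src, n_cycles=8):
    """First-order Godunov upwind fast-sweeping solver for |grad T| = 1/speed
    on a uniform 2-D grid -- the simple baseline that struggles with
    unstructured meshes, anisotropy and high orders of accuracy."""
    n, m = speed.shape
    T = np.full((n, m), np.inf)
    T[src] = 0.0
    orders = [(range(n), range(m)),
              (range(n - 1, -1, -1), range(m)),
              (range(n), range(m - 1, -1, -1)),
              (range(n - 1, -1, -1), range(m - 1, -1, -1))]
    for _ in range(n_cycles):
        for rows, cols in orders:                # Gauss-Seidel sweeps
            for i in rows:
                for j in cols:
                    if (i, j) == src:
                        continue
                    a = min(T[i - 1, j] if i > 0 else np.inf,
                            T[i + 1, j] if i < n - 1 else np.inf)
                    b = min(T[i, j - 1] if j > 0 else np.inf,
                            T[i, j + 1] if j < m - 1 else np.inf)
                    if min(a, b) == np.inf:
                        continue
                    f = h / speed[i, j]
                    if abs(a - b) >= f:          # one-sided (causal) update
                        t = min(a, b) + f
                    else:                        # two-sided quadratic update
                        t = 0.5 * (a + b + np.sqrt(2.0 * f * f - (a - b) ** 2))
                    T[i, j] = min(T[i, j], t)
    return T

# homogeneous medium: traveltime equals Euclidean distance from the source
T = fast_sweep_eikonal(np.ones((41, 41)), 1.0 / 40.0, (0, 0))
```

The known weaknesses are visible even in this sketch: accuracy is only first order (largest errors near the source singularity and along diagonal characteristics), and nothing in the stencil generalizes to unstructured meshes or to the quartic anisotropic Hamiltonians the abstract targets.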
Definition and Proposed Realization of the International Height Reference System (IHRS)
NASA Astrophysics Data System (ADS)
Ihde, Johannes; Sánchez, Laura; Barzaghi, Riccardo; Drewes, Hermann; Foerste, Christoph; Gruber, Thomas; Liebsch, Gunter; Marti, Urs; Pail, Roland; Sideris, Michael
2017-05-01
Studying, understanding and modelling global change require geodetic reference frames whose accuracy is higher than the magnitude of the effects to be studied, with high consistency and reliability worldwide. The International Association of Geodesy, which is responsible for providing a precise geodetic infrastructure for monitoring the Earth system, promotes the implementation of an integrated global geodetic reference frame that provides a reliable frame for consistent analysis and modelling of global phenomena and processes affecting the Earth's gravity field, the Earth's surface geometry and the Earth's rotation. The definition, realization, maintenance and wide utilization of the International Terrestrial Reference System guarantee a globally unified geometric reference frame with an accuracy at the millimetre level. An equivalent high-precision global physical reference frame that supports the reliable description of changes in the Earth's gravity field (such as sea level variations, mass displacements, processes associated with geophysical fluids) is missing. This paper addresses the theoretical foundations supporting the implementation of such a physical reference surface in terms of an International Height Reference System and provides guidance for the coming activities required for the practical and sustainable realization of this system. Based on conceptual approaches of physical geodesy, the requirements for a unified global height reference system are derived. In accordance with established practice, its realization as the International Height Reference Frame is outlined. Further steps for the implementation are also proposed.
Sustained Satellite Missions for Climate Data Records
NASA Technical Reports Server (NTRS)
Halpern, David
2012-01-01
Satellite CDRs possess the accuracy, longevity, and stability for sustained monitoring of critical variables to enhance understanding of the global integrated Earth system and predict future conditions. Satellite CDRs are a critical element of a global climate observing system. Satellite CDRs are a difficult challenge and require high-level managerial commitment, extensive intellectual capital, and adequate funding.
NASA Astrophysics Data System (ADS)
Shay, T. M.; Benham, Vincent; Baker, J. T.; Ward, Benjamin; Sanchez, Anthony D.; Culpepper, Mark A.; Pilkington, D.; Spring, Justin; Nelson, Douglas J.; Lu, Chunte A.
2006-08-01
A novel high-accuracy, all-electronic technique for phase locking arrays of optical fibers is demonstrated. We report the first demonstration of the only electronic phase-locking technique that does not require a reference beam. The measured phase error is λ/20. Excellent phase locking has been demonstrated for fiber amplifier arrays.
The TOPEX satellite option study
NASA Technical Reports Server (NTRS)
1982-01-01
The applicability of an existing spacecraft bus and subsystems to the requirements of ocean circulation measurements is assessed. The operational meteorological satellite families TIROS and DMSP are recommended. These programs utilize a common bus to satisfy their Earth observation missions. Note that although the instrument complements were different, the pointing accuracies were different, and, initially, the boosters were different, a high degree of commonality was achieved.
PARAGON: A Systematic, Integrated Approach to Aerosol Observation and Modeling
NASA Technical Reports Server (NTRS)
Diner, David J.; Kahn, Ralph A.; Braverman, Amy J.; Davies, Roger; Martonchik, John V.; Menzies, Robert T.; Ackerman, Thomas P.; Seinfeld, John H.; Anderson, Theodore L.; Charlson, Robert J.;
2004-01-01
Aerosols are generated and transformed by myriad processes operating across many spatial and temporal scales. Evaluation of climate models and their sensitivity to changes, such as in greenhouse gas abundances, requires quantifying natural and anthropogenic aerosol forcings and accounting for other critical factors, such as cloud feedbacks. High accuracy is required to provide sufficient sensitivity to perturbations, separate anthropogenic from natural influences, and develop confidence in inputs used to support policy decisions. Although many relevant data sources exist, the aerosol research community does not currently have the means to combine these diverse inputs into an integrated data set for maximum scientific benefit. Bridging observational gaps, adapting to evolving measurements, and establishing rigorous protocols for evaluating models are necessary, while simultaneously maintaining consistent, well understood accuracies. The Progressive Aerosol Retrieval and Assimilation Global Observing Network (PARAGON) concept represents a systematic, integrated approach to global aerosol characterization, bringing together modern measurement and modeling techniques, geospatial statistics methodologies, and high-performance information technologies to provide the machinery necessary for achieving a comprehensive understanding of how aerosol physical, chemical, and radiative processes impact the Earth system. We outline a framework for integrating and interpreting observations and models and establishing an accurate, consistent and cohesive long-term data record.
Zandvakili, Arya; Campbell, Ian; Weirauch, Matthew T.
2018-01-01
Cells use thousands of regulatory sequences to recruit transcription factors (TFs) and produce specific transcriptional outcomes. Since TFs bind degenerate DNA sequences, discriminating functional TF binding sites (TFBSs) from background sequences represents a significant challenge. Here, we show that a Drosophila regulatory element that activates Epidermal Growth Factor signaling requires overlapping, low-affinity TFBSs for competing TFs (Pax2 and Senseless) to ensure cell- and segment-specific activity. Testing available TF binding models for Pax2 and Senseless, however, revealed variable accuracy in predicting such low-affinity TFBSs. To better define parameters that increase accuracy, we developed a method that systematically selects subsets of TFBSs based on predicted affinity to generate hundreds of position-weight matrices (PWMs). Counterintuitively, we found that degenerate PWMs produced from datasets depleted of high-affinity sequences were more accurate in identifying both low- and high-affinity TFBSs for the Pax2 and Senseless TFs. Taken together, these findings reveal how TFBS arrangement can be constrained by competition rather than cooperativity and that degenerate models of TF binding preferences can improve identification of biologically relevant low affinity TFBSs. PMID:29617378
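Position-weight matrices of the kind generated here score a candidate site by summing per-position log-odds of each base against a background model. A minimal sketch (toy sites and a uniform background; the actual study builds hundreds of affinity-stratified PWMs, which this does not reproduce):

```python
import numpy as np

BASE_INDEX = {"A": 0, "C": 1, "G": 2, "T": 3}

def pwm_from_sites(sites, pseudocount=1.0):
    """Build a log-odds PWM (vs. a uniform background) from aligned sites."""
    counts = np.full((4, len(sites[0])), pseudocount)
    for site in sites:
        for j, base in enumerate(site):
            counts[BASE_INDEX[base], j] += 1.0
    probs = counts / counts.sum(axis=0)
    return np.log2(probs / 0.25)

def score_site(pwm, seq):
    """Higher scores mean the sequence better matches the binding model."""
    return sum(pwm[BASE_INDEX[b], j] for j, b in enumerate(seq))

pwm = pwm_from_sites(["ACGT"] * 10)   # toy alignment with consensus ACGT
```

The paper's central observation maps directly onto this construction: which sequences go into `sites` determines the matrix, so training on sets depleted of high-affinity sequences yields more degenerate probabilities and, counterintuitively, better recovery of low-affinity sites.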
NASA Astrophysics Data System (ADS)
Rezeki, S.; Pasaribu, A. P.
2018-03-01
Indonesia is a country where malaria remains a major public health problem. The high rates of mortality and morbidity occur due to delays in diagnosis, which is strongly influenced by the availability of diagnostic tools and of personnel with the required laboratory skill. This diagnostic study aims to compare the accuracy of a Rapid Diagnostic Test (RDT), which requires no special skill, to the gold-standard microscopic method for malaria diagnosis. The study was conducted in Subdistrict Lima Puluh, North Sumatera Province, from December 2015 to January 2016. Subjects were sampled cross-sectionally from a population with characteristics typically found in malaria patients in Subdistrict Lima Puluh. The results showed a sensitivity of 100% and a specificity of 72.4%, with a positive predictive value of 89.9% and a negative predictive value of 100%; the negative likelihood ratio is 0 and the positive likelihood ratio 27.6 for Parascreen. This research indicates that Parascreen had a high sensitivity and specificity and may be considered as an alternative for the diagnosis of malaria in Subdistrict Lima Puluh, North Sumatera Province, especially in areas where no skilled microscopist is available.
Automatic and robust extrinsic camera calibration for high-accuracy mobile mapping
NASA Astrophysics Data System (ADS)
Goeman, Werner; Douterloigne, Koen; Bogaert, Peter; Pires, Rui; Gautama, Sidharta
2012-10-01
A mobile mapping system (MMS) is the answer of the geoinformation community to the exponentially growing demand for various geospatial data with increasingly higher accuracies and captured by multiple sensors. As the mobile mapping technology is pushed to explore its use for various applications on water, rail, or road, the need emerges for an external sensor calibration procedure which is portable, fast and easy to perform. This way, sensors can be mounted and demounted depending on the application requirements without the need for time-consuming calibration procedures. A new methodology is presented to provide a high-quality external calibration of cameras which is automatic, robust and foolproof. The MMS uses an Applanix POSLV420, which is a tightly coupled GPS/INS positioning system. The cameras used are Point Grey color video cameras synchronized with the GPS/INS system. The method uses a portable, standard ranging pole which needs to be positioned on a known ground control point. For calibration, a well-studied absolute orientation problem needs to be solved. Here, a mutual information based image registration technique is studied for automatic alignment of the ranging pole. Finally, a few benchmarking tests were done under various lighting conditions, which prove the methodology's robustness by showing high absolute stereo measurement accuracies of a few centimeters.
Detecting the Water-soluble Chloride Distribution of Cement Paste in a High-precision Way.
Chang, Honglei; Mu, Song
2017-11-21
To improve the accuracy of the chloride distribution along the depth of cement paste under cyclic wet-dry conditions, a new method is proposed to obtain a high-precision chloride profile. First, paste specimens are molded, cured, and exposed to cyclic wet-dry conditions. Then, powder samples at different specimen depths are ground once the exposure age is reached. Finally, the water-soluble chloride content is detected using a silver nitrate titration method, and chloride profiles are plotted. The key to improving the accuracy of the chloride distribution along the depth is to exclude the error introduced during powderization, which is the most critical step in testing the chloride distribution. Based on the above concept, the grinding method in this protocol can be used to grind powder samples automatically, layer by layer, from the surface inward; notably, a very thin grinding thickness (less than 0.5 mm) with a minimum error of less than 0.04 mm can be obtained. The chloride profile obtained by this method better reflects the chloride distribution in specimens, which helps researchers to capture distribution features that are often overlooked. Furthermore, this method can be applied to studies in the field of cement-based materials that require high chloride distribution accuracy.
An Adaptive Failure Detector Based on Quality of Service in Peer-to-Peer Networks
Dong, Jian; Ren, Xiao; Zuo, Decheng; Liu, Hongwei
2014-01-01
The failure detector is one of the fundamental components that maintain high availability of Peer-to-Peer (P2P) networks. Under different network conditions, an adaptive failure detector based on quality of service (QoS) can achieve the detection time and accuracy required by upper applications with lower detection overhead. In P2P systems, network complexity and high churn lead to high message loss rates. To reduce the impact on detection accuracy, a baseline detection strategy based on a retransmission mechanism has been widely employed in many P2P applications; however, Chen's classic adaptive model cannot describe this kind of detection strategy. In order to provide an efficient failure detection service in P2P systems, this paper establishes a novel QoS evaluation model for the baseline detection strategy. The relationship between the detection period and the QoS is discussed, and on this basis an adaptive failure detector (B-AFD) is proposed, which can meet the quantitative QoS metrics under a changing network environment. Meanwhile, it is observed from the experimental analysis that B-AFD achieves better detection accuracy and time with lower detection overhead compared to the traditional baseline strategy and the adaptive detectors based on Chen's model. Moreover, B-AFD has better adaptability to P2P networks. PMID:25198005
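Chen's QoS model, which the abstract extends to retransmission-based baseline detection, estimates the next heartbeat arrival from a sliding window of past arrivals and adds a safety margin. A simplified sketch (the class name and the mean-gap estimator are simplifications of mine, not the paper's exact formulation):

```python
from collections import deque

class ChenFailureDetector:
    """Minimal sketch of a Chen-style adaptive heartbeat failure detector:
    the next freshness point is the estimated arrival time of the next
    heartbeat (sliding-window estimate) plus a safety margin alpha that
    trades detection time against accuracy."""

    def __init__(self, window=100, alpha=0.2):
        self.arrivals = deque(maxlen=window)
        self.alpha = alpha

    def heartbeat(self, t, seq):
        """Record the receipt time of heartbeat number `seq`."""
        self.arrivals.append((seq, t))

    def next_freshness_point(self):
        """Deadline after which the peer becomes suspected."""
        times = [t for _, t in self.arrivals]
        if len(times) < 2:
            return None
        mean_gap = (times[-1] - times[0]) / (len(times) - 1)
        return times[-1] + mean_gap + self.alpha

    def suspect(self, now):
        fp = self.next_freshness_point()
        return fp is not None and now > fp

fd = ChenFailureDetector(alpha=0.1)
for k in range(5):
    fd.heartbeat(1.0 * k, k)          # heartbeats arriving every 1.0 s
# freshness point = 4.0 + 1.0 + 0.1 = 5.1 s
```

Message loss is exactly where this model breaks down in P2P settings: a lost heartbeat pushes `now` past the freshness point even though the peer is alive, which motivates the retransmission-based baseline strategy that B-AFD models explicitly.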
Development of a three-dimensional high-order strand-grids approach
NASA Astrophysics Data System (ADS)
Tong, Oisin
Development of a novel high-order flux correction method on strand grids is presented. The method uses a combination of flux correction in the unstructured plane and summation-by-parts operators in the strand direction to achieve high-fidelity solutions. Low-order truncation errors are cancelled with accurate flux and solution gradients in the flux correction method, thereby achieving a formal order of accuracy of 3, although higher orders are often obtained, especially for highly viscous flows. In this work, the scheme is extended to high-Reynolds number computations in both two and three dimensions. Turbulence closure is achieved with a robust version of the Spalart-Allmaras turbulence model that accommodates negative values of the turbulence working variable, and the Menter SST turbulence model, which blends the k-epsilon and k-omega turbulence models for better accuracy. A major advantage of this high-order formulation is the ability to implement traditional finite volume-like limiters to cleanly capture shocked and discontinuous flows. In this work, this approach is explored via a symmetric limited positive (SLIP) limiter. Extensive verification and validation is conducted in two and three dimensions to determine the accuracy and fidelity of the scheme for a number of different cases. Verification studies show that the scheme achieves better than third order accuracy for low and high-Reynolds number flows. Cost studies show that in three dimensions, the third-order flux correction scheme requires only 30% more wall time than a traditional second-order scheme on strand grids to achieve the same level of convergence. In order to overcome meshing issues at sharp corners and other small-scale features, a unique approach to traditional geometry, coined "asymptotic geometry," is explored. Asymptotic geometry is achieved by filtering out small-scale features in a level set domain through min/max flow.
This approach is combined with a curvature based strand shortening strategy in order to qualitatively improve strand grid mesh quality.
Fully Integrated, Miniature, High-Frequency Flow Probe Utilizing MEMS Leadless SOI Technology
NASA Technical Reports Server (NTRS)
Ned, Alex; Kurtz, Anthony; Shang, Tonghuo; Goodman, Scott; Giemette. Gera (d)
2013-01-01
This work focused on developing, fabricating, and fully calibrating a flow-angle probe for aeronautics research by utilizing the latest microelectromechanical systems (MEMS), leadless silicon on insulator (SOI) sensor technology. While the concept of angle probes is not new, traditional devices had been relatively large due to fabrication constraints; often too large to resolve flow structures necessary for modern aeropropulsion measurements such as inlet flow distortions and vortices, secondary flows, etc. Measurements of this kind demanded a new approach to probe design to achieve sizes on the order of 0.1 in. (2.5 mm) diameter or smaller, capable of meeting demanding requirements for accuracy and ruggedness. This approach invoked the use of state-of-the-art processing techniques to install SOI sensor chips directly onto the probe body, thus eliminating the redundancy in sensor packaging and probe installation that has historically forced larger probe sizes. This also facilitated a better thermal match between the chip and its mount, improving stability and accuracy. Further, the leadless sensor technology with which the SOI sensing element is fabricated allows direct mounting and electrical interconnecting of the sensor to the probe body. This leadless technology allowed a rugged wire-out approach that is performed at the sensor length scale, thus achieving substantial sensor size reductions. The technology is inherently capable of high-frequency and high-accuracy performance in high temperatures and harsh environments.
A new algorithm for microwave delay estimation from water vapor radiometer data
NASA Technical Reports Server (NTRS)
Robinson, S. E.
1986-01-01
A new algorithm has been developed for the estimation of tropospheric microwave path delays from water vapor radiometer (WVR) data, which does not require site- and weather-dependent empirical parameters to produce high accuracy. Instead of taking the conventional linear approach, the new algorithm first uses the observables with an emission model to determine an approximate form of the vertical water vapor distribution, which is then explicitly integrated to estimate wet path delays in a second step. The intrinsic accuracy of this algorithm has been examined for two-channel WVR data using path delays and simulated observables computed from archived radiosonde data. It is found that annual RMS errors for a wide range of sites are in the range from 1.3 mm to 2.3 mm, in the absence of clouds. This is comparable to the best overall accuracy obtainable from conventional linear algorithms, which must be tailored to site and weather conditions using large radiosonde databases. The new algorithm's accuracy and flexibility are indications that it may be a good candidate for almost all WVR data interpretation.
Georgakis, D. Christine; Trace, David A.; Naeymi-Rad, Frank; Evens, Martha
1990-01-01
Medical expert systems require comprehensive evaluation of their diagnostic accuracy. The usefulness of these systems is limited without established evaluation methods. We propose a new methodology for evaluating the diagnostic accuracy and the predictive capacity of a medical expert system. We have adapted to the medical domain measures that have been used in the social sciences to examine the performance of human experts in the decision making process. Thus, in addition to the standard summary measures, we use measures of agreement and disagreement, and Goodman and Kruskal's λ and τ measures of predictive association. This methodology is illustrated by a detailed retrospective evaluation of the diagnostic accuracy of the MEDAS system. In a study using 270 patients admitted to the North Chicago Veterans Administration Hospital, diagnoses produced by MEDAS are compared with the discharge diagnoses of the attending physicians. The results of the analysis confirm the high diagnostic accuracy and predictive capacity of the MEDAS system. Overall, the agreement of the MEDAS system with the “gold standard” diagnosis of the attending physician has reached a 90% level.
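Goodman and Kruskal's λ has a simple closed form that can be computed directly from an agreement table. A minimal sketch (the 3×3 table below is illustrative, not data from the MEDAS study):

```python
def goodman_kruskal_lambda(table):
    """lambda_{col|row}: proportional reduction in error when the row
    category (e.g. system diagnosis) is used to predict the column
    category (e.g. physician discharge diagnosis)."""
    n = sum(sum(row) for row in table)
    col_totals = [sum(col) for col in zip(*table)]
    errors_without = n - max(col_totals)               # best guess ignoring rows
    errors_with = n - sum(max(row) for row in table)   # best guess within each row
    return (errors_without - errors_with) / errors_without

# illustrative agreement table: rows = expert-system diagnosis,
# columns = physician "gold standard" diagnosis
table = [[40, 3, 2],
         [4, 30, 1],
         [1, 2, 17]]
print(round(goodman_kruskal_lambda(table), 3))
```

A value of 1 means the row category predicts the column category perfectly; 0 means it provides no reduction in prediction error.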
Quantitative optical metrology with CMOS cameras
NASA Astrophysics Data System (ADS)
Furlong, Cosme; Kolenovic, Ervin; Ferguson, Curtis F.
2004-08-01
Recent advances in laser technology, optical sensing, and computer processing of data have led to the development of advanced quantitative optical metrology techniques for high accuracy measurements of absolute shapes and deformations of objects. These techniques provide noninvasive, remote, and full field of view information about the objects of interest. The information obtained relates to changes in shape and/or size of the objects, characterizes anomalies, and provides tools to enhance fabrication processes. Factors that influence selection and applicability of an optical technique include the required sensitivity, accuracy, and precision that are necessary for a particular application. In this paper, sensitivity, accuracy, and precision characteristics in quantitative optical metrology techniques, and specifically in optoelectronic holography (OEH) based on CMOS cameras, are discussed. Sensitivity, accuracy, and precision are investigated with the aid of National Institute of Standards and Technology (NIST) traceable gauges, demonstrating the applicability of CMOS cameras in quantitative optical metrology techniques. It is shown that the advanced nature of CMOS technology can be applied to challenging engineering applications, including the study of rapidly evolving phenomena occurring in MEMS and micromechatronics.
2009-01-01
Background Genomic selection (GS) uses molecular breeding values (MBV) derived from dense markers across the entire genome for selection of young animals. The accuracy of MBV prediction is important for a successful application of GS. Recently, several methods have been proposed to estimate MBV. Initial simulation studies have shown that these methods can accurately predict MBV. In this study we compared the accuracies and possible bias of five different regression methods in an empirical application in dairy cattle. Methods Genotypes of 7,372 SNP and highly accurate EBV of 1,945 dairy bulls were used to predict MBV for protein percentage (PPT) and a profit index (Australian Selection Index, ASI). Marker effects were estimated by least squares regression (FR-LS), Bayesian regression (Bayes-R), random regression best linear unbiased prediction (RR-BLUP), partial least squares regression (PLSR) and nonparametric support vector regression (SVR) in a training set of 1,239 bulls. Accuracy and bias of MBV prediction were calculated from cross-validation of the training set and tested against a test team of 706 young bulls. Results For both traits, FR-LS using a subset of SNP was significantly less accurate than all other methods, which used all SNP. Accuracies obtained by Bayes-R, RR-BLUP, PLSR and SVR were very similar for ASI (0.39–0.45) and for PPT (0.55–0.61). Overall, SVR gave the highest accuracy. All methods resulted in biased MBV predictions for ASI; for PPT, only RR-BLUP and SVR predictions were unbiased. A significant decrease in accuracy of prediction of ASI was seen in young test cohorts of bulls compared to the accuracy derived from cross-validation of the training set. This reduction was not apparent for PPT. Combining MBV predictions with pedigree-based predictions gave 1.05–1.34 times higher accuracies compared to predictions based on pedigree alone. 
The methods differ markedly in computational requirements, with PLSR and RR-BLUP requiring the least computing time. Conclusions The four methods that use information from all SNP, namely RR-BLUP, Bayes-R, PLSR and SVR, generate similar accuracies of MBV prediction for genomic selection, and their use in the selection of immediate future generations in dairy cattle will be comparable. The use of FR-LS in genomic selection is not recommended. PMID:20043835
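Of the compared methods, RR-BLUP amounts to ridge regression of phenotypes on centred marker genotypes, shrinking all SNP effects equally. A minimal sketch on simulated data (the genotype coding, sample sizes, and shrinkage parameter are illustrative assumptions, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(0)
n_animals, n_snp = 200, 500
# allele counts 0/1/2, centred per SNP (illustrative coding)
Z = rng.integers(0, 3, size=(n_animals, n_snp)).astype(float)
Z -= Z.mean(axis=0)
true_u = rng.normal(0.0, 0.05, n_snp)        # simulated marker effects
g = Z @ true_u                               # true genetic values
y = g + rng.normal(0.0, 1.0, n_animals)      # phenotypes (EBV stand-ins)

lam = 1000.0                                 # ridge parameter ~ sigma_e^2 / sigma_u^2
# RR-BLUP mixed-model equations: (Z'Z + lam*I) u = Z'y
u_hat = np.linalg.solve(Z.T @ Z + lam * np.eye(n_snp), Z.T @ y)
mbv = Z @ u_hat                              # molecular breeding values
accuracy = np.corrcoef(mbv, g)[0, 1]         # cf. cross-validation accuracy
print(accuracy > 0.2)
```

In practice the shrinkage parameter is derived from variance components, and accuracy is assessed by cross-validation or in an independent test team, as in the study.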
Pencil-beam redefinition algorithm dose calculations for electron therapy treatment planning
NASA Astrophysics Data System (ADS)
Boyd, Robert Arthur
2001-08-01
The electron pencil-beam redefinition algorithm (PBRA) of Shiu and Hogstrom has been developed for use in radiotherapy treatment planning (RTP). Earlier studies of Boyd and Hogstrom showed that the PBRA lacked an adequate incident beam model, that PBRA might require improved electron physics, and that no data existed which allowed adequate assessment of the PBRA-calculated dose accuracy in a heterogeneous medium such as one presented by patient anatomy. The hypothesis of this research was that by addressing the above issues the PBRA-calculated dose would be accurate to within 4% or 2 mm in regions of high dose gradients. A secondary electron source was added to the PBRA to account for collimation-scattered electrons in the incident beam. Parameters of the dual-source model were determined from a minimal data set to allow ease of beam commissioning. Comparisons with measured data showed 3% or better dose accuracy in water within the field for cases where 4% accuracy was not previously achievable. A measured data set was developed that allowed an evaluation of PBRA in regions distal to localized heterogeneities. Geometries in the data set included irregular surfaces and high- and low-density internal heterogeneities. The data was estimated to have 1% precision and 2% agreement with accurate, benchmarked Monte Carlo (MC) code. PBRA electron transport was enhanced by modeling local pencil beam divergence. This required fundamental changes to the mathematics of electron transport (divPBRA). Evaluation of divPBRA with the measured data set showed marginal improvement in dose accuracy when compared to PBRA; however, 4% or 2 mm accuracy was not achieved by either PBRA version for all data points. Finally, PBRA was evaluated clinically by comparing PBRA- and MC-calculated dose distributions using site-specific patient RTP data. Results show PBRA did not agree with MC to within 4% or 2 mm in a small fraction (<3%) of the irradiated volume. 
Although the hypothesis of the research was shown to be false, the minor dose inaccuracies should have little or no impact on RTP decisions or patient outcome. Therefore, given ease of beam commissioning, documentation of accuracy, and calculational speed, the PBRA should be considered a practical tool for clinical use.
NASA Technical Reports Server (NTRS)
Thome, Kurtis; McCorkel, Joel; Hair, Jason; McAndrew, Brendan; Daw, Adrian; Jennings, Donald; Rabin, Douglas
2012-01-01
The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission addresses the need to observe high-accuracy, long-term climate change trends and to use decadal change observations as the most critical method to determine the accuracy of climate change. One of the major objectives of CLARREO is to advance the accuracy of SI-traceable absolute calibration at infrared and reflected solar wavelengths. This advance is required to reach the on-orbit absolute accuracy required to allow climate change observations to survive data gaps while remaining sufficiently accurate to observe climate change to within the uncertainty of the limit of natural variability. While these capabilities exist at NIST in the laboratory, there is a need to demonstrate that they can move successfully from NIST to NASA and/or instrument vendor capabilities for future spaceborne instruments. The current work describes the test plan for the Solar, Lunar for Absolute Reflectance Imaging Spectroradiometer (SOLARIS), which is the calibration demonstration system (CDS) for the reflected solar portion of CLARREO. The goal of the CDS is to allow the testing and evaluation of calibration approaches, alternate design and/or implementation approaches, and components for the CLARREO mission. SOLARIS also provides a test-bed for detector technologies, non-linearity determination and uncertainties, and application of future technology developments and suggested spacecraft instrument design modifications. The end result of efforts with the SOLARIS CDS will be an SI-traceable error budget for reflectance retrieval using solar irradiance as a reference and methods for laboratory-based, absolute calibration suitable for climate-quality data collections.
NASA Astrophysics Data System (ADS)
Chen, Rui; Xu, Jing; Zhang, Song; Chen, Heping; Guan, Yong; Chen, Ken
2017-01-01
The accuracy of structured light measurement depends on delicate offline calibration. However, in some practical applications, the system must be reconfigured frequently to track the target, so an online calibration is required. To this end, this paper proposes a rapid and autonomous self-recalibration method. For the proposed method, first, the rotation matrix and the normalized translation vector are attained from the fundamental matrix; second, the scale factor is acquired based on scale-invariant registration such that the actual translation vector is obtained. Experiments have been conducted to verify the effectiveness of our proposed method and the results indicate a high degree of accuracy.
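The first step above, recovering the rotation and the normalized translation, follows the standard SVD decomposition of the essential matrix; the second step fixes the unknown scale from a known reference length. A sketch on synthetic geometry (the rotation, translation, and scale source are illustrative assumptions, not the paper's setup):

```python
import numpy as np

def skew(t):
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def decompose_essential(E):
    """Standard SVD decomposition: two candidate rotations and a
    unit-norm translation (sign ambiguous, scale unknown)."""
    U, _, Vt = np.linalg.svd(E)
    if np.linalg.det(U) < 0:
        U = -U
    if np.linalg.det(Vt) < 0:
        Vt = -Vt
    W = np.array([[0.0, -1.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 0.0, 1.0]])
    return U @ W @ Vt, U @ W.T @ Vt, U[:, 2]

# synthetic ground truth: small rotation about z, translation of unknown scale
theta = 0.1
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta), np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.2, 0.0, 0.05])
E = skew(t_true) @ R_true

R1, R2, t_norm = decompose_essential(E)
err_R = min(np.linalg.norm(R1 - R_true), np.linalg.norm(R2 - R_true))
# step 2 (stand-in): a known reference length fixes the translation scale;
# here we use the true norm where scale-invariant registration would be used
t_recovered = np.linalg.norm(t_true) * t_norm
print(err_R < 1e-9)
```

The decomposition yields the usual four-fold (R, ±t) ambiguity; in practice the physically valid combination is selected by a cheirality (points-in-front-of-camera) check.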
Accuracy versus convergence rates for a three dimensional multistage Euler code
NASA Technical Reports Server (NTRS)
Turkel, Eli
1988-01-01
When a central difference scheme is used, it is necessary to add an artificial viscosity in order to reach a steady state. This viscosity usually consists of a linear fourth difference to eliminate odd-even oscillations and a nonlinear second difference to suppress oscillations in the neighborhood of steep gradients. There are free constants in these differences. As one increases the artificial viscosity, the high modes are dissipated more and the scheme converges more rapidly. However, this higher level of viscosity smooths the shocks and eliminates other features of the flow. Thus, there is a conflict between the requirements of accuracy and efficiency. Examples are presented for a variety of three-dimensional inviscid solutions over isolated wings.
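The blended dissipation described above can be sketched in one dimension: a nonlinear second difference switched on by a gradient sensor near shocks, plus a linear fourth difference in smooth regions. The sensor form and the free constants k2, k4 below are illustrative choices, not the paper's values:

```python
import numpy as np

def artificial_viscosity(u, k2=0.5, k4=1.0 / 32.0):
    """Blended 1-D dissipation: nonlinear 2nd difference near steep
    gradients, linear 4th difference elsewhere (JST-style switch)."""
    up = np.pad(u, 2, mode="edge")
    d2 = up[2:] - 2.0 * up[1:-1] + up[:-2]                  # 2nd difference, len N+2
    sensor = np.abs(d2) / (np.abs(up[2:]) + 2.0 * np.abs(up[1:-1])
                           + np.abs(up[:-2]) + 1e-12)       # shock sensor in [0, ~1]
    eps2 = k2 * sensor[1:-1]                                # strong near shocks
    eps4 = np.maximum(0.0, k4 - eps2)                       # 4th diff switched off there
    d4 = d2[2:] - 2.0 * d2[1:-1] + d2[:-2]                  # 4th difference, len N
    return eps2 * d2[1:-1] - eps4 * d4

smooth = np.sin(np.linspace(0.0, 2.0 * np.pi, 64))
step = np.where(np.arange(64) < 32, 0.0, 1.0)
print(np.abs(artificial_viscosity(smooth)).max(),
      np.abs(artificial_viscosity(step)).max())
```

Raising k2 and k4 dissipates high modes faster and speeds convergence, at the cost of smearing shocks, which is exactly the accuracy/efficiency conflict the abstract describes.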
NASA Astrophysics Data System (ADS)
Kamata, Shunichi
2018-01-01
Solid-state thermal convection plays a major role in the thermal evolution of solid planetary bodies. Solving the equation system for thermal evolution considering convection requires 2-D or 3-D modeling, resulting in large calculation costs. A 1-D calculation scheme based on mixing length theory (MLT) requires a much lower calculation cost and is suitable for parameter studies. A major concern for the MLT scheme is its accuracy due to a lack of detailed comparisons with higher dimensional schemes. In this study, I quantify its accuracy via comparisons of thermal profiles obtained by 1-D MLT and 3-D numerical schemes. To improve the accuracy, I propose a new definition of the mixing length (l), which is a parameter controlling the efficiency of heat transportation due to convection, for a bottom-heated convective layer. Adopting this new definition of l, I investigate the thermal evolution of Saturnian icy satellites, Dione and Enceladus, under a wide variety of parameter conditions. Calculation results indicate that each satellite requires several tens of GW of heat to possess a thick global subsurface ocean suggested from geophysical analyses. Dynamical tides may be able to account for such an amount of heat, though the reference viscosity of Dione's ice and the ammonia content of Dione's ocean need to be very high. Otherwise, a thick global ocean in Dione cannot be maintained, implying that its shell is not in a minimum stress state.
Design of a self-calibration high precision micro-angle deformation optical monitoring scheme
NASA Astrophysics Data System (ADS)
Gu, Yingying; Wang, Li; Guo, Shaogang; Wu, Yun; Liu, Da
2018-03-01
In order to meet the requirement of high-precision micro-angle measurement on orbit, a self-calibrated, optical, non-contact, real-time monitoring device is designed. Within three meters, the micro-angle variation of the target relative to the measuring basis can be measured in real time. The angle measurement range is +/-50'' and the angle measurement accuracy is better than 2''. The equipment can monitor, in real time and with high precision, the micro-angle deformation caused by the high-strength vibration and shock of rocket launch, and by solar radiation and heat conduction on orbit.
Gesteme-free context-aware adaptation of robot behavior in human-robot cooperation.
Nessi, Federico; Beretta, Elisa; Gatti, Cecilia; Ferrigno, Giancarlo; De Momi, Elena
2016-11-01
Cooperative robotics is receiving greater acceptance because the typical advantages provided by manipulators are combined with an intuitive usage. In particular, hands-on robotics may benefit from the adaptation of the assistant behavior with respect to the activity currently performed by the user. A fast and reliable classification of human activities is required, as well as strategies to smoothly modify the control of the manipulator. In this scenario, gesteme-based motion classification is inadequate because it needs the observation of a wide signal percentage and the definition of a rich vocabulary. In this work, a system able to recognize the user's current activity without a vocabulary of gestemes, and to accordingly adapt the manipulator's dynamic behavior, is presented. An underlying stochastic model fits variations in the user's guidance forces and the resulting trajectories of the manipulator's end-effector with a set of Gaussian distributions. The high-level switching between these distributions is captured with hidden Markov models. The dynamics of the KUKA light-weight robot, a torque-controlled manipulator, is modified with respect to the classified activity using sigmoidal-shaped functions. The presented system is validated over a pool of 12 naïve users in a scenario that addresses surgical targeting tasks on soft tissue. The robot's assistance is adapted in order to obtain a stiff behavior during activities that require critical accuracy constraints, and higher compliance during wide movements. Both the ability to provide the correct classification at each moment (sample accuracy) and the capability to correctly identify the sequence of activities (sequence accuracy) were evaluated. The proposed classifier is fast and accurate in all the experiments conducted (80% sample accuracy after the observation of ∼450 ms of signal). Moreover, the ability to recognize the correct sequence of activities without unwanted transitions is guaranteed (sequence accuracy ∼90% when computed far away from user-desired transitions). Finally, the proposed activity-based adaptation of the robot's dynamics does not lead to non-smooth behavior (high smoothness, i.e., normalized jerk score <0.01). The provided system is able to dynamically assist the operator during cooperation in the presented scenario. Copyright © 2016 Elsevier B.V. All rights reserved.
Men are more accurate than women in aiming at targets in both near space and extrapersonal space.
Sykes Tottenham, Laurie; Saucier, Deborah M; Elias, Lorin J; Gutwin, Carl
2005-08-01
Men excel at motor tasks requiring aiming accuracy whereas women excel at different tasks requiring fine motor skill. However, these tasks are confounded with proximity to the body, as fine motor tasks are performed proximally and aiming tasks are directed at distal targets. As such, it is not known whether the male advantage on tasks requiring aiming accuracy reflects better aim per se or the extrapersonal domain in which such tasks are usually presented. 18 men (M age = 20.6 yr., SD = 3.0) and 20 women (M age = 18.7 yr., SD = 0.9) performed 2 tasks of extrapersonal aiming accuracy (>2 m away), 2 tasks of aiming accuracy performed in near space (<1 m from them), and a task of fine motor skill. Men outperformed women on both the extrapersonal aiming tasks, and women outperformed men on the task of fine motor skill. However, a male advantage was also observed for one of the aiming tasks performed in near space, suggesting that the male advantage for aiming accuracy does not depend on target distance.
Instantaneous Assessment Of Athletic Performance Using High Speed Video
NASA Astrophysics Data System (ADS)
Hubbard, Mont; Alaways, LeRoy W.
1988-02-01
We describe the use of high speed video to provide quantitative assessment of motion in athletic performance. Besides the normal requirement for accuracy, an essential feature is that the information be provided rapidly enough so that it may serve as valuable feedback in the learning process. The general considerations which must be addressed in the development of such a computer-based system are discussed. These ideas are illustrated specifically through the description of a prototype system which has been designed for the javelin throw.
The Speckle Toolbox: A Powerful Data Reduction Tool for CCD Astrometry
NASA Astrophysics Data System (ADS)
Harshaw, Richard; Rowe, David; Genet, Russell
2017-01-01
Recent advances in high-speed low-noise CCD and CMOS cameras, coupled with breakthroughs in data reduction software that runs on desktop PCs, have opened the domain of speckle interferometry and high-accuracy CCD measurements of double stars to amateurs, allowing them to do useful science of high quality. This paper describes how to use a speckle interferometry reduction program, the Speckle Tool Box (STB), to achieve this level of result. For over a year the author (Harshaw) has been using STB (and its predecessor, Plate Solve 3) to obtain measurements of double stars based on CCD camera technology for pairs that are either too wide (the stars not sharing the same isoplanatic patch, roughly 5 arc-seconds in diameter) or too faint to image in the coherence time required for speckle (usually under 40 ms). This same approach - using speckle reduction software to measure CCD pairs with greater accuracy than possible with lucky imaging - has been used, it turns out, for several years by the U. S. Naval Observatory.
NASA Astrophysics Data System (ADS)
Zhang, Wei; Li, Chuanhao; Peng, Gaoliang; Chen, Yuanhang; Zhang, Zhujun
2018-02-01
In recent years, intelligent fault diagnosis algorithms using machine learning techniques have achieved much success. However, because in real-world industrial applications the working load changes all the time and noise from the working environment is inevitable, degradation of the performance of intelligent fault diagnosis methods can be severe. In this paper, a new model based on deep learning is proposed to address the problem. Our contributions include the following: First, we propose an end-to-end method that takes raw temporal signals as inputs and thus does not need any time-consuming denoising preprocessing. The model can achieve high accuracy in noisy environments. Second, the model does not rely on any domain adaptation algorithm or require information about the target domain. It can achieve high accuracy when the working load changes. To understand the proposed model, we visualize the learned features and analyze the reasons behind the model's high performance.
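The end-to-end idea, feeding the raw (noisy) temporal signal straight into a convolutional feature extractor with no denoising preprocessing, can be sketched as a single strided 1-D convolution layer. The filter widths, counts, and random weights below are illustrative stand-ins for learned parameters, not the paper's architecture:

```python
import numpy as np

rng = np.random.default_rng(3)

def conv1d_relu(x, kernels, stride=4):
    """Strided 1-D convolution + ReLU: a 'wide first layer' acting
    directly on the raw temporal signal."""
    k = kernels.shape[1]
    idx = np.arange(0, x.size - k + 1, stride)
    windows = np.stack([x[i:i + k] for i in idx])      # (frames, k)
    return np.maximum(windows @ kernels.T, 0.0)        # (frames, n_filters)

# raw vibration-like signal with additive noise; no denoising applied
signal = np.sin(np.linspace(0.0, 40.0 * np.pi, 2048)) + rng.normal(0.0, 0.5, 2048)
kernels = rng.normal(0.0, 1.0, (16, 64))               # stand-ins for learned filters
features = conv1d_relu(signal, kernels)
embedding = features.mean(axis=0)                      # global average pooling
print(features.shape, embedding.shape)
```

In a trained network the pooled embedding would feed further layers and a classifier; wide first-layer kernels are one common way such models gain robustness to input noise.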
Kim, Heejun; Bian, Jiantao; Mostafa, Javed; Jonnalagadda, Siddhartha; Del Fiol, Guilherme
2016-01-01
Motivation: Clinicians need up-to-date evidence from high quality clinical trials to support clinical decisions. However, applying evidence from the primary literature requires significant effort. Objective: To examine the feasibility of automatically extracting key clinical trial information from ClinicalTrials.gov. Methods: We assessed the coverage of ClinicalTrials.gov for high quality clinical studies that are indexed in PubMed. Using 140 random ClinicalTrials.gov records, we developed and tested rules for the automatic extraction of key information. Results: The rate of high quality clinical trial registration in ClinicalTrials.gov increased from 0.2% in 2005 to 17% in 2015. Trials reporting results increased from 3% in 2005 to 19% in 2015. The accuracy of the automatic extraction algorithm for 10 trial attributes was 90% on average. Future research is needed to improve the algorithm accuracy and to design information displays to optimally present trial information to clinicians. PMID:28269867
Warren, R; Richardson, M; Sampson, S; Hauman, J H; Beyers, N; Donald, P R; van Helden, P D
1996-01-01
Two highly polymorphic Mycobacterium tuberculosis genomic domains, characterized by hybridization to the oligonucleotide (GTG)5, were identified as potential DNA fingerprinting probes. These domains were cloned [pMTB484(1) and pMTB484(2K4), respectively] and shown to be useful for genotype analysis by Southern blotting. These probes were used to genotype geographically linked strains of M. tuberculosis previously shown to have identical IS6110 fingerprints. Subsequent DNA fingerprints generated with MTB484(1) and MTB484(2K4) showed a high degree of polymorphism, allowing subclassification of IS6110-defined clusters into composites of smaller clusters and unique strains. Correlation of the molecular data with patient interviews and clinical records confirmed the sensitivity of these probes, as contacts were established only within subclusters. These findings demonstrate the requirement for multiple probes to accurately classify M. tuberculosis strains, even those with high copy numbers of IS6110. The enhanced accuracy of strain typing should, in turn, further our understanding of the epidemiology of tuberculosis. PMID:8862588
Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems
NASA Technical Reports Server (NTRS)
Downie, John D.; Goodman, Joseph W.
1989-01-01
The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.
NASA Astrophysics Data System (ADS)
Yang, Juqing; Wang, Dayong; Fan, Baixing; Dong, Dengfeng; Zhou, Weihu
2017-03-01
In-situ intelligent manufacturing for large-volume equipment requires industrial robots with high-accuracy absolute positioning and orientation steering control. Conventional robots mainly employ an offline calibration technology to identify and compensate key robotic parameters. However, the dynamic and static parameters of a robot change nonlinearly, so offline calibration cannot acquire a robot's actual parameters or control its absolute pose with high accuracy, in real time, within a large workspace. This study proposes a real-time online absolute pose steering control method for an industrial robot based on six degrees of freedom laser tracking measurement, which adopts comprehensive compensation and correction of differential movement variables. First, the pose steering control system and robot kinematics error model are constructed, and then the pose error compensation mechanism and algorithm are introduced in detail. By accurately measuring the position and orientation of the robot end-tool, mapping the computed Jacobian matrix of the joint variables, and correcting the joint variables, real-time online absolute pose compensation for an industrial robot is accurately implemented in simulations and experimental tests. The average positioning error is 0.048 mm and the orientation accuracy is better than 0.01 deg. The results demonstrate that the proposed method is feasible and that the online absolute accuracy of a robot is sufficiently enhanced.
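The core loop, measure the end-effector pose error, map it through the Jacobian, and correct the joint variables, can be sketched with a damped least-squares update. A 2-link planar arm stands in for the industrial robot here; the link lengths, damping, and target are illustrative assumptions, not the paper's system:

```python
import numpy as np

L1, L2 = 0.5, 0.4   # illustrative link lengths (m)

def fk(q):
    """Forward kinematics: end-effector position of a 2-link planar arm."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [L1 * c1 + L2 * c12, L2 * c12]])

def correct_pose(q, target, iters=50, damping=1e-3):
    for _ in range(iters):
        err = target - fk(q)                  # pose error ("laser tracker" reading)
        J = jacobian(q)
        # damped least-squares joint correction
        dq = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ err)
        q = q + dq
    return q

q = correct_pose(np.array([0.3, 0.5]), np.array([0.6, 0.3]))
print(np.linalg.norm(fk(q) - np.array([0.6, 0.3])) < 1e-8)
```

The damping term keeps the update stable near kinematic singularities, at the cost of slightly slower convergence; a real 6-DOF implementation would stack position and orientation errors into one residual.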
Accuracy Improvement Capability of Advanced Projectile Based on Course Correction Fuze Concept
Elsaadany, Ahmed; Wen-jun, Yi
2014-01-01
Improvement in terminal accuracy is an important objective for future artillery projectiles, and it is often associated with range extension. Various concepts and modifications have been proposed to correct the range and drift of artillery projectiles, such as the course correction fuze. Course correction fuze concepts could provide an attractive and cost-effective solution for munitions accuracy improvement. In this paper, trajectory correction has been obtained using two kinds of course correction modules: one devoted to range correction (drag ring brake) and the other devoted to drift correction (canard-based correction fuze). The course correction modules have been characterized by aerodynamic computations and flight dynamic investigations in order to analyze the effects of the projectile aerodynamic parameters on deflection. The simulation results show that the impact accuracy of a conventional projectile using these course correction modules can be improved. The drag ring brake is found to be highly capable for range correction; deploying the drag brake in an early stage of the trajectory results in a large range correction, and the correction occasion time can be predefined depending on the required range correction. On the other hand, the canard-based correction fuze is found to have a higher effect on the projectile drift by modifying its roll rate. In addition, the canard extension induces a high-frequency incidence angle as the canards reciprocate with the roll motion. PMID:25097873
DeWitt, Jessica D.; Warner, Timothy A.; Chirico, Peter G.; Bergstresser, Sarah E.
2017-01-01
For areas of the world that do not have access to lidar, fine-scale digital elevation models (DEMs) can be photogrammetrically created using globally available high-spatial-resolution stereo satellite imagery. The resultant DEM is best termed a digital surface model (DSM) because it includes the heights of surface features. In densely vegetated conditions, this inclusion can limit its usefulness in applications requiring a bare-earth DEM. This study explores the use of techniques designed for filtering lidar point clouds to mitigate the elevation artifacts caused by above-ground features, within the context of a case study of Prince William Forest Park, Virginia, USA. The influences of land cover and leaf-on vs. leaf-off conditions are investigated, and the accuracy of the raw photogrammetric DSM extracted from leaf-on imagery was between that of a lidar bare-earth DEM and the Shuttle Radar Topography Mission DEM. Although the filtered leaf-on photogrammetric DEM retains some artifacts of the vegetation canopy and may not be useful for some applications, filtering procedures significantly improved the accuracy of the modeled terrain. The accuracy of the DSM extracted in leaf-off conditions was comparable in most areas to that of the lidar bare-earth DEM, and filtering procedures resulted in accuracy comparable to that of the lidar DEM.
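The kind of filtering described, suppressing above-ground features so a DSM approximates a bare-earth DEM, can be sketched with a simple morphological (grey-erosion) ground filter on a raster. The window size, height threshold, and synthetic terrain below are illustrative assumptions, not the study's lidar-filtering parameters:

```python
import numpy as np

def ground_filter(dsm, window=5, max_diff=2.0):
    """Morphological ground filter: keep cells close to the local-minimum
    surface; replace cells that stick up (canopy) with that surface."""
    pad = window // 2
    p = np.pad(dsm, pad, mode="edge")
    rows, cols = dsm.shape
    # grey-scale erosion: local minimum over a window x window neighbourhood
    opened = np.min([p[i:i + rows, j:j + cols]
                     for i in range(window) for j in range(window)], axis=0)
    return np.where(dsm - opened <= max_diff, dsm, opened)

terrain = np.fromfunction(lambda r, c: 0.1 * r, (20, 20))   # gentle slope
dsm = terrain.copy()
dsm[5:8, 5:8] += 15.0            # a clump of canopy on the surface model
dem = ground_filter(dsm)
print(float(np.abs(dem - terrain).max()))
```

Production lidar filters are progressive (the window and threshold grow together) so that buildings and trees are removed without flattening genuine terrain breaks; this fixed-window version only illustrates the principle.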
A very low noise, high accuracy, programmable voltage source for low frequency noise measurements.
Scandurra, Graziella; Giusi, Gino; Ciofi, Carmine
2014-04-01
In this paper an approach for designing a programmable, very low noise, high accuracy voltage source for biasing devices under test in low frequency noise measurements is proposed. The core of the system is a supercapacitor-based two-pole low-pass filter used for filtering out the noise produced by a standard DA converter down to 100 mHz with an attenuation in excess of 40 dB. The high leakage current of the supercapacitors, however, introduces large DC errors that need to be compensated in order to obtain high accuracy as well as very low output noise. To this end, a proper circuit topology has been developed that considerably reduces the effect of the supercapacitor leakage current on the DC response of the system while maintaining a very low level of output noise. With a proper design, an output noise as low as the equivalent input voltage noise of the OP27 operational amplifier, used as the output buffer of the system, can be obtained with DC accuracies better than 0.05% up to the maximum output of 8 V. The expected performance of the proposed voltage source has been confirmed both by means of SPICE simulations and by means of measurements on actual prototypes. Turn-on and stabilization times for the system are of the order of a few hundred seconds. These times are fully compatible with noise measurements down to 100 mHz, since measurement times of the order of several tens of minutes are required in any case in order to reduce the statistical error in the measured spectra down to an acceptable level.
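A two-pole low-pass filter rolls off at 40 dB/decade, so a corner near 10 mHz yields the quoted >40 dB attenuation at 100 mHz, which is why supercapacitor-scale time constants are needed. A back-of-the-envelope check (the R and C values are illustrative, not the paper's components, and the two poles are treated as buffered and identical):

```python
import math

R, C = 16.0, 1.0                        # ohms, farads -> corner near 10 mHz
fc = 1.0 / (2.0 * math.pi * R * C)      # first-order corner frequency

def attenuation_db(f):
    """Attenuation of two buffered, identical first-order RC poles."""
    h2_one_pole = 1.0 / (1.0 + (f / fc) ** 2)    # |H|^2 of one pole
    return -10.0 * math.log10(h2_one_pole ** 2)  # cascade of two poles

print(round(fc * 1000.0, 2), "mHz corner;",
      round(attenuation_db(0.1), 1), "dB at 100 mHz")
```

The same RC product also explains the reported turn-on behaviour: settling through time constants of tens of seconds per pole naturally takes a few hundred seconds.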
Bricher, Phillippa K.; Lucieer, Arko; Shaw, Justine; Terauds, Aleks; Bergstrom, Dana M.
2013-01-01
Monitoring changes in the distribution and density of plant species often requires accurate and high-resolution baseline maps of those species. Detecting such change at the landscape scale is often problematic, particularly in remote areas. We examine a new technique to improve accuracy and objectivity in mapping vegetation, combining species distribution modelling and satellite image classification on a remote sub-Antarctic island. In this study, we combine spectral data from very high resolution WorldView-2 satellite imagery and terrain variables from a high resolution digital elevation model to improve mapping accuracy, in both pixel- and object-based classifications. Random forest classification was used to explore the effectiveness of these approaches on mapping the distribution of the critically endangered cushion plant Azorella macquariensis Orchard (Apiaceae) on sub-Antarctic Macquarie Island. Both pixel- and object-based classifications of the distribution of Azorella achieved very high overall validation accuracies (91.6–96.3%, κ = 0.849–0.924). Both two-class and three-class classifications were able to accurately and consistently identify the areas where Azorella was absent, indicating that these maps provide a suitable baseline for monitoring expected change in the distribution of the cushion plants. Detecting such change is critical given the threats this species is currently facing under altering environmental conditions. The method presented here has applications to monitoring a range of species, particularly in remote and isolated environments. PMID:23940805
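The validation figures quoted above (overall accuracy and κ) follow from a standard confusion-matrix calculation. A minimal sketch, using a made-up two-class matrix rather than the study's data:

```python
import numpy as np

def accuracy_and_kappa(confusion):
    """Overall accuracy and Cohen's kappa from a square confusion matrix
    (rows = reference classes, columns = mapped classes)."""
    cm = np.asarray(confusion, dtype=float)
    n = cm.sum()
    p_observed = np.trace(cm) / n  # overall accuracy: diagonal over total
    # agreement expected by chance, from the row and column marginals
    p_chance = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
    kappa = (p_observed - p_chance) / (1.0 - p_chance)
    return p_observed, kappa
```

For a balanced two-class matrix with 90 correct out of 100, this yields an overall accuracy of 0.9 and κ = 0.8, illustrating how κ discounts chance agreement.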
Ye, Guangming; Cai, Xuejian; Wang, Biao; Zhou, Zhongxian; Yu, Xiaohua; Wang, Weibin; Zhang, Jiandong; Wang, Yuhai; Dong, Jierong; Jiang, Yunyun
2008-11-04
A simple, accurate and rapid method for simultaneous analysis of vancomycin and ceftazidime in cerebrospinal fluid (CSF), utilizing high-performance liquid chromatography (HPLC), has been developed and thoroughly validated to satisfy strict FDA guidelines for bioanalytical methods. Protein precipitation was used as the sample pretreatment method. In order to increase the accuracy, tinidazole was chosen as the internal standard. Separation was achieved on a Diamonsil C18 column (200 mm × 4.6 mm I.D., 5 μm) using a mobile phase composed of acetonitrile and acetate buffer (pH 3.5) (8:92, v/v) at room temperature (25 °C), and the detection wavelength was 240 nm. All the validation data, such as accuracy, precision, and inter-day repeatability, were within the required limits. The method was applied to determine vancomycin and ceftazidime concentrations in the CSF of five craniotomy patients.
Accuracy assessment for a multi-parameter optical calliper in on line automotive applications
NASA Astrophysics Data System (ADS)
D'Emilia, G.; Di Gasbarro, D.; Gaspari, A.; Natale, E.
2017-08-01
In this work, a methodological approach based on the evaluation of measurement uncertainty is applied to an experimental test case from the automotive sector. The uncertainty model for different measurement procedures of a high-accuracy optical gauge is discussed in order to identify the best measuring performance of the system for on-line applications, even as measurement requirements become more stringent. In particular, with reference to the industrial production and control strategies of high-performing turbochargers, two uncertainty models to be used with the optical calliper are proposed, discussed and compared. The models are based on an integrated approach between measurement methods and production best practices to emphasize their mutual coherence. The paper shows the possible advantages deriving from measurement uncertainty modelling in keeping control of the uncertainty propagation across all the indirect measurements useful for statistical production control, on which further improvements can be based.
The arbitrary order mixed mimetic finite difference method for the diffusion equation
Gyrya, Vitaliy; Lipnikov, Konstantin; Manzini, Gianmarco
2016-05-01
Here, we propose an arbitrary-order accurate mimetic finite difference (MFD) method for the approximation of diffusion problems in mixed form on unstructured polygonal and polyhedral meshes. As usual in the mimetic numerical technology, the method satisfies local consistency and stability conditions, which determine the accuracy and the well-posedness of the resulting approximation. The method also requires the definition of a high-order discrete divergence operator that is the discrete analog of the divergence operator and acts on the degrees of freedom. The new family of mimetic methods is proved theoretically to be convergent, and optimal error estimates for the flux and scalar variables are derived from the convergence analysis. A numerical experiment confirms the high-order accuracy of the method in solving diffusion problems with a variable diffusion tensor. It is worth mentioning that the approximation of the scalar variable presents a superconvergence effect.
Achieving behavioral control with millisecond resolution in a high-level programming environment
Asaad, Wael F.; Eskandar, Emad N.
2008-01-01
The creation of psychophysical tasks for the behavioral neurosciences has generally relied upon low-level software running on a limited range of hardware. Despite the availability of software that allows the coding of behavioral tasks in high-level programming environments, many researchers are still reluctant to trust the temporal accuracy and resolution of programs running in such environments, especially when they run atop non-real-time operating systems. Thus, the creation of behavioral paradigms has been slowed by the intricacy of the coding required and their dissemination across labs has been hampered by the various types of hardware needed. However, we demonstrate here that, when proper measures are taken to handle the various sources of temporal error, accuracy can be achieved at the one millisecond time-scale that is relevant for the alignment of behavioral and neural events. PMID:18606188
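The kind of timing verification the authors describe can be sketched in a high-level language. The busy-wait loop below is a generic illustration of millisecond-scale scheduling on a non-real-time OS, not the authors' toolkit; in practice one would log these lateness values and align them with neural event timestamps.

```python
import time

def measure_loop_jitter(n_ticks=100, period_s=0.001):
    """Run a busy-wait scheduling loop aiming at a fixed period and return
    each tick's lateness (ms) relative to its ideal deadline."""
    t0 = time.perf_counter()
    lateness_ms = []
    for i in range(1, n_ticks + 1):
        deadline = t0 + i * period_s
        while time.perf_counter() < deadline:
            pass  # busy-wait trades CPU time for sub-millisecond wake-up accuracy
        lateness_ms.append((time.perf_counter() - deadline) * 1000.0)
    return lateness_ms
```

On a lightly loaded machine the worst-case lateness of such a loop is typically well under a millisecond, which is the resolution scale the paper argues is achievable; sleeping instead of busy-waiting usually is not, because OS sleep granularity is coarser.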
Comparison of citrus orchard inventory using LISS-III and LISS-IV data
NASA Astrophysics Data System (ADS)
Singh, Niti; Chaudhari, K. N.; Manjunath, K. R.
2016-04-01
In India, in terms of area under cultivation, citrus is the third most cultivated fruit crop after banana and mango. Within the citrus group, lime is one of the most important horticultural crops in India, as the demand for its consumption is very high. Hence, preparing citrus crop inventories using remote sensing techniques would help in maintaining a record of its area and production statistics. This study shows how accurately citrus orchards can be classified using both IRS Resourcesat-2 LISS-III and LISS-IV data, and identifies the optimum bio-window for procuring satellite data to achieve the high classification accuracy required for maintaining a crop inventory. Findings of the study show that classification accuracy increased from 55% (using LISS-III) to 77% (using LISS-IV). Also, according to the classified outputs and NDVI values obtained, the months of April and May were identified as the optimum bio-window for citrus crop identification.
Memorizing: a test of untrained mildly mentally retarded children's problem-solving.
Belmont, J M; Ferretti, R P; Mitchell, D W
1982-09-01
Forty untrained mildly mentally retarded and 32 untrained nonretarded junior high school students were given eight trials of practice on a self-paced memory problem with lists of letters or words. For each trial a new list was presented, requiring ordered recall of terminal list items followed by ordered recall of initial items. Subgroups of solvers and nonsolvers were identified at each IQ level by a criterion of strict recall accuracy. Direct measures of mnemonic activity showed that over trials, solvers at both IQ levels increasingly fit a theoretically ideal memorization method. At neither IQ level did nonsolvers show similar inventions. On early trials, for both IQ levels, fit to the ideal method was uncorrelated with recall accuracy. On late trials fit and recall were highly correlated at each IQ level and across levels. The results support a problem-solving theory of individual differences in retarded and nonretarded children's memory performances.
The rapid terrain visualization interferometric synthetic aperture radar sensor
NASA Astrophysics Data System (ADS)
Graham, Robert H.; Bickel, Douglas L.; Hensley, William H.
2003-11-01
The Rapid Terrain Visualization interferometric synthetic aperture radar was designed and built at Sandia National Laboratories as part of an Advanced Concept Technology Demonstration (ACTD) to "demonstrate the technologies and infrastructure to meet the Army requirement for rapid generation of digital topographic data to support emerging crisis or contingencies." This sensor is currently being operated by Sandia National Laboratories for the Joint Precision Strike Demonstration (JPSD) Project Office to provide highly accurate digital elevation models (DEMs) for military and civilian customers, both inside and outside of the United States. The sensor achieves better than DTED Level IV position accuracy in near real time. The system is flown on a deHavilland DHC-7 Army aircraft. This paper outlines some of the technologies used in the design of the system, discusses its performance, and addresses operational issues. In addition, we show results from recent flight tests, including high-accuracy maps of the San Diego area.
Beamforming synthesis of binaural responses from computer simulations of acoustic spaces.
Poletti, Mark A; Svensson, U Peter
2008-07-01
Auditorium designs can be evaluated prior to construction by numerical modeling of the design. High-accuracy numerical modeling produces the sound pressure on a rectangular grid, and subjective assessment of the design requires auralization of the sampled sound field at a desired listener position. This paper investigates the production of binaural outputs from the sound pressure at a selected number of grid points by using a least-squares beamforming approach. Low-frequency axisymmetric emulations are derived by assuming a solid-sphere model of the head, and a spherical array of 640 microphones is used to emulate ten measured head-related transfer function (HRTF) data sets from the CIPIC database over half the audio bandwidth. The spherical array can produce high-accuracy band-limited emulation of any human subject's measured HRTFs for a fixed listener position by using individual sets of beamforming impulse responses.
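The least-squares beamforming step can be sketched as a regularized normal-equations solve. The array response matrix and target response here are placeholders, not the paper's actual data; in the paper's setting the columns of A would be the grid-point (microphone) responses and h a measured HRTF.

```python
import numpy as np

def ls_beamforming_weights(A, h, reg=1e-8):
    """Regularized least-squares weights w minimizing ||A w - h||^2 + reg*||w||^2.
    A: (n_freqs, n_mics) array of microphone/grid-point responses;
    h: (n_freqs,) desired response (e.g. an HRTF) at the same frequencies."""
    # normal equations with Tikhonov regularization for numerical stability
    AtA = A.conj().T @ A + reg * np.eye(A.shape[1])
    return np.linalg.solve(AtA, A.conj().T @ h)
```

In the trivial case where each sensor already observes one target frequency bin (A = identity), the solved weights reproduce the target response, which is a useful sanity check on the formulation.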
New high-precision drift-tube detectors for the ATLAS muon spectrometer
NASA Astrophysics Data System (ADS)
Kroha, H.; Fakhrutdinov, R.; Kozhin, A.
2017-06-01
Small-diameter muon drift tube (sMDT) detectors have been developed for upgrades of the ATLAS muon spectrometer. With a tube diameter of 15 mm, they provide about an order of magnitude higher rate capability than the present ATLAS muon tracking detectors, the MDT chambers with 30 mm tube diameter. The drift-tube design and the construction methods have been optimised for mass production and allow for the complex shapes required to maximise the acceptance. A record sense-wire positioning accuracy of 5 μm has been achieved with the new design. In serial production, the wire positioning accuracy is routinely better than 10 μm. 14 new sMDT chambers are already operational in ATLAS, and a further 16 are under construction for installation in the 2019-2020 LHC shutdown. For the upgrade of the barrel muon spectrometer for the High-Luminosity LHC, 96 sMDT chambers will be constructed between 2020 and 2024.
NASA Technical Reports Server (NTRS)
Newman, Brett; Yu, Si-bok; Rhew, Ray D. (Technical Monitor)
2003-01-01
Modern experimental and test activities demand innovative and adaptable procedures to maximize data content and quality while working within severely constrained budgetary and facility resource environments. This report describes the development of a high-accuracy angular measurement capability for NASA Langley Research Center hypersonic wind tunnel facilities to address these constraints. Specifically, utilization of micro-electro-mechanical sensors, including accelerometers and gyros, coupled with software-driven data acquisition hardware and integrated within a prototype measurement system, is considered. The development methodology addresses basic design requirements formulated from wind tunnel facility constraints and current operating procedures, as well as engineering and scientific test objectives. A description of the analytical framework governing relationships between time-dependent multi-axis acceleration and angular rate sensor data and the desired three-dimensional Eulerian angular state of the test model is given. Calibration procedures for identifying and estimating critical parameters in the sensor hardware are also addressed.
A new type industrial total station based on target automatic collimation
NASA Astrophysics Data System (ADS)
Lao, Dabao; Zhou, Weihu; Ji, Rongyi; Dong, Dengfeng; Xiong, Zhi; Wei, Jiang
2018-01-01
In industrial field measurement, present measuring instruments rely on manual operation and collimation, which results in low measurement efficiency. To solve this problem, a new type of industrial total station is presented in this paper. The new instrument can identify and trace a cooperative target automatically while the coordinates of the target are measured in real time. Realizing the system required working out several key technologies: high-precision absolute distance measurement, compact high-accuracy angle measurement, vision-based automatic target collimation, and fast precise control. After customized system assembly and adjustment, the new industrial total station was established. As the experiments demonstrated, the coordinate accuracy of the instrument is within 15 ppm at a distance of 60 m, which proves that the measuring system is feasible. The results show that the total station can satisfy most industrial field measurement requirements.
Efficient full-chip SRAF placement using machine learning for best accuracy and improved consistency
NASA Astrophysics Data System (ADS)
Wang, Shibing; Baron, Stanislas; Kachwala, Nishrin; Kallingal, Chidam; Sun, Dezheng; Shu, Vincent; Fong, Weichun; Li, Zero; Elsaid, Ahmad; Gao, Jin-Wei; Su, Jing; Ser, Jung-Hoon; Zhang, Quan; Chen, Been-Der; Howell, Rafael; Hsu, Stephen; Luo, Larry; Zou, Yi; Zhang, Gary; Lu, Yen-Wen; Cao, Yu
2018-03-01
Various computational approaches, from rule-based to model-based methods, exist to place Sub-Resolution Assist Features (SRAF) in order to increase the process window for lithography. Each method has its advantages and drawbacks, and typically requires the user to make a trade-off between development time, accuracy, consistency and cycle time. Rule-based methods, used since the 90 nm node, require long development time and struggle to achieve good process window performance for complex patterns. Heuristically driven, their development is often iterative and involves significant engineering time from multiple disciplines (Litho, OPC and DTCO). Model-based approaches have been widely adopted since the 20 nm node. While the development of model-driven placement methods is relatively straightforward, they often become computationally expensive when high accuracy is required. Furthermore, these methods tend to yield less consistent SRAFs due to the nature of the approach: they rely on a model which is sensitive to the pattern placement on the native simulation grid, and can be impacted by related grid-dependency effects. Those undesirable effects tend to become stronger when more iterations or complexity are needed in the algorithm to achieve the required accuracy. ASML Brion has developed a new SRAF placement technique on the Tachyon platform that is assisted by machine learning and significantly improves the accuracy of full-chip SRAF placement while keeping consistency and runtime under control. A Deep Convolutional Neural Network (DCNN) is trained using the target wafer layout and corresponding Continuous Transmission Mask (CTM) images. These CTM images have been fully optimized using the Tachyon inverse mask optimization engine. The neural-network-generated SRAF guidance map is then used to place SRAFs on the full chip.
This is different from our existing full-chip MB-SRAF approach, which utilizes a SRAF guidance map (SGM) of mask sensitivity to improve the contrast of the optical image at the target pattern edges. In this paper, we demonstrate that machine-learning-assisted SRAF placement can achieve a superior process window compared to the SGM model-based SRAF method, while keeping the full-chip runtime affordable and maintaining consistency of SRAF placement. We describe the current status of this machine-learning-assisted SRAF technique, demonstrate its application to full-chip mask synthesis, and discuss how it can extend the computational lithography roadmap.
Accuracy and precision evaluation of seven self-monitoring blood glucose systems.
Kuo, Chih-Yi; Hsu, Cheng-Teng; Ho, Cheng-Shiao; Su, Ting-En; Wu, Ming-Hsun; Wang, Chau-Jong
2011-05-01
Self-monitoring blood glucose (SMBG) systems play a critical role in the management of diabetes. SMBG systems should at least meet the minimal requirements of ISO 15197:2003. For tight glycemic control, a tighter accuracy requirement is needed. Seven SMBG systems were evaluated for accuracy and precision: Bionime Rightest™ GM550 (Bionime Corp., Dali City, Taiwan), Accu-Chek® Performa (Roche Diagnostics, Indianapolis, IN), OneTouch® Ultra®2 (LifeScan Inc., Milpitas, CA), MediSense® Optium™ Xceed (Abbott Diabetes Care Inc., Alameda, CA), Medisafe (TERUMO Corp., Tokyo, Japan), Fora® TD4227 (Taidac Technology Corp., Wugu Township, Taiwan), and Ascensia Contour® (Bayer HealthCare LLC, Mishawaka, IN). The 107 participants (44 men and 63 women) were between 23 and 91 years old. The analytical results of the seven SMBG systems were compared with those of plasma analyzed with the hexokinase method (Olympus AU640, Olympus America Inc., Center Valley, PA). The imprecision of the seven blood glucose meters ranged from 1.1% to 4.7%. Three of the seven blood glucose meters (42.9%) fulfilled the minimum accuracy criteria of ISO 15197:2003. The mean absolute relative error value for each blood glucose meter was calculated and ranged from 6.5% to 12.0%. More than 40% of the evaluated SMBG systems meet the minimal accuracy criteria of ISO 15197:2003. However, considering a tighter accuracy criterion of ±15%, only the Bionime Rightest GM550 meets this requirement. Because SMBG systems play a critical role in the management of diabetes, manufacturers have to strive to improve accuracy and precision and to ensure the good quality of blood glucose meters and test strips.
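The two headline metrics here, the mean absolute relative error and the ISO 15197:2003 pass rate, are simple to compute. A minimal sketch assuming glucose values in mg/dL; the ±15 mg/dL (below 75 mg/dL) and ±20% limits are the commonly cited 2003 criteria, and the sample readings are invented:

```python
def mard_percent(meter, reference):
    """Mean absolute relative difference (%) between meter and reference readings."""
    return 100.0 * sum(abs(m - r) / r for m, r in zip(meter, reference)) / len(reference)

def iso15197_2003_pass_rate(meter, reference):
    """Fraction of readings inside the ISO 15197:2003 limits:
    +/-15 mg/dL when reference < 75 mg/dL, otherwise +/-20% of reference."""
    hits = sum(
        (abs(m - r) <= 15.0) if r < 75.0 else (abs(m - r) <= 0.20 * r)
        for m, r in zip(meter, reference)
    )
    return hits / len(reference)
```

The standard's acceptance criterion is then that the pass rate over the evaluation samples reaches at least 95%.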
The reliability of the pass/fail decision for assessments comprised of multiple components.
Möltner, Andreas; Tımbıl, Sevgi; Jünger, Jana
2015-01-01
The decision having the most serious consequences for a student taking an assessment is the one to pass or fail that student. For this reason, the reliability of the pass/fail decision must be determined for high quality assessments, just as the measurement reliability of the point values. Assessments in a particular subject (graded course credit) are often composed of multiple components that must be passed independently of each other. When "conjunctively" combining separate pass/fail decisions, as with other complex decision rules for passing, adequate methods of analysis are necessary for estimating the accuracy and consistency of these classifications. To date, very few papers have addressed this issue; a generally applicable procedure was published by Douglas and Mislevy in 2010. Using the example of an assessment comprised of several parts that must be passed separately, this study analyzes the reliability underlying the decision to pass or fail students and discusses the impact of an improved method for identifying those who do not fulfill the minimum requirements. The accuracy and consistency of the decision to pass or fail an examinee in the subject cluster Internal Medicine/General Medicine/Clinical Chemistry at the University of Heidelberg's Faculty of Medicine was investigated. This cluster requires students to separately pass three components (two written exams and an OSCE), whereby students may reattempt to pass each component twice. Our analysis was carried out using the method described by Douglas and Mislevy. Frequently, when complex logical connections exist between the individual pass/fail decisions and failure rates are low, only a very low reliability can be achieved for the overall decision to grant graded course credit, even if high reliabilities exist for the various components.
For the example analyzed here, the classification accuracy and consistency when conjunctively combining the three individual parts is relatively low with κ=0.49 or κ=0.47, despite the good reliability of over 0.75 for each of the three components. The option to repeat each component twice leads to a situation in which only about half of the candidates who do not satisfy the minimum requirements would fail the overall assessment, while the other half is able to continue their studies despite having deficient knowledge and skills. The method put forth by Douglas and Mislevy allows the analysis of the decision accuracy and consistency for complex combinations of scores from different components. Even in the case of highly reliable components, it is not necessarily so that a reliable pass/fail decision has been reached - for instance in the case of low failure rates. Assessments must be administered with the explicit goal of identifying examinees that do not fulfill the minimum requirements.
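The effect of allowing two retakes per component can be illustrated with elementary probability. The parameter p_detect below is a hypothetical per-attempt probability that a candidate below the standard fails a given component; it is not a figure from the study, and the real analysis (Douglas and Mislevy's method) models far more structure than this independence sketch.

```python
def p_deficient_candidate_passes(p_detect, attempts=3, components=3):
    """Probability that a candidate below the minimum standard still passes a
    conjunctive assessment, when each of `components` parts may be attempted
    `attempts` times and each attempt independently fails the candidate
    with probability p_detect."""
    p_pass_one = 1.0 - p_detect ** attempts  # passes a component on some attempt
    return p_pass_one ** components

# e.g. even a 70% per-attempt detection rate lets many deficient candidates through
print(p_deficient_candidate_passes(0.7))
```

The point matches the abstract's finding: retakes sharply erode the overall decision's ability to screen out candidates who do not meet the minimum requirements, even when each single attempt is fairly discriminating.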
USDA-ARS?s Scientific Manuscript database
Accurate and rapid assays for glucose are desirable for analysis of glucose and starch in food and feedstuffs. An established colorimetric glucose oxidase-peroxidase method for glucose was modified to reduce analysis time, and evaluated for factors that affected accuracy. Time required to perform t...
77 FR 43536 - Wireless E911 Phase II Location Accuracy Requirements
Federal Register 2010, 2011, 2012, 2013, 2014
2012-07-25
... Docket No. 07-114; FCC 11-107] Wireless E911 Phase II Location Accuracy Requirements AGENCY: Federal...- 2413, or email: [email protected]fcc.gov . SUPPLEMENTARY INFORMATION: This document announces that, on... Commission's Order, FCC 11-107, published at 76 FR 59916, September 28, 2011. The OMB Control Number is 3060...
NASA Astrophysics Data System (ADS)
Mandic, M.; Stöbener, N.; Smajgl, D.
2017-12-01
For many decades, different instrumental methods involving generations of isotope ratio mass spectrometers with different peripheral units for sample preparation have provided the scientifically required high precision and high sample throughput for various applications, from geological and hydrological to food and forensic. With this work we introduce automated measurement of δ13C and δ18O from solid carbonate samples, DIC, and δ18O of water. We have demonstrated the use of a Thermo Scientific™ Delta Ray™ IRIS with URI Connect on certified reference materials and confirmed high achievable accuracy and a precision better than 0.1‰ for both δ13C and δ18O, in the laboratory or in the field, with the same precision and sample throughput. With the equilibration method for determination of δ18O in water samples presented in this work, the achieved repeatability and accuracy are 0.12‰ and 0.68‰ respectively, which fulfills the requirements of regulatory methods. The preparation of samples for carbonate and DIC analysis on the Delta Ray IRIS with URI Connect is similar to the previously mentioned Gas Bench II methods. Samples are put into vials and phosphoric acid is added. The resulting sample-acid chemical reaction releases CO2 gas, which is then introduced into the Delta Ray IRIS via the Variable Volume. Three international standards of carbonate materials (NBS-18, NBS-19 and IAEA-CO-1) were analyzed. NBS-18 and NBS-19 were used as standards for calibration, and IAEA-CO-1 was treated as unknown. For water sample analysis, an equilibration method with 1% CO2 in dry air was used. Test measurements and confirmation of the precision and accuracy of the method for determination of δ18O in water samples were done with three lab standards, namely ANST, OCEAN 2 and HBW. All laboratory standards were previously calibrated against the international reference materials VSMOW2 and SLAP2 to assure the accuracy of the isotopic values.
The Principle of Identical Treatment was applied in sample and standard preparation, in the measurement procedure, and in the evaluation of the results.
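The δ notation used throughout this record is a simple ratio of ratios, conventionally reported in per mil. A minimal sketch; the reference ratio passed in would be that of the standard in use (e.g. VPDB for δ13C, VSMOW for δ18O):

```python
def delta_permil(r_sample, r_standard):
    """Delta value (per mil) of an isotope ratio relative to a reference standard,
    e.g. delta13C = (R_sample / R_VPDB - 1) * 1000 with R = 13C/12C."""
    return (r_sample / r_standard - 1.0) * 1000.0
```

A sample whose heavy-to-light isotope ratio is 1% above the standard's thus reports as +10‰; a sample identical to the standard reports as 0‰, which is why the quoted precisions of ~0.1‰ correspond to ratio differences of about one part in ten thousand.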
Katzka, David A; Geno, Debra M; Ravi, Anupama; Smyrk, Thomas C; Lao-Sirieix, Pierre; Miremadi, Ahmed; Miramedi, Ahmed; Debiram, Irene; O'Donovan, Maria; Kita, Hirohito; Kephart, Gail M; Kryzer, Lori A; Camilleri, Michael; Alexander, Jeffrey A; Fitzgerald, Rebecca C
2015-01-01
Management of eosinophilic esophagitis (EoE) requires repeated endoscopic collection of mucosal samples to assess disease activity and response to therapy. An easier and less expensive means of monitoring of EoE is required. We compared the accuracy, safety, and tolerability of sample collection via Cytosponge (an ingestible gelatin capsule comprising compressed mesh attached to a string) with those of endoscopy for assessment of EoE. Esophageal tissues were collected from 20 patients with EoE (all with dysphagia, 15 with stricture, 13 with active EoE) via Cytosponge and then by endoscopy. Number of eosinophils/high-power field and levels of eosinophil-derived neurotoxin were determined; hematoxylin-eosin staining was performed. We compared the adequacy, diagnostic accuracy, safety, and patient preference for sample collection via Cytosponge vs endoscopy procedures. All 20 samples collected by Cytosponge were adequate for analysis. By using a cutoff value of 15 eosinophils/high power field, analysis of samples collected by Cytosponge identified 11 of the 13 individuals with active EoE (83%); additional features such as abscesses were also identified. Numbers of eosinophils in samples collected by Cytosponge correlated with those in samples collected by endoscopy (r = 0.50, P = .025). Analysis of tissues collected by Cytosponge identified 4 of the 7 patients without active EoE (57% specificity), as well as 3 cases of active EoE not identified by analysis of endoscopy samples. Including information on level of eosinophil-derived neurotoxin did not increase the accuracy of diagnosis. No complications occurred during the Cytosponge procedure, which was preferred by all patients, compared with endoscopy. In a feasibility study, the Cytosponge is a safe and well-tolerated method for collecting near mucosal specimens. Analysis of numbers of eosinophils/high-power field identified patients with active EoE with 83% sensitivity. 
Larger studies are needed to establish the efficacy and safety of this method of esophageal tissue collection. ClinicalTrials.gov number: NCT01585103. Copyright © 2015 AGA Institute. Published by Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Dewald, E; Kozioziemski, B; Moody, J
2008-06-26
We use x-ray phase contrast imaging to characterize the inner surface roughness of DT ice layers in capsules planned for future ignition experiments. It is therefore important to quantify how well the x-ray data correlate with the actual ice roughness. We benchmarked the accuracy of our system using surrogates with fabricated roughness characterized with high-precision standard techniques. Cylindrical artifacts with azimuthally uniform sinusoidal perturbations of 100 µm period and 1 µm amplitude demonstrated 0.02 µm accuracy, limited by the resolution of the imager and the source size of our phase contrast system. Spherical surrogates with random roughness close to that required of the DT ice for a successful ignition experiment were used to correlate the actual surface roughness to that obtained from the x-ray measurements. When comparing average power spectra of individual measurements, the accuracy mode-number limits of the x-ray phase contrast system, benchmarked against surface characterization performed by atomic force microscopy, are 60 and 90 for surrogates smoother and rougher than the required ice roughness, respectively. These agreement mode-number limits are >100 when comparing matching individual measurements. We will discuss the implications for interpreting DT ice roughness data derived from phase-contrast x-ray imaging.
Applications of random forest feature selection for fine-scale genetic population assignment.
Sylvester, Emma V A; Bentzen, Paul; Bradbury, Ian R; Clément, Marie; Pearce, Jon; Horne, John; Beiko, Robert G
2018-02-01
Genetic population assignment used to inform wildlife management and conservation efforts requires panels of highly informative genetic markers and sensitive assignment tests. We explored the utility of machine-learning algorithms (random forest, regularized random forest, and guided regularized random forest) compared with FST ranking for selection of single nucleotide polymorphisms (SNPs) for fine-scale population assignment. We applied these methods to an unpublished SNP data set for Atlantic salmon (Salmo salar) and a published SNP data set for Alaskan Chinook salmon (Oncorhynchus tshawytscha). In each species, we identified the minimum panel size required to obtain a self-assignment accuracy of at least 90%, using each method to create panels of 50-700 markers. Panels of SNPs identified using random forest-based methods performed up to 7.8 and 11.2 percentage points better than FST-selected panels of similar size for the Atlantic salmon and Chinook salmon data, respectively. Self-assignment accuracy ≥90% was obtained with panels of 670 and 384 SNPs for each data set, respectively, a level of accuracy never reached for these species using FST-selected panels. Our results demonstrate a role for machine-learning approaches in marker selection across large genomic data sets to improve assignment for management and conservation of exploited populations.
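The panel-growing evaluation described above can be sketched on simulated genotypes. The snippet below is a minimal illustration, not the authors' pipeline: it ranks loci by absolute allele-frequency difference (a crude stand-in for FST), grows the panel along that ranking, and stops when self-assignment accuracy reaches 90%; the random-forest variants would substitute an importance-based ranking at the same step. All populations, frequencies, and panel sizes here are simulated assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_per_pop, n_snps, n_informative = 100, 200, 30

# Simulate diploid genotypes (0/1/2 allele counts) for two populations;
# a subset of SNPs is differentiated (hypothetical data, illustration only).
p_a = np.full(n_snps, 0.5)
p_b = np.full(n_snps, 0.5)
p_b[:n_informative] = 0.8
geno_a = rng.binomial(2, p_a, size=(n_per_pop, n_snps))
geno_b = rng.binomial(2, p_b, size=(n_per_pop, n_snps))
X = np.vstack([geno_a, geno_b])
y = np.array([0] * n_per_pop + [1] * n_per_pop)

# Rank loci by absolute allele-frequency difference (FST-style proxy).
freq_a = geno_a.mean(axis=0) / 2
freq_b = geno_b.mean(axis=0) / 2
ranking = np.argsort(-np.abs(freq_a - freq_b))

def self_assignment_accuracy(panel):
    """Assign each individual to the population whose panel allele
    frequencies give the higher binomial log-likelihood (simplified
    self-assignment: training frequencies include the test individual)."""
    fa = freq_a[panel].clip(0.01, 0.99)
    fb = freq_b[panel].clip(0.01, 0.99)
    g = X[:, panel]
    ll_a = (g * np.log(fa) + (2 - g) * np.log(1 - fa)).sum(axis=1)
    ll_b = (g * np.log(fb) + (2 - g) * np.log(1 - fb)).sum(axis=1)
    return ((ll_b > ll_a).astype(int) == y).mean()

# Grow the panel along the ranking until self-assignment reaches 90%.
panel_size = next(k for k in range(1, n_snps + 1)
                  if self_assignment_accuracy(ranking[:k]) >= 0.90)
```

With strongly differentiated loci, only a handful of top-ranked markers are needed; real panels are far larger because wild-population differentiation is much weaker than in this toy example.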
Evaluating Washington State's immunization information system as a research tool.
Jackson, Michael L; Henrikson, Nora B; Grossman, David C
2014-01-01
Immunization information systems (IISs) are powerful public health tools for vaccination activities. To date, however, their use for public health research has been limited, in part as a result of insufficient understanding on accuracy and quality of IIS data. We evaluated the completeness and accuracy of Washington State IIS (WAIIS) data, with particular attention to data elements of research interest. We analyzed all WAIIS records on all children born between 2006 and 2010 with at least 1 vaccination recorded in WAIIS between 2006 and 2010. We assessed all variables for completeness and tested selected variables for internal validity. To assess external validity, we matched WAIIS data to records from Group Health, a large integrated health care organization in Washington State. On these children, we compared vaccination data in WAIIS with vaccination data from Group Health's immunization registry. The WAIIS data included 486,265 children and 8,670,234 unique vaccinations. Variables required by WAIIS (such as date of vaccination) were highly complete, but optional variables were often missing. For example, most records were missing data on route (80.7%) and anatomic site (81.7%) of vaccination. WAIIS data, when complete, were highly accurate relative to the Group Health immunization registry, with 96% to 99% agreement between fields such as vaccination code and anatomic site. Required data elements in WAIIS are highly complete and have both internal and external validity, suggesting that these variables are useful for research. Research requiring nonrequired variables should use additional validity checks before proceeding. Copyright © 2014 Academic Pediatric Association. Published by Elsevier Inc. All rights reserved.
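The two quantities evaluated above, field completeness and agreement against a reference registry, reduce to simple aggregations. The sketch below uses hypothetical field names and toy values, not the actual WAIIS schema.

```python
import pandas as pd

# Toy extract of IIS-style records (field names and values are assumed
# for illustration; they are not the WAIIS data dictionary).
iis = pd.DataFrame({
    "vacc_date": ["2006-03-01", "2007-05-12", "2008-01-30", "2009-07-04"],
    "route":     ["IM", None, None, "IM"],
    "site":      [None, None, "left deltoid", None],
})

# Completeness: share of non-missing values for each field.
completeness = iis.notna().mean()

# Accuracy relative to a matched reference registry: agreement on a shared
# field, computed only where the IIS value is present.
reference_route = pd.Series(["IM", "IM", "SC", "IM"])
present = iis["route"].notna()
agreement = (iis.loc[present, "route"] == reference_route[present]).mean()
```

Here the required-style field (`vacc_date`) is fully complete while the optional ones are mostly missing, mirroring the pattern the study reports.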
Stochastic analysis of 1D and 2D surface topography of x-ray mirrors
NASA Astrophysics Data System (ADS)
Tyurina, Anastasia Y.; Tyurin, Yury N.; Yashchuk, Valeriy V.
2017-08-01
The design and evaluation of the expected performance of new optical systems require sophisticated and reliable information about the surface topography of planned optical elements before they are fabricated. The problem is especially complex in the case of x-ray optics, particularly for the X-ray Surveyor under development and other missions. Modern x-ray source facilities rely on the availability of optics with unprecedented quality (surface slope accuracy < 0.1 µrad). The high angular resolution and throughput of future x-ray space observatories require hundreds of square meters of high-quality optics. The uniqueness of the optics and the limited number of proficient vendors make fabrication extremely time consuming and expensive, mostly due to limitations in the accuracy and measurement rate of the metrology used in fabrication. We discuss improvements in metrology efficiency via comprehensive statistical analysis of a compact volume of metrology data. The data are treated as stochastic, and a new statistical model, the Invertible Time-Invariant Linear Filter (InTILF), is developed for 2D surface profiles to provide a compact description of 2D data in addition to the 1D data treated so far. The model captures faint patterns in the data and serves as a quality metric and as feedback to polishing processes, avoiding high-resolution metrology measurements over the entire optical surface. The modeling, implemented in our Beatmark software, allows metrology data to be simulated for optics made by the same vendor and technology. The forecast data are vital for writing reliable optical-fabrication specifications that are exactly adequate for the required system performance.
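A simple autoregressive (AR) model is one instance of a time-invariant linear filter fitted to a measured 1D profile and then driven with fresh noise to simulate statistically similar metrology data. The sketch below illustrates that generic idea only; it is not the InTILF algorithm, and all coefficients and noise levels are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic "measured" height profile with short-range correlation,
# generated by a known AR(2) filter (coefficients are hypothetical).
true_a = np.array([0.8, -0.2])
n = 2000
h = np.zeros(n)
for i in range(2, n):
    h[i] = true_a[0] * h[i - 1] + true_a[1] * h[i - 2] + rng.normal(0, 1e-3)

# Least-squares AR(2) fit: h[i] ≈ a1*h[i-1] + a2*h[i-2].
A = np.column_stack([h[1:-1], h[:-2]])
a_hat, *_ = np.linalg.lstsq(A, h[2:], rcond=None)

# Drive the fitted filter with fresh noise to simulate new profile data
# with the same second-order statistics.
sim = np.zeros(n)
for i in range(2, n):
    sim[i] = a_hat[0] * sim[i - 1] + a_hat[1] * sim[i - 2] + rng.normal(0, 1e-3)
```

The fitted coefficients compactly summarize the profile's correlation structure, which is the sense in which such a filter model can replace full-aperture high-resolution metrology for simulation purposes.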
An automated method for the evaluation of the pointing accuracy of Sun-tracking devices
NASA Astrophysics Data System (ADS)
Baumgartner, Dietmar J.; Pötzi, Werner; Freislich, Heinrich; Strutzmann, Heinz; Veronig, Astrid M.; Rieder, Harald E.
2017-03-01
The accuracy of solar radiation measurements, for direct (DIR) and diffuse (DIF) radiation, depends significantly on the precision of the operational Sun-tracking device. Thus, rigid targets for instrument performance and operation have been specified for international monitoring networks, e.g., the Baseline Surface Radiation Network (BSRN) operating under the auspices of the World Climate Research Program (WCRP). Sun-tracking devices that fulfill these accuracy requirements are available from various instrument manufacturers; however, none of the commercially available systems comprise an automatic accuracy control system allowing platform operators to independently validate the pointing accuracy of Sun-tracking sensors during operation. Here we present KSO-STREAMS (KSO-SunTRackEr Accuracy Monitoring System), a fully automated, system-independent, and cost-effective system for evaluating the pointing accuracy of Sun-tracking devices. We detail the monitoring system setup, its design and specifications, and the results from its application to the Sun-tracking system operated at the Kanzelhöhe Observatory (KSO) Austrian radiation monitoring network (ARAD) site. The results from an evaluation campaign from March to June 2015 show that the tracking accuracy of the device operated at KSO lies within BSRN specifications (i.e., 0.1° tracking accuracy) for the vast majority of observations (99.8 %). The evaluation of manufacturer-specified active-tracking accuracies (0.02°), during periods with direct solar radiation exceeding 300 W m-2, shows that these are satisfied in 72.9 % of observations. Tracking accuracies are highest during clear-sky conditions and on days where prevailing clear-sky conditions are interrupted by frontal movement; in these cases, we obtain the complete fulfillment of BSRN requirements and 76.4 % of observations within manufacturer-specified active-tracking accuracies. 
Limitations to tracking surveillance arise during overcast conditions and periods of partial solar-limb coverage by clouds. On days with variable cloud cover, 78.1 % (99.9 %) of observations meet active-tracking (BSRN) accuracy requirements while for days with prevailing overcast conditions these numbers reduce to 64.3 % (99.5 %).
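The headline percentages above are fractions of observed pointing offsets falling within the BSRN (0.1°) and manufacturer active-tracking (0.02°) thresholds. The sketch below computes such fractions on synthetic offsets; the noise level is an arbitrary assumption, not KSO data.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical absolute pointing offsets (degrees) between the tracker and
# the true solar position; real values would come from the KSO-STREAMS
# camera-based monitoring system.
offsets = np.abs(rng.normal(0.0, 0.01, size=10_000))

bsrn_ok = (offsets <= 0.1).mean()     # BSRN spec: 0.1 deg tracking accuracy
active_ok = (offsets <= 0.02).mean()  # manufacturer active-tracking spec
```

With a 0.01° noise level essentially all samples meet the BSRN threshold while only about 95% meet the tighter active-tracking one, the same qualitative split the campaign reports.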
Walsh, Alex J.; Sharick, Joe T.; Skala, Melissa C.; Beier, Hope T.
2016-01-01
Time-correlated single photon counting (TCSPC) enables acquisition of fluorescence lifetime decays with high temporal resolution within the fluorescence decay. However, many thousands of photons per pixel are required for accurate lifetime decay curve representation, instrument response deconvolution, and lifetime estimation, particularly for two-component lifetimes. TCSPC imaging speed is inherently limited due to the single photon per laser pulse nature and low fluorescence event efficiencies (<10%) required to reduce bias towards short lifetimes. Here, simulated fluorescence lifetime decays are analyzed by SPCImage and SLIM Curve software to determine the limiting lifetime parameters and photon requirements of fluorescence lifetime decays that can be accurately fit. Data analysis techniques to improve fitting accuracy for low photon count data were evaluated. Temporal binning of the decays from 256 time bins to 42 time bins significantly (p<0.0001) improved fit accuracy in SPCImage and enabled accurate fits with low photon counts (as low as 700 photons/decay), a 6-fold reduction in required photons and therefore improvement in imaging speed. Additionally, reducing the number of free parameters in the fitting algorithm by fixing the lifetimes to known values significantly reduced the lifetime component error from 27.3% to 3.2% in SPCImage (p<0.0001) and from 50.6% to 4.2% in SLIM Curve (p<0.0001). Analysis of nicotinamide adenine dinucleotide–lactate dehydrogenase (NADH-LDH) solutions confirmed temporal binning of TCSPC data and a reduced number of free parameters improves exponential decay fit accuracy in SPCImage. Altogether, temporal binning (in SPCImage) and reduced free parameters are data analysis techniques that enable accurate lifetime estimation from low photon count data and enable TCSPC imaging speeds up to 6x and 300x faster, respectively, than traditional TCSPC analysis. PMID:27446663
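The key fitting trick above, fixing the component lifetimes to known values so that only the amplitude fraction remains free, can be sketched with a two-exponential decay. This is a generic `scipy.optimize.curve_fit` illustration, not SPCImage or SLIM Curve; the lifetimes, bin count, and photon budget are assumptions chosen to echo the paper's low-count regime.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
t = np.linspace(0, 10, 42)        # ns; 42 time bins, as in the binned case
tau1, tau2, a1 = 0.4, 2.5, 0.6    # hypothetical NADH-like lifetimes (ns)

model_free = lambda t, a1, tau1, tau2: (a1 * np.exp(-t / tau1)
                                        + (1 - a1) * np.exp(-t / tau2))

# Simulate a low-count decay (~700 photons, as in the paper's limit).
ideal = model_free(t, a1, tau1, tau2)
counts = rng.poisson(700 * ideal / ideal.sum())
decay = counts / counts.max()

# Fixing tau1 and tau2 to known values leaves a single free parameter,
# which stabilizes the fit at low photon counts.
model_fixed = lambda t, a1: model_free(t, a1, tau1, tau2)
(a1_hat,), _ = curve_fit(model_fixed, t, decay, p0=[0.5], bounds=(0, 1))
```

Fitting all three parameters freely to the same 700-photon decay gives much larger scatter in the recovered fraction, which is the effect the paper quantifies.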
Laboratory and field tests of the Sutron RLR-0003-1 water level sensor
Fulford, Janice M.; Bryars, R. Scott
2015-01-01
Three Sutron RLR-0003-1 water level sensors were tested in laboratory conditions to evaluate the accuracy of the sensor over the manufacturer’s specified operating temperature and distance-to-water ranges. The sensor was also tested for compliance to SDI-12 communication protocol and in field conditions at a U.S. Geological Survey (USGS) streamgaging site. Laboratory results were compared to the manufacturer’s accuracy specification for water level and to the USGS Office of Surface Water (OSW) policy requirement that water level sensors have a measurement uncertainty of no more than 0.01 foot or 0.20 percent of the indicated reading. Except for one sensor, the differences for the temperature testing were within 0.05 foot and the average measurements for the sensors were within the manufacturer’s accuracy specification. Two of the three sensors were within the manufacturer’s specified accuracy and met the USGS accuracy requirements for the laboratory distance to water testing. Three units passed a basic SDI-12 communication compliance test. Water level measurements made by the Sutron RLR-0003-1 during field testing agreed well with those made by the bubbler system and a Design Analysis Associates (DAA) H3613 radar, and they met the USGS accuracy requirements when compared to the wire-weight gage readings.
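The USGS OSW acceptance criterion used above is a two-part bound: measurement error must not exceed the larger of 0.01 foot or 0.20 percent of the reading. A minimal sketch of that check (function names are ours, not USGS code):

```python
def usgs_allowable_error(reading_ft: float) -> float:
    """USGS OSW water-level criterion: the larger of 0.01 ft
    or 0.20 percent of the indicated reading."""
    return max(0.01, 0.002 * reading_ft)

def meets_usgs_accuracy(measured_ft: float, reference_ft: float) -> bool:
    """True if the sensor reading is within the allowable error
    of the reference (e.g., wire-weight gage) reading."""
    return abs(measured_ft - reference_ft) <= usgs_allowable_error(reference_ft)
```

Note the crossover at 5 ft of stage: below it the fixed 0.01 ft floor governs; above it the 0.20 percent term does.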
Aerosol algorithm evaluation within aerosol-CCI
NASA Astrophysics Data System (ADS)
Kinne, Stefan; Schulz, Michael; Griesfeller, Jan
Retrieving aerosol properties from space is difficult. Even data from dedicated satellite sensors face contaminations which limit the accuracy of aerosol retrieval products. Issues are the identification of completely cloud-free scenes, the need to assume aerosol compositional features in an underdetermined solution space, and the requirement to characterize the background at high accuracy. The development of aerosol retrievals is usually a slow process, requiring continuous feedback from evaluations. To demonstrate maturity, these evaluations need to cover different regions and seasons and many different aerosol properties, because aerosol composition is quite diverse and highly variable in space and time, as atmospheric aerosol lifetimes are only a few days. Three years ago the ESA Climate Change Initiative started to support aerosol retrieval efforts in order to develop aerosol retrieval products for the climate community from underutilized ESA satellite sensors. The initial focus was on retrievals of AOD (a measure of the atmospheric column amount) and of the Angstrom exponent (a proxy for aerosol size) from the ATSR and MERIS sensors on ENVISAT. The goal was to offer retrieval products that are comparable to or better in accuracy than the commonly used NASA products of MODIS or MISR. Fortunately, accurate reference data from ground-based sun-/sky-photometry networks exist. Thus, retrieval assessments could be and were conducted independently by different evaluation groups. Here, results of these evaluations for the year 2008 are summarized. The capability of the newly developed retrievals is analyzed and quantified in scores. These scores allowed a ranking of competing efforts and also allow skill comparisons of the new retrievals against existing and commonly used retrievals.
A novel in chemico method to detect skin sensitizers in highly diluted reaction conditions.
Yamamoto, Yusuke; Tahara, Haruna; Usami, Ryota; Kasahara, Toshihiko; Jimbo, Yoshihiro; Hioki, Takanori; Fujita, Masaharu
2015-11-01
The direct peptide reactivity assay (DPRA) is a simple and versatile alternative method for the evaluation of skin sensitization that involves the reaction of test chemicals with two peptides. However, this method requires concentrated solutions of test chemicals, and hydrophobic substances may not dissolve at the concentrations required. Furthermore, hydrophobic test chemicals may precipitate when added to the reaction solution. We previously established a high-sensitivity method, the amino acid derivative reactivity assay (ADRA). This method uses novel cysteine (NAC) and novel lysine derivatives (NAL), which were synthesized by introducing a naphthalene ring to the amine group of cysteine and lysine residues. In this study, we modified the ADRA method by reducing the concentration of the test chemicals 100-fold. We investigated the accuracy of skin sensitization predictions made using the modified method, which was designated the ADRA-dilutional method (ADRA-DM). The predictive accuracy of the ADRA-DM for skin sensitization was 90% for 82 test chemicals which were also evaluated via the ADRA, and the predictive accuracy in the ADRA-DM was higher than that in the ADRA and DPRA. Furthermore, no precipitation of test compounds was observed at the initiation of the ADRA-DM reaction. These results show that the ADRA-DM allowed the use of test chemicals at concentrations two orders of magnitude lower than that possible with the ADRA. In addition, ADRA-DM does not have the restrictions on test compound solubility that were a major problem with the DPRA. Therefore, the ADRA-DM is a versatile and useful method. Copyright © 2015 John Wiley & Sons, Ltd.
NASA Astrophysics Data System (ADS)
Li, Qianxiao; Dietrich, Felix; Bollt, Erik M.; Kevrekidis, Ioannis G.
2017-10-01
Numerical approximation methods for the Koopman operator have advanced considerably in the last few years. In particular, data-driven approaches such as dynamic mode decomposition (DMD)51 and its generalization, the extended-DMD (EDMD), are becoming increasingly popular in practical applications. The EDMD improves upon the classical DMD by the inclusion of a flexible choice of dictionary of observables which spans a finite dimensional subspace on which the Koopman operator can be approximated. This enhances the accuracy of the solution reconstruction and broadens the applicability of the Koopman formalism. Although the convergence of the EDMD has been established, applying the method in practice requires a careful choice of the observables to improve convergence with just a finite number of terms. This is especially difficult for high dimensional and highly nonlinear systems. In this paper, we employ ideas from machine learning to improve upon the EDMD method. We develop an iterative approximation algorithm which couples the EDMD with a trainable dictionary represented by an artificial neural network. Using the Duffing oscillator and the Kuramoto Sivashinsky partical differential equation as examples, we show that our algorithm can effectively and efficiently adapt the trainable dictionary to the problem at hand to achieve good reconstruction accuracy without the need to choose a fixed dictionary a priori. Furthermore, to obtain a given accuracy, we require fewer dictionary terms than EDMD with fixed dictionaries. This alleviates an important shortcoming of the EDMD algorithm and enhances the applicability of the Koopman framework to practical problems.
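The fixed-dictionary EDMD baseline that the paper improves on can be sketched in a few lines: lift snapshot pairs through a dictionary of observables and solve a least-squares problem for the Koopman matrix. The toy map and monomial dictionary below are our own choices; the dictionary happens to be invariant under this dynamics, so the reconstruction is exact, which is precisely the property a learned dictionary tries to approximate for harder systems.

```python
import numpy as np

rng = np.random.default_rng(3)

# Snapshot pairs of a simple nonlinear map: x' = 0.9x, y' = 0.8y + 0.1x^2.
X = rng.uniform(-1, 1, size=(500, 2))
Y = np.column_stack([0.9 * X[:, 0], 0.8 * X[:, 1] + 0.1 * X[:, 0] ** 2])

def dictionary(Z):
    """Fixed monomial dictionary of observables: [1, x, y, x^2]."""
    x, y = Z[:, 0], Z[:, 1]
    return np.column_stack([np.ones_like(x), x, y, x ** 2])

PsiX, PsiY = dictionary(X), dictionary(Y)

# EDMD: least-squares Koopman matrix K with Psi(x') ≈ Psi(x) @ K.
K, *_ = np.linalg.lstsq(PsiX, PsiY, rcond=None)

# One-step prediction of the observables and its worst-case error.
pred = PsiX @ K
err = np.abs(pred - PsiY).max()
```

Because {1, x, y, x²} spans a Koopman-invariant subspace of this particular map, `err` is at machine precision; with a poorly chosen dictionary it would not be, motivating the trainable dictionary of the paper.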
Roach, Jennifer K.; Griffith, Brad; Verbyla, David
2012-01-01
Programs to monitor lake area change are becoming increasingly important in high latitude regions, and their development often requires evaluating tradeoffs among different approaches in terms of accuracy of measurement, consistency across multiple users over long time periods, and efficiency. We compared three supervised methods for lake classification from Landsat imagery (density slicing, classification trees, and feature extraction). The accuracy of lake area and number estimates was evaluated relative to high-resolution aerial photography acquired within two days of satellite overpasses. The shortwave infrared band 5 was better at separating surface water from nonwater when used alone than when combined with other spectral bands. The simplest of the three methods, density slicing, performed best overall. The classification tree method resulted in the most omission errors (approx. 2x), feature extraction resulted in the most commission errors (approx. 4x), and density slicing had the least directional bias (approx. half of the lakes with overestimated area and half of the lakes with underestimated area). Feature extraction was the least consistent across training sets (i.e., large standard error among different training sets). Density slicing was the best of the three at classifying small lakes as evidenced by its lower optimal minimum lake size criterion of 5850 m2 compared with the other methods (8550 m2). Contrary to conventional wisdom, the use of additional spectral bands and a more sophisticated method not only required additional processing effort but also had a cost in terms of the accuracy and consistency of lake classifications.
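Density slicing, the simplest and best-performing method above, is a single threshold on the shortwave-infrared band (water absorbs strongly in SWIR, so lake pixels appear dark). The reflectance values and threshold below are hypothetical; the minimum-lake-size screen uses the paper's reported 5850 m² optimum for density slicing.

```python
import numpy as np

# Toy SWIR (Landsat band 5) reflectance patch; values are invented.
band5 = np.array([[0.02, 0.03, 0.30],
                  [0.01, 0.04, 0.28],
                  [0.25, 0.27, 0.31]])

# Density slicing: one threshold separates water from non-water.
water = band5 < 0.10            # boolean lake mask

pixel_area = 30 * 30            # Landsat pixel footprint, m^2
lake_area_m2 = water.sum() * pixel_area

# Minimum-lake-size criterion (5850 m^2, per the study's density-slicing
# optimum): this 4-pixel blob (3600 m^2) would be screened out.
keep = lake_area_m2 >= 5850
```

A full implementation would label connected components before applying the size criterion per lake; the single-blob toy patch lets us skip that step here.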
Measurement of the PPN parameter γ by testing the geometry of near-Earth space
NASA Astrophysics Data System (ADS)
Luo, Jie; Tian, Yuan; Wang, Dian-Hong; Qin, Cheng-Gang; Shao, Cheng-Gang
2016-06-01
The Beyond Einstein Advanced Coherent Optical Network (BEACON) mission was designed to achieve an accuracy of 10^{-9} in measuring the Eddington parameter γ, which is perhaps the most fundamental Parameterized Post-Newtonian parameter. However, this ideal accuracy was estimated only as the ratio of the measurement accuracy of the inter-spacecraft distances to the magnitude of the departure from Euclidean geometry. Based on the BEACON concept, we construct a measurement model to estimate the parameter γ with the least-squares method. Influences of the measurement noise and the out-of-plane error on the estimation accuracy are evaluated based on a white-noise model. Though the BEACON mission does not require expensive drag-free systems and avoids physical dynamical models of the spacecraft, the relatively low accuracy of the initial inter-spacecraft distances poses a great challenge, reducing the estimation accuracy by about two orders of magnitude. Thus, as we demonstrate, the noise requirements may need to be more stringent in the design in order to achieve the target accuracy. Accordingly, we give limits on the power spectral density of both noise sources required to reach an accuracy of 10^{-9}.
Stitching Type Large Aperture Depolarizer for Gas Monitoring Imaging Spectrometer
NASA Astrophysics Data System (ADS)
Liu, X.; Li, M.; An, N.; Zhang, T.; Cao, G.; Cheng, S.
2018-04-01
To increase the accuracy of radiation measurements in a gas-monitoring imaging spectrometer, it is necessary to achieve a high degree of depolarization of the incoming beam. The preferred method in space instruments is to introduce a depolarizer into the optical system: a combination device of birefringent crystal wedges. Limited by the achievable diameter of the crystal, a traditional depolarizer cannot be used in a large-aperture imaging spectrometer (greater than 100 mm). In this paper, a stitching-type depolarizer is presented. The design theory and a numerical calculation model for a dual-Babinet depolarizer were built. To meet the required radiometric accuracy of an imaging spectrometer with a 250 mm × 46 mm aperture, a stitching-type dual-Babinet depolarizer was designed in detail. Based on the optimized structural parameters, tolerances for the wedge angle, refractive index, and central thickness were given. The analysis results show that the maximum residual polarization degree of the light output from the depolarizer is less than 2%. The design requirement on polarization sensitivity is satisfied.
Outer planet mission guidance and navigation for spinning spacecraft
NASA Technical Reports Server (NTRS)
Paul, C. K.; Russell, R. K.; Ellis, J.
1974-01-01
The orbit determination accuracies, maneuver results, and navigation system specification for spinning Pioneer planetary probe missions are analyzed to aid in determining the feasibility of deploying probes into the atmospheres of the outer planets. Radio-only navigation suffices for a direct Saturn mission and the Jupiter flyby of a Jupiter/Uranus mission. Saturn ephemeris errors (1000 km) plus rigid entry constraints at Uranus result in very high velocity requirements (140 m/sec) on the final legs of the Saturn/Uranus and Jupiter/Uranus missions if only Earth-based tracking is employed. The capabilities of a conceptual V-slit sensor are assessed to supplement radio tracking by star/satellite observations. By processing the optical measurements with a batch filter, entry conditions at Uranus can be controlled to acceptable mission-defined levels (±3 deg) and the Saturn-Uranus leg velocity requirements can be reduced by a factor of 6 (from 139 to 23 m/sec) if nominal specified accuracies of the sensor can be realized.
Multi-disciplinary optimization of aeroservoelastic systems
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1990-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Multidisciplinary optimization of aeroservoelastic systems using reduced-size models
NASA Technical Reports Server (NTRS)
Karpel, Mordechay
1992-01-01
Efficient analytical and computational tools for simultaneous optimal design of the structural and control components of aeroservoelastic systems are presented. The optimization objective is to achieve aircraft performance requirements and sufficient flutter and control stability margins with a minimal weight penalty and without violating the design constraints. Analytical sensitivity derivatives facilitate an efficient optimization process which allows a relatively large number of design variables. Standard finite element and unsteady aerodynamic routines are used to construct a modal data base. Minimum State aerodynamic approximations and dynamic residualization methods are used to construct a high accuracy, low order aeroservoelastic model. Sensitivity derivatives of flutter dynamic pressure, control stability margins and control effectiveness with respect to structural and control design variables are presented. The performance requirements are utilized by equality constraints which affect the sensitivity derivatives. A gradient-based optimization algorithm is used to minimize an overall cost function. A realistic numerical example of a composite wing with four controls is used to demonstrate the modeling technique, the optimization process, and their accuracy and efficiency.
Wong, Chung-Ki; Luo, Qingfei; Zotev, Vadim; Phillips, Raquel; Chan, Kam Wai Clifford; Bodurka, Jerzy
2018-03-31
In simultaneous EEG-fMRI, identification of the period of the ballistocardiogram (BCG) artifact in EEG is required for artifact removal. Recording the electrocardiogram (ECG) waveform during fMRI is difficult, often causing inaccurate period detection. Since the waveform of the BCG extracted by independent component analysis (ICA) is relatively invariable compared with the ECG waveform, we propose a multiple-scale peak-detection algorithm to determine the BCG cycle directly from the EEG data. The algorithm first extracts the high-contrast BCG component from the EEG data by ICA. The BCG cycle is then estimated by band-pass filtering the component around the fundamental frequency identified from its energy spectral density, and the peak of BCG artifact occurrence is selected from each estimated cycle. The algorithm is shown to achieve high accuracy on a large EEG-fMRI dataset. It is also adaptive to various heart rates without the need to adjust threshold parameters. The cycle detection remains accurate with the scan duration reduced to half a minute. Additionally, the algorithm gives a figure of merit to evaluate the reliability of the detection accuracy. The algorithm is shown to give higher detection accuracy than the commonly used cycle detection algorithm fmrib_qrsdetect implemented in EEGLAB. The high cycle detection accuracy achieved by our algorithm without using ECG waveforms makes it possible to create and automate pipelines for processing large EEG-fMRI datasets, and virtually eliminates the need for ECG recordings for BCG artifact removal. Copyright © 2018 The Authors. Published by Elsevier B.V. All rights reserved.
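The two core steps named above, finding the fundamental frequency from the energy spectral density and band-pass filtering around it before picking one peak per cycle, can be sketched on a synthetic BCG-like trace. This is a generic SciPy illustration, not the authors' multiple-scale algorithm; the sampling rate, heart rate, and noise level are assumptions, and the ICA extraction step is skipped.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, find_peaks

fs = 250.0                          # EEG sampling rate, Hz (assumed)
t = np.arange(0, 30, 1 / fs)        # half-minute record, as in the paper
f0_true = 1.2                       # ~72 beats per minute
rng = np.random.default_rng(4)

# Synthetic BCG-like component: periodic waveform with a harmonic + noise.
bcg = (np.sin(2 * np.pi * f0_true * t)
       + 0.3 * np.sin(4 * np.pi * f0_true * t)
       + 0.2 * rng.normal(size=t.size))

# 1) Fundamental frequency from the energy spectral density, restricted
#    to a plausible heart-rate band (0.5-3.0 Hz).
spec = np.abs(np.fft.rfft(bcg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
band = (freqs > 0.5) & (freqs < 3.0)
f0 = freqs[band][np.argmax(spec[band])]

# 2) Band-pass around f0, then keep at most one peak per estimated cycle.
sos = butter(2, [0.7 * f0, 1.3 * f0], btype="band", fs=fs, output="sos")
smooth = sosfiltfilt(sos, bcg)
peaks, _ = find_peaks(smooth, distance=int(0.7 * fs / f0))
```

The `distance` constraint (70% of the estimated period) is what enforces one detected peak per cycle even when the filtered waveform has secondary wiggles.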
Development of at-wavelength metrology for x-ray optics at the ALS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yashchuk, Valeriy V.; Goldberg, Kenneth A.; Yuan, Sheng
2010-07-09
The comprehensive realization of the exciting advantages of new third- and fourth-generation synchrotron radiation light sources requires concomitant development of reflecting and diffractive x-ray optics capable of micro- and nano-focusing, brightness preservation, and super-high resolution. The fabrication, tuning, and alignment of the optics are impossible without adequate metrology instrumentation, methods, and techniques. While the accuracy of ex situ optical metrology at the Advanced Light Source (ALS) has reached a state-of-the-art level, wavefront control on beamlines is often limited by environmental and systematic alignment factors and inadequate in situ feedback. At ALS beamline 5.3.1, we are developing broadly applicable, high-accuracy, in situ, at-wavelength wavefront measurement techniques to surpass 100-nrad slope measurement accuracy for Kirkpatrick-Baez (KB) mirrors. The at-wavelength methodology we are developing relies on a series of tests with increasing accuracy and sensitivity. Geometric Hartmann tests, performed with a scanning illuminated sub-aperture, determine the wavefront slope across the full mirror aperture. Shearing interferometry techniques use coherent illumination and provide higher-sensitivity wavefront measurements. Combining these techniques with high-precision optical metrology and experimental methods will enable us to provide in situ setting and alignment of bendable x-ray optics to realize diffraction-limited, sub-50 nm focusing at beamlines. We describe here details of the metrology beamline endstation, the x-ray beam diagnostic system, and original experimental techniques that have already allowed us to precisely set a bendable KB mirror to achieve a focused spot size of 150 nm.
Rotman, Oren Moshe; Weiss, Dar; Zaretsky, Uri; Shitzer, Avraham; Einav, Shmuel
2015-09-18
High-accuracy differential pressure measurements are required in various biomedical and medical applications, such as in fluid-dynamic test systems or in the cath-lab. Differential pressure measurements using fluid-filled catheters are relatively inexpensive, yet may be subject to common mode pressure errors (CMP), which can significantly reduce measurement accuracy. Recently, a novel correction method for high-accuracy differential pressure measurements was presented and shown to effectively remove CMP distortions from measurements acquired in rigid tubes. The purpose of the present study was to test the feasibility of this correction method inside compliant tubes, which effectively simulate arteries. Two tubes of varying compliance were tested under dynamic flow and pressure conditions to cover the physiological range of radial distensibility in coronary arteries. A third, compliant model with a 70% stenosis severity was additionally tested. Differential pressure measurements were acquired over a 3 cm tube length using a fluid-filled double-lumen catheter and were corrected using the proposed CMP correction method. Validation of the corrected differential pressure signals was performed by comparison with differential pressure recordings taken via a direct connection to the compliant tubes, and by comparison with the predicted differential pressure readings of matching fluid-structure interaction (FSI) computational simulations. The results show excellent agreement between the experimentally acquired and computationally determined differential pressure signals. This validates the application of the CMP correction method in compliant tubes within the physiological range, up to an intermediate stenosis severity of 70%. Copyright © 2015 Elsevier Ltd. All rights reserved.
Weissberger, Gali H; Strong, Jessica V; Stefanidis, Kayla B; Summers, Mathew J; Bondi, Mark W; Stricker, Nikki H
2017-12-01
With an increasing focus on biomarkers in dementia research, illustrating the role of neuropsychological assessment in detecting mild cognitive impairment (MCI) and Alzheimer's dementia (AD) is important. This systematic review and meta-analysis, conducted in accordance with PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) standards, summarizes the sensitivity and specificity of memory measures in individuals with MCI and AD. Both meta-analytic and qualitative examination of AD versus healthy control (HC) studies (n = 47) revealed generally high sensitivity and specificity (≥ 80% for AD comparisons) for measures of immediate (sensitivity = 87%, specificity = 88%) and delayed memory (sensitivity = 89%, specificity = 89%), especially those involving word-list recall. Examination of MCI versus HC studies (n = 38) revealed generally lower diagnostic accuracy for both immediate (sensitivity = 72%, specificity = 81%) and delayed memory (sensitivity = 75%, specificity = 81%). Measures that differentiated AD from other conditions (n = 10 studies) yielded mixed results, with generally high sensitivity in the context of low or variable specificity. Results confirm that memory measures have high diagnostic accuracy for identification of AD, are promising but require further refinement for identification of MCI, and provide support for ongoing investigation of neuropsychological assessment as a cognitive biomarker of preclinical AD. Emphasizing diagnostic test accuracy statistics over null hypothesis testing in future studies will promote the ongoing use of neuropsychological tests as Alzheimer's disease research and clinical criteria increasingly rely upon cerebrospinal fluid (CSF) and neuroimaging biomarkers.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kim, Anthony; Ravi, Ananth
2014-08-15
High dose rate (HDR) remote afterloading brachytherapy involves sending a small, high-activity radioactive source attached to a cable to different positions within a hollow applicator implanted in the patient. It is critical that the source position within the applicator and the dwell time of the source are accurate. Daily quality assurance (QA) tests of positional and dwell time accuracy are essential to ensure that the accuracy of the remote afterloader is not compromised prior to patient treatment. Our centre has developed an automated, video-based QA system for HDR brachytherapy that is dramatically superior to existing diode or film QA solutions in terms of cost, objectivity, and positional accuracy, with additional functionality such as the ability to determine the dwell time and transit time of the source. In our system, a video is taken of the brachytherapy source as it is sent out through a position check ruler, with the source visible through a clear window. Using a proprietary image analysis algorithm, the source position is determined with respect to time as it moves to different positions along the check ruler. The total material cost of the video-based system was under $20, consisting of a commercial webcam and adjustable stand. The accuracy of the position measurement is ±0.2 mm, and the time resolution is 30 msec. Additionally, our system is capable of robustly verifying the source transit time and velocity (a test required by the AAPM and CPQR recommendations), which is currently difficult to perform accurately.
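The core of any video-based position measurement is extracting a sub-pixel source position from each frame. The centre's algorithm is proprietary; the sketch below shows one common approach, an intensity-weighted centroid over a thresholded bright region (the threshold fraction is an assumption):

```python
import numpy as np

def track_source(frames, threshold=0.5):
    """Per-frame centroid of the bright source region (illustrative sketch,
    not the centre's proprietary algorithm).

    `frames` is a (T, H, W) array of grayscale video frames."""
    positions = []
    for frame in frames:
        mask = frame > threshold * frame.max()   # isolate the bright source
        ys, xs = np.nonzero(mask)
        # intensity-weighted centroid gives a sub-pixel position estimate
        w = frame[ys, xs]
        positions.append((np.sum(xs * w) / np.sum(w),
                          np.sum(ys * w) / np.sum(w)))
    return np.array(positions)   # (T, 2) array of (x, y) positions
```

Differencing successive positions against the frame timestamps would then yield the transit velocity the abstract mentions.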
Electromagnetic navigation system for CT-guided biopsy of small lesions.
Appelbaum, Liat; Sosna, Jacob; Nissenbaum, Yizhak; Benshtein, Alexander; Goldberg, S Nahum
2011-05-01
The purpose of this study was to evaluate an electromagnetic navigation system for CT-guided biopsy of small lesions. Standardized CT anthropomorphic phantoms were biopsied by two attending radiologists. CT scans of the phantom and surface electromagnetic fiducial markers were imported into the memory of the 3D electromagnetic navigation system. Each radiologist assessed the accuracy of biopsy using electromagnetic navigation alone by targeting sets of nine lesions (size range, 8-14 mm; skin-to-target distance, 5.7-12.8 cm) under eight different conditions of detector field strength and orientation (n = 117). As a control, each radiologist also biopsied two sets of five targets using conventional CT-guided technique. Biopsy accuracy, number of needle passes, procedure time, and radiation dose were compared. Under optimal conditions (phantom perpendicular to the electromagnetic receiver at the highest possible field strength), phantom accuracy to the center of the lesion was 2.6 ± 1.1 mm. This translated into hitting 84.4% (38/45) of targets in a single pass (1.1 ± 0.4 CT confirmations), significantly fewer than the 3.6 ± 1.3 CT checks required for conventional technique (p < 0.001). The mean targeting time was 38.8 ± 18.2 seconds per lesion. Including procedural planning (∼5.5 minutes) and final CT confirmation of placement (∼3.5 minutes), the full electromagnetic tracking procedure required significantly less time (551.6 ± 87.4 seconds [∼9 minutes]) than conventional CT (833.3 ± 283.8 seconds [∼14 minutes]) for successful targeting (p < 0.001). Less favorable conditions, including a nonperpendicular orientation of the phantom relative to the receiver and weaker field strength, resulted in statistically significantly lower accuracy (3.7 ± 1 mm, p < 0.001). Nevertheless, first-pass biopsy accuracy was 58.3% (21/36) and second-pass accuracy was 97.2% (35/36). Lesions farther from the skin than 20-25 cm were out of range for successful electromagnetic tracking.
Virtual electromagnetic tracking appears to have high accuracy in needle placement, potentially reducing time and radiation exposure compared with those of conventional CT techniques in the biopsy of small lesions.
40 CFR 63.8 - Monitoring requirements.
Code of Federal Regulations, 2010 CFR
2010-07-01
... in the relevant standard; or (B) The CMS fails a performance test audit (e.g., cylinder gas audit), relative accuracy audit, relative accuracy test audit, or linearity test audit; or (C) The COMS CD exceeds...) Data recording, calculations, and reporting; (v) Accuracy audit procedures, including sampling and...
Lee, Juneyoung; Kim, Kyung Won; Choi, Sang Hyun; Huh, Jimi
2015-01-01
Meta-analysis of diagnostic test accuracy studies differs from the usual meta-analysis of therapeutic/interventional studies in that it requires the simultaneous analysis of a pair of outcome measures, such as sensitivity and specificity, instead of a single outcome. Since sensitivity and specificity are generally inversely correlated and can be affected by a threshold effect, more sophisticated statistical methods are required for the meta-analysis of diagnostic test accuracy. Hierarchical models, including the bivariate model and the hierarchical summary receiver operating characteristic model, are increasingly being accepted as standard methods for meta-analysis of diagnostic test accuracy studies. We provide a conceptual review of statistical methods currently used and recommended for meta-analysis of diagnostic test accuracy studies. This article could serve as a methodological reference for those who perform systematic review and meta-analysis of diagnostic test accuracy studies. PMID:26576107
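For intuition about why a pair of outcomes complicates pooling, a fixed-effect inverse-variance pooling of a single proportion on the logit scale can be sketched as below. This is a deliberately simplified univariate example; as the article argues, bivariate and hierarchical SROC models that pool sensitivity and specificity jointly are the recommended methods:

```python
import math

def pool_logit(events, totals):
    """Inverse-variance pooled proportion on the logit scale (fixed-effect).

    A simplified univariate sketch -- e.g. pooling sensitivity from
    per-study true-positive counts over diseased totals. The review's
    recommended bivariate models pool sensitivity and specificity jointly
    and model their correlation."""
    logits, weights = [], []
    for e, n in zip(events, totals):
        e, n = e + 0.5, n + 1.0          # continuity correction
        p = e / n
        logit = math.log(p / (1 - p))
        var = 1.0 / e + 1.0 / (n - e)    # approximate variance of the logit
        logits.append(logit)
        weights.append(1.0 / var)
    pooled = sum(l * w for l, w in zip(logits, weights)) / sum(weights)
    return 1.0 / (1.0 + math.exp(-pooled))   # back-transform to a proportion
```

Running the same pooling independently for sensitivity and specificity ignores their negative correlation, which is exactly the limitation the hierarchical models address.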
Is choline PET useful for identifying intraprostatic tumour lesions? A literature review.
Chan, Joachim; Syndikus, Isabel; Mahmood, Shelan; Bell, Lynn; Vinjamuri, Sobhan
2015-09-01
More than 80% of patients with intermediate-risk or high-risk localized prostate cancer are cured with radiation doses of 74-78 Gy, but high doses increase the risk for late bowel and bladder toxicity among long-term survivors. Dose painting, defined as dose escalation to areas in the prostate containing the tumour, rather than to the whole gland, minimizes dose to normal tissues and hence toxicity. It requires accurate identification of the location and size of these lesions, for which functional MRI is the current gold standard. Many studies have assessed the use of choline PET in staging newly diagnosed patients. This review will discuss important imaging variables affecting the accuracy of choline PET scans, how choline PET contributes to tumour identification and is used in radiotherapy planning and how PET can improve the patient pathway involving prostate radiotherapy. In summary, the available literature shows that the accuracy of choline PET improves with higher tracer doses and delayed imaging (although the optimal uptake time is unclear), and tumour identification by MRI is improved by the addition of PET imaging. We propose future research with prolonged choline uptake time and multiphase imaging, which may further improve accuracy.
Variable curvature mirror having variable thickness: design and fabrication
NASA Astrophysics Data System (ADS)
Zhao, Hui; Xie, Xiaopeng; Xu, Liang; Ding, Jiaoteng; Shen, Le; Gong, Jie
2017-10-01
A variable curvature mirror (VCM) can change its curvature radius dynamically and is usually used to correct the defocus and spherical aberration caused by the thermal lens effect, improving the output beam quality of high-power solid-state lasers. Recently, the possible application of VCMs to realizing optical zoom imaging in the visible band without moving elements has received much attention. The basic requirement for a VCM is that it provide a large enough sagitta variation while maintaining high surface figure accuracy. Therefore, in this manuscript, by combining pressurization-based actuation with a variable-thickness mirror design, the goal of obtaining large sagitta variation while maintaining good surface figure accuracy is achieved. A prototype zoom mirror with a diameter of 120 mm and a central thickness of 8 mm is designed, fabricated, and tested. Experimental results demonstrate that the zoom mirror, having an initial surface figure accuracy superior to 1/80λ, can provide more than 36 μm of sagitta variation, and after the curvature variation its surface figure accuracy remains superior to 1/40λ with the spherical aberration removed, which proves the effectiveness of the theoretical design.
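The link between curvature radius and sagitta over a fixed aperture is simple geometry, sketched below. The 50 m radius in the comment is an illustrative value consistent with the quoted aperture and sag, not a figure from the paper:

```python
import math

def sagitta(radius_mm, diameter_mm):
    """Sagitta (sag) of a spherical surface of curvature radius R over a
    clear aperture of diameter D: sag = R - sqrt(R^2 - (D/2)^2)."""
    half = diameter_mm / 2.0
    return radius_mm - math.sqrt(radius_mm ** 2 - half ** 2)

# Over a 120 mm aperture, a 36 um sag change corresponds to the radius
# sweeping roughly from flat to ~50 m (illustrative):
# sagitta(50_000, 120) ~= 0.036 mm
```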
Rossini, Gabriele; Parrini, Simone; Castroflorio, Tommaso; Deregibus, Andrea; Debernardi, Cesare L
2016-02-01
Our objective was to assess the accuracy, validity, and reliability of measurements obtained from virtual dental study models compared with those obtained from plaster models. PubMed, PubMed Central, National Library of Medicine Medline, Embase, Cochrane Central Register of Controlled Clinical trials, Web of Knowledge, Scopus, Google Scholar, and LILACs were searched from January 2000 to November 2014. A grading system described by the Swedish Council on Technology Assessment in Health Care and the Cochrane tool for risk of bias assessment were used to rate the methodologic quality of the articles. Thirty-five relevant articles were selected. The methodologic quality was high. No significant differences were observed for most of the studies in all the measured parameters, with the exception of the American Board of Orthodontics Objective Grading System. Digital models are as reliable as traditional plaster models, with high accuracy, reliability, and reproducibility. Landmark identification, rather than the measuring device or the software, appears to be the greatest limitation. Furthermore, with their advantages in terms of cost, time, and space required, digital models could be considered the new gold standard in current practice. Copyright © 2016 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
High-accuracy self-calibration method for dual-axis rotation-modulating RLG-INS
NASA Astrophysics Data System (ADS)
Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Long, Xingwu
2017-05-01
The inertial navigation system is the core component of both military and civil navigation systems. Dual-axis rotation modulation can completely eliminate the constant errors of the inertial elements on all three axes, improving system accuracy. However, errors caused by misalignment angles and scale factor error cannot be eliminated through dual-axis rotation modulation, and discrete calibration methods cannot meet the requirements for high-accuracy calibration of a mechanically dithered ring laser gyroscope navigation system with shock absorbers. This paper analyzes the effect of calibration error over one modulation period and presents a new systematic self-calibration method for a dual-axis rotation-modulating RLG-INS. A procedure for self-calibration of the dual-axis rotation-modulating RLG-INS has been designed. The results of a self-calibration simulation experiment show that this scheme can estimate all the errors in the calibration error model: the calibration precision of the inertial sensors' scale factor error is less than 1 ppm and the misalignment is less than 5″. These results validate the systematic self-calibration method and prove its importance for improving the accuracy of dual-axis rotation inertial navigation systems with mechanically dithered ring laser gyroscopes.
Targeting an efficient target-to-target interval for P300 speller brain–computer interfaces
Sellers, Eric W.; Wang, Xingyu
2013-01-01
Longer target-to-target intervals (TTIs) produce greater P300 event-related potential amplitude, which can increase brain–computer interface (BCI) classification accuracy and decrease the number of flashes needed for accurate character classification. However, longer TTIs require more time for each trial, which decreases the information transfer rate of the BCI. In this paper, a P300 BCI using a 7 × 12 matrix explored new flash patterns (16-, 18- and 21-flash patterns) with different TTIs to assess the effects of TTI on P300 BCI performance. The new flash patterns were designed to minimize TTI, decrease repetition blindness, and examine the temporal relationship between flashes of a given stimulus by placing a minimum of one (16-flash pattern), two (18-flash pattern), or three (21-flash pattern) non-target flashes between successive target flashes. Online results showed that the 16-flash pattern yielded the lowest classification accuracy among the three patterns. The results also showed that the 18-flash pattern provides a significantly higher information transfer rate (ITR) than the 21-flash pattern; both patterns provide high ITR and high accuracy for all subjects. PMID:22350331
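The design constraint above, at least k non-target flashes between successive flashes of the same stimulus group, can be verified with a few lines. The function below is an illustrative utility, not the authors' code:

```python
def min_gap(sequence):
    """Smallest number of intervening flashes between repeat flashes of the
    same stimulus group in one flash sequence.

    Illustrative check: the paper's 16-, 18-, and 21-flash patterns enforce
    minimum gaps of 1, 2, and 3 respectively. Returns None if no group
    repeats."""
    last_seen, gap = {}, None
    for i, group in enumerate(sequence):
        if group in last_seen:
            g = i - last_seen[group] - 1    # flashes between the two occurrences
            gap = g if gap is None else min(gap, g)
        last_seen[group] = i
    return gap
```

For example, a sequence that cycles through three groups twice, `[1, 2, 3, 1, 2, 3]`, has a minimum gap of two non-target flashes between repeats of any group.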
Contact-aware simulations of particulate Stokesian suspensions
NASA Astrophysics Data System (ADS)
Lu, Libin; Rahimian, Abtin; Zorin, Denis
2017-10-01
We present an efficient, accurate, and robust method for simulation of dense suspensions of deformable and rigid particles immersed in Stokesian fluid in two dimensions. We use a well-established boundary integral formulation for the problem as the foundation of our approach. This type of formulation, with a high-order spatial discretization and an implicit and adaptive time discretization, has been shown to handle complex interactions between particles with high accuracy. Yet, for dense suspensions, very small time-steps or expensive implicit solves, as well as a large number of discretization points, are required to avoid non-physical contact and intersections between particles, which lead to infinite forces and numerical instability. Our method maintains the accuracy of previous methods at a significantly lower cost for dense suspensions. The key idea is to ensure an interference-free configuration by introducing explicit contact constraints into the system. While such constraints are unnecessary in the continuous formulation, in the discrete form of the problem they make it possible to eliminate catastrophic loss of accuracy by preventing contact explicitly. Introducing contact constraints results in a significant increase in stable time-step size for explicit time-stepping, and a reduction in the number of points adequate for stability.
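As a toy illustration of explicit contact handling, the sketch below projects overlapping rigid disks to an interference-free configuration by repeated pairwise separation. The paper's method instead imposes contact constraints inside the boundary-integral time-stepper; this conveys only the geometric idea:

```python
import numpy as np

def enforce_no_overlap(centers, radius, iters=50):
    """Push apart any overlapping pair of equal-radius disks until the
    configuration is interference-free (toy position-projection sketch)."""
    centers = centers.astype(float).copy()
    for _ in range(iters):
        moved = False
        for i in range(len(centers)):
            for j in range(i + 1, len(centers)):
                d = centers[j] - centers[i]
                dist = np.linalg.norm(d)
                if dist < 1e-12:
                    continue              # coincident centres: skip in this sketch
                overlap = 2 * radius - dist
                if overlap > 0:           # contact violated: separate the pair
                    n = d / dist
                    centers[i] -= 0.5 * overlap * n
                    centers[j] += 0.5 * overlap * n
                    moved = True
        if not moved:                     # fixed point: no overlaps remain
            break
    return centers
```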
Wen, Ying; Hou, Lili; He, Lianghua; Peterson, Bradley S; Xu, Dongrong
2015-05-01
Spatial normalization plays a key role in voxel-based analyses of brain images. We propose a highly accurate algorithm for high-dimensional spatial normalization of brain images based on the technique of symmetric optical flow. We first construct a three-dimensional optical flow model under the assumptions of intensity consistency and consistency of the gradient of intensity, with a constraint of discontinuity-preserving spatio-temporal smoothness. Then, an efficient inverse-consistent optical flow is proposed for higher registration accuracy, in which the flow is naturally symmetric. By employing a hierarchical strategy ranging from coarse to fine scales of resolution together with Euler-Lagrange numerical analysis, our algorithm is capable of registering brain image data. Experiments using both simulated and real datasets demonstrated that the accuracy of our algorithm is not only better than that of traditional optical flow algorithms, but also comparable to that of other registration methods used extensively in the medical imaging community. Moreover, our registration algorithm is fully automated, requiring a very limited number of parameters and no manual intervention. Copyright © 2015 Elsevier Inc. All rights reserved.
Articulatory Control in Childhood Apraxia of Speech in a Novel Word-Learning Task.
Case, Julie; Grigos, Maria I
2016-12-01
Articulatory control and speech production accuracy were examined in children with childhood apraxia of speech (CAS) and typically developing (TD) controls within a novel word-learning task to better understand the influence of planning and programming deficits in the production of unfamiliar words. Participants included 16 children between the ages of 5 and 6 years (8 CAS, 8 TD). Short- and long-term changes in lip and jaw movement, consonant and vowel accuracy, and token-to-token consistency were measured for 2 novel words that differed in articulatory complexity. Children with CAS displayed short- and long-term changes in consonant accuracy and consistency. Lip and jaw movements did not change over time. Jaw movement duration was longer in children with CAS than in TD controls. Movement stability differed between low- and high-complexity words in both groups. Children with CAS displayed a learning effect for consonant accuracy and consistency. Lack of change in movement stability may indicate that children with CAS require additional practice to demonstrate changes in speech motor control, even within production of novel word targets with greater consonant and vowel accuracy and consistency. The longer movement duration observed in children with CAS is believed to give children additional time to plan and program movements within a novel skill.
Solving the stability-accuracy-diversity dilemma of recommender systems
NASA Astrophysics Data System (ADS)
Hou, Lei; Liu, Kecheng; Liu, Jianguo; Zhang, Runtong
2017-02-01
Recommender systems are of great significance in predicting potentially interesting items based on the target user's historical selections. However, the recommendation list for a specific user has been found to change drastically when the system changes, due to the unstable quantification of item similarities; this is defined as the recommendation stability problem. Improving similarity stability and recommendation stability is crucial for enhancing the user experience and for better understanding user interests. While the stability as well as accuracy of recommendation could be guaranteed by recommending only popular items, studies have addressed the necessity of diversity, which requires the system to recommend unpopular items. By ranking the similarities in terms of stability and considering only the most stable ones, we present a top-n-stability method based on the Heat Conduction algorithm (denoted as TNS-HC henceforth) for solving the stability-accuracy-diversity dilemma. Experiments on four benchmark data sets indicate that the TNS-HC algorithm can significantly improve recommendation stability and accuracy simultaneously while retaining the high-diversity nature of the Heat Conduction algorithm. Furthermore, we compare the performance of the TNS-HC algorithm with a number of benchmark recommendation algorithms. The results suggest that the TNS-HC algorithm is more efficient in solving the stability-accuracy-diversity dilemma of recommender systems.
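The underlying Heat Conduction scoring, on which TNS-HC builds, propagates a "resource" across the user-item bipartite graph by averaging at each step. A minimal sketch of this baseline follows; the stability-based top-n similarity filtering that defines TNS-HC is omitted:

```python
import numpy as np

def heat_conduction_scores(A, user):
    """Heat Conduction (HeatS) recommendation scores on a user-item
    bipartite graph.

    A is the (users x items) 0/1 adjacency matrix; `user` indexes the
    target user. Each propagation step takes an average over a node's
    neighbours, which is what favours unpopular (low-degree) items."""
    k_item = A.sum(axis=0)                 # item degrees
    k_user = A.sum(axis=1)                 # user degrees
    f = A[user].astype(float)              # initial resource: the user's selections
    # items -> users: each user averages the resource on its collected items
    u = (A @ f) / np.maximum(k_user, 1)
    # users -> items: each item averages the resource of its collectors
    scores = (A.T @ u) / np.maximum(k_item, 1)
    scores[A[user] > 0] = -np.inf          # do not re-recommend collected items
    return scores
```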
Urban Land Cover Mapping Accuracy Assessment - A Cost-benefit Analysis Approach
NASA Astrophysics Data System (ADS)
Xiao, T.
2012-12-01
One of the most important components of urban land cover mapping is mapping accuracy assessment. Many statistical models have been developed to help design sampling schemes based on both accuracy and confidence levels. It is intuitive that an increased number of samples increases the accuracy as well as the cost of an assessment. Understanding cost and sample size is therefore crucial to implementing efficient and effective field data collection. Few studies have included a cost calculation component as part of the assessment. In this study, a cost-benefit sampling analysis model was created by combining sample size design with sampling cost calculation. The sampling cost included transportation cost, field data collection cost, and laboratory data analysis cost. Simple Random Sampling (SRS) and Modified Systematic Sampling (MSS) methods were used to design sample locations and to extract land cover data in ArcGIS. High-resolution land cover data layers of Denver, CO and Sacramento, CA, street networks, and parcel GIS data layers were used in this study to test and verify the model. The relationship between cost and accuracy was used to determine the effectiveness of each sampling method. The results of this study can be applied to other environmental studies that require spatial sampling.
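A minimal sketch of the cost-benefit coupling: a standard simple-random-sampling size formula combined with a per-site cost model using the study's three cost categories. The formula follows common accuracy-assessment practice, and the per-site figures in any call are hypothetical:

```python
import math

def srs_sample_size(p, margin, z=1.96):
    """Simple-random-sampling size for estimating overall map accuracy p
    to within +/- margin at ~95% confidence: n = z^2 * p * (1 - p) / d^2."""
    return math.ceil(z ** 2 * p * (1 - p) / margin ** 2)

def total_cost(n, travel_per_site, field_per_site, lab_per_site):
    """Total assessment cost for n sites from the three categories the
    study models: transportation, field collection, and lab analysis.
    Per-site figures here are hypothetical."""
    return n * (travel_per_site + field_per_site + lab_per_site)
```

Sweeping `margin` and plotting `total_cost(srs_sample_size(p, margin), ...)` against the achieved precision reproduces the kind of cost-versus-accuracy trade-off curve the study uses to compare sampling methods.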
NASA Technical Reports Server (NTRS)
Whalen, Robert T.; Napel, Sandy; Yan, Chye H.
1996-01-01
Progress in development of the methods required to study bone remodeling as a function of time is reported. The following topics are presented: 'A New Methodology for Registration Accuracy Evaluation', 'Registration of Serial Skeletal Images for Accurately Measuring Changes in Bone Density', and 'Precise and Accurate Gold Standard for Multimodality and Serial Registration Method Evaluations.'
Geomagnetic referencing--the real-time compass for directional drillers
Buchanan, Andrew; Finn, Carol; Love, Jeffrey J.; Worthington, E. William; Lawson, Fraser; Maus, Stefan; Okewunmi, Shola; Poedjono, Benny
2013-01-01
To pinpoint the location and direction of a wellbore, directional drillers rely on measurements from accelerometers, magnetometers, and gyroscopes. In the past, high-accuracy guidance methods required a halt in drilling to obtain directional measurements. Advances in geomagnetic referencing now allow companies to use real-time data acquired during drilling to accurately position horizontal wells, decrease well spacing, and drill multiple wells from limited surface locations.
Airbreathing hypersonic vehicle design and analysis methods
NASA Technical Reports Server (NTRS)
Lockwood, Mary Kae; Petley, Dennis H.; Hunt, James L.; Martin, John G.
1996-01-01
The design, analysis, and optimization of airbreathing hypersonic vehicles requires analyses involving many highly coupled disciplines at levels of accuracy exceeding those traditionally considered in a conceptual or preliminary-level design. Discipline analysis methods including propulsion, structures, thermal management, geometry, aerodynamics, performance, synthesis, sizing, closure, and cost are discussed. Also, the on-going integration of these methods into a working environment, known as HOLIST, is described.
NASA Astrophysics Data System (ADS)
Richards, Lisa M.; Kazmi, S. M. S.; Olin, Katherine E.; Waldron, James S.; Fox, Douglas J.; Dunn, Andrew K.
2017-03-01
Monitoring cerebral blood flow (CBF) during neurosurgery is essential for detecting ischemia in a timely manner for a wide range of procedures. Multiple clinical studies have demonstrated that laser speckle contrast imaging (LSCI) has high potential to be a valuable, label-free CBF monitoring technique during neurosurgery. LSCI is an optical imaging method that provides blood flow maps with high spatiotemporal resolution requiring only a coherent light source, a lens system, and a camera. However, the quantitative accuracy and sensitivity of LSCI is limited and highly dependent on the exposure time. An extension to LSCI called multi-exposure speckle imaging (MESI) overcomes these limitations, and was evaluated intraoperatively in patients undergoing brain tumor resection. This clinical study (n = 7) recorded multiple exposure times from the same cortical tissue area, and demonstrates that shorter exposure times (≤1 ms) provide the highest dynamic range and sensitivity for sampling flow rates in human neurovasculature. This study also combined exposure times using the MESI model, demonstrating high correlation with proper image calibration and acquisition. The physiological accuracy of speckle-estimated flow was validated using conservation of flow analysis on vascular bifurcations. Flow estimates were highly conserved in MESI and 1 ms exposure LSCI, with percent errors at 6.4% ± 5.3% and 7.2% ± 7.2%, respectively, while 5 ms exposure LSCI had higher errors at 21% ± 10% (n = 14 bifurcations). Results from this study demonstrate the importance of exposure time selection for LSCI, and that intraoperative MESI can be performed with high quantitative accuracy.
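The quantity underlying LSCI is the local speckle contrast K = σ/⟨I⟩ computed over a sliding window, where lower K corresponds to faster flow at a given exposure. A minimal sketch follows; the window size is a typical choice, not a parameter from this study:

```python
import numpy as np
from scipy.ndimage import uniform_filter

def speckle_contrast(image, window=7):
    """Local speckle contrast K = sigma / mean over a sliding window,
    the basic LSCI quantity (lower K ~ faster flow at fixed exposure)."""
    img = image.astype(float)
    mean = uniform_filter(img, size=window)
    mean_sq = uniform_filter(img ** 2, size=window)
    var = np.maximum(mean_sq - mean ** 2, 0.0)   # clip tiny negative round-off
    return np.sqrt(var) / np.maximum(mean, 1e-12)
```

For fully developed static speckle the intensity is exponentially distributed and K approaches 1; flow blurs the speckle within the exposure and drives K toward 0, which is why exposure time sets the measurable flow range discussed above.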
Code of Federal Regulations, 2013 CFR
2013-10-01
... measured and reported independently. (iv) Accuracy data from both network-based solutions and handset-based solutions may be blended to measure compliance with the accuracy requirements of paragraph (h)(1)(i)(A...'s subscriber base. The weighting ratio shall be applied to the accuracy data from each solution and...
Code of Federal Regulations, 2012 CFR
2012-10-01
... measured and reported independently. (iv) Accuracy data from both network-based solutions and handset-based solutions may be blended to measure compliance with the accuracy requirements of paragraph (h)(1)(i)(A...'s subscriber base. The weighting ratio shall be applied to the accuracy data from each solution and...
Code of Federal Regulations, 2011 CFR
2011-10-01
... measured and reported independently. (iv) Accuracy data from both network-based solutions and handset-based solutions may be blended to measure compliance with the accuracy requirements of paragraph (h)(1)(i)(A...'s subscriber base. The weighting ratio shall be applied to the accuracy data from each solution and...
10 CFR 54.13 - Completeness and accuracy of information.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 54.13 Section 54.13 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) REQUIREMENTS FOR RENEWAL OF OPERATING LICENSES FOR NUCLEAR POWER PLANTS General Provisions § 54.13 Completeness and accuracy of information. (a...
10 CFR 54.13 - Completeness and accuracy of information.
Code of Federal Regulations, 2011 CFR
2011-01-01
... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 54.13 Section 54.13 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) REQUIREMENTS FOR RENEWAL OF OPERATING LICENSES FOR NUCLEAR POWER PLANTS General Provisions § 54.13 Completeness and accuracy of information. (a...
10 CFR 54.13 - Completeness and accuracy of information.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 10 Energy 2 2013-01-01 2013-01-01 false Completeness and accuracy of information. 54.13 Section 54.13 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) REQUIREMENTS FOR RENEWAL OF OPERATING LICENSES FOR NUCLEAR POWER PLANTS General Provisions § 54.13 Completeness and accuracy of information. (a...
10 CFR 54.13 - Completeness and accuracy of information.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 54.13 Section 54.13 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) REQUIREMENTS FOR RENEWAL OF OPERATING LICENSES FOR NUCLEAR POWER PLANTS General Provisions § 54.13 Completeness and accuracy of information. (a...
10 CFR 54.13 - Completeness and accuracy of information.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 10 Energy 2 2014-01-01 2014-01-01 false Completeness and accuracy of information. 54.13 Section 54.13 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) REQUIREMENTS FOR RENEWAL OF OPERATING LICENSES FOR NUCLEAR POWER PLANTS General Provisions § 54.13 Completeness and accuracy of information. (a...
Localization of single biological molecules out of the focal plane
NASA Astrophysics Data System (ADS)
Gardini, L.; Capitanio, M.; Pavone, F. S.
2014-03-01
Since the behaviour of proteins and biological molecules is tightly related to the cell's environment, more and more microscopy techniques are moving from in vitro experiments to experiments in living cells. Observing both diffusion and active transport processes inside a cell requires three-dimensional localization over a range of a few microns, high-SNR images, and high temporal resolution (on the order of milliseconds). We developed an apparatus that combines different microscopy techniques to satisfy all the technical requirements for 3D tracking of single fluorescent molecules inside living cells with nanometer accuracy. To account for the optical sectioning of thick samples we built a HILO (Highly Inclined and Laminated Optical sheet) microscopy system through which we can excite the sample in a widefield (WF) configuration with a thin sheet of light that can follow the molecule up and down along the z axis, spanning the entire thickness of the cell with an SNR much higher than traditional WF microscopy. Since protein dynamics inside a cell involve all three dimensions, we included a method to measure the x, y, and z coordinates with nanometer accuracy, exploiting the properties of the point-spread-function of out-of-focus quantum dots bound to the protein of interest. Finally, a feedback system stabilizes the microscope against thermal drifts, ensuring accurate localization over the entire duration of the experiment.
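The z-localization idea, mapping the width of a defocused spot back to an axial position through a calibration curve, can be sketched as follows. The moment-based width estimator and any calibration values supplied to it are illustrative assumptions, not the apparatus' actual procedure:

```python
import numpy as np

def psf_width(image):
    """RMS width of a spot image via intensity-weighted second moments
    (a simple width estimator; defocus broadens the spot)."""
    img = image.astype(float)
    img -= img.min()                         # crude background removal
    total = img.sum()
    ys, xs = np.indices(img.shape)
    cx = (xs * img).sum() / total
    cy = (ys * img).sum() / total
    var = (((xs - cx) ** 2 + (ys - cy) ** 2) * img).sum() / total
    return np.sqrt(var / 2)                  # per-axis RMS width in pixels

def z_from_width(width, calib_widths, calib_z):
    """Axial position from PSF width via a monotonic calibration curve
    measured beforehand (hypothetical calibration values)."""
    return np.interp(width, calib_widths, calib_z)
```

In practice the calibration curve is measured by stepping a fiducial (e.g. an immobilized quantum dot) through known z positions and recording the spot width at each step; x and y come from the centroid as above.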
High-resolution tree canopy mapping for New York City using LIDAR and object-based image analysis
NASA Astrophysics Data System (ADS)
MacFaden, Sean W.; O'Neil-Dunne, Jarlath P. M.; Royar, Anna R.; Lu, Jacqueline W. T.; Rundle, Andrew G.
2012-01-01
Urban tree canopy is widely believed to have myriad environmental, social, and human-health benefits, but a lack of precise canopy estimates has hindered quantification of these benefits in many municipalities. This problem was addressed for New York City using object-based image analysis (OBIA) to develop a comprehensive land-cover map, including tree canopy to the scale of individual trees. Mapping was performed using a rule-based expert system that relied primarily on high-resolution LIDAR, specifically its capacity for evaluating the height and texture of aboveground features. Multispectral imagery was also used, but shadowing and varying temporal conditions limited its utility. Contextual analysis was a key part of classification, distinguishing trees according to their physical and spectral properties as well as their relationships to adjacent, nonvegetated features. The automated product was extensively reviewed and edited via manual interpretation, and overall per-pixel accuracy of the final map was 96%. Although manual editing had only a marginal effect on accuracy despite requiring a majority of project effort, it maximized aesthetic quality and ensured the capture of small, isolated trees. Converting high-resolution LIDAR and imagery into usable information is a nontrivial exercise, requiring significant processing time and labor, but an expert system-based combination of OBIA and manual review was an effective method for fine-scale canopy mapping in a complex urban environment.
40 CFR 63.5545 - What are my monitoring installation, operation, and maintenance requirements?
Code of Federal Regulations, 2013 CFR
2013-07-01
... process unit such that the measurement is representative of control of the exhaust emissions (e.g., on or..., and validation check. (9) Except for redundant sensors, any device that is used to conduct an initial validation or accuracy audit of a CPMS must meet the accuracy requirements specified in paragraphs (f)(9)(i...
40 CFR 63.5545 - What are my monitoring installation, operation, and maintenance requirements?
Code of Federal Regulations, 2011 CFR
2011-07-01
... process unit such that the measurement is representative of control of the exhaust emissions (e.g., on or..., and validation check. (9) Except for redundant sensors, any device that is used to conduct an initial validation or accuracy audit of a CPMS must meet the accuracy requirements specified in paragraphs (f)(9)(i...
Accuracy in Blood Glucose Measurement: What Will a Tightening of Requirements Yield?
Heinemann, Lutz; Lodwig, Volker; Freckmann, Guido
2012-01-01
Nowadays, almost all persons with diabetes—at least those using antidiabetic drug therapy—use one of a plethora of commercially available meters for self-monitoring of blood glucose. The accuracy of blood glucose (BG) measurement with these meters has been presumed to be adequate; that is, the accuracy of these devices was not usually questioned until recently. Health authorities in the United States (Food and Drug Administration) and in other countries are currently endeavoring to tighten the accuracy requirements for these meters beyond the level currently stated in the standard ISO 15197. At first glance, this does not appear to be a problem and hardly worth further consideration, but a closer look reveals a considerable range of critical aspects that are discussed in this commentary. In summary, one could say that as a result of modern production methods and ongoing technical advances, the demands placed on the quality of measurement results obtained with BG meters can be increased to a certain degree. One should also take into consideration that the system accuracy (which covers many more aspects than analytical accuracy alone) required to make correct therapeutic decisions certainly varies for different types of therapy. In the end, in addition to analytical accuracy, thorough and systematic training of patients and regular refresher training are important to minimize errors. Only under such circumstances will patients make appropriate therapeutic interventions to optimize and maintain metabolic control. PMID:22538158
Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems
NASA Technical Reports Server (NTRS)
Downie, John D.
1990-01-01
A ground-based adaptive optics imaging telescope system attempts to improve image quality by detecting and correcting for atmospherically induced wavefront aberrations. The required control computations during each cycle take a finite amount of time, and longer time delays result in larger residual wavefront error variance, since the atmosphere continues to change during that time. Because of its potential for high-speed computation, an optical processor may be well suited to this task. This paper presents a study of the accuracy requirements that a general optical processor must meet to be competitive with, or superior to, a conventional digital computer for the adaptive optics application. An optimization of the adaptive optics correction algorithm with respect to an optical processor's degree of accuracy is also briefly discussed.
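The penalty for a slow control loop can be illustrated with the textbook servo-lag scaling, in which residual phase variance grows as (τ/τ₀)^(5/3), τ₀ being the Greenwood time delay. Both the formula and the τ₀ value below are standard assumptions for illustration, not results from this paper:

```python
# Textbook servo-lag error model: residual wavefront phase variance
# scales as (tau / tau0)**(5/3), where tau0 is the Greenwood time delay.
# The tau0 value used here is illustrative, not from the paper.
def servo_lag_variance(tau, tau0):
    """Residual phase variance (rad^2) from a pure control delay tau (s)."""
    return (tau / tau0) ** (5.0 / 3.0)

tau0 = 3e-3  # assumed Greenwood time delay, s
for tau_ms in (0.5, 1.0, 2.0):
    var = servo_lag_variance(tau_ms * 1e-3, tau0)
    print(f"delay {tau_ms} ms -> residual variance {var:.3f} rad^2")
```

The steep 5/3 power is what makes processor speed, and hence the speed/accuracy trade of an optical processor, directly relevant to image quality.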
Software-type Wave-Particle Interaction Analyzer on board the Arase satellite
NASA Astrophysics Data System (ADS)
Katoh, Yuto; Kojima, Hirotsugu; Hikishima, Mitsuru; Takashima, Takeshi; Asamura, Kazushi; Miyoshi, Yoshizumi; Kasahara, Yoshiya; Kasahara, Satoshi; Mitani, Takefumi; Higashio, Nana; Matsuoka, Ayako; Ozaki, Mitsunori; Yagitani, Satoshi; Yokota, Shoichiro; Matsuda, Shoya; Kitahara, Masahiro; Shinohara, Iku
2018-01-01
We describe the principles of the Wave-Particle Interaction Analyzer (WPIA) and the implementation of the Software-type WPIA (S-WPIA) on the Arase satellite. The WPIA is a new type of instrument for the direct and quantitative measurement of wave-particle interactions. The S-WPIA is installed on the Arase satellite as a software function running on the mission data processor. The S-WPIA uses the electromagnetic field waveform measured by the waveform capture receiver of the plasma wave experiment (PWE) and the velocity vectors of electrons detected by the medium-energy particle experiment-electron analyzer (MEP-e), the high-energy electron experiment (HEP), and the extremely high-energy electron experiment (XEP). The prime objective of the S-WPIA is to measure the energy exchange between whistler-mode chorus emissions and energetic electrons in the inner magnetosphere. It is essential for the S-WPIA that the instruments be synchronized to a relative time accuracy better than the period of the plasma wave oscillations. Since the typical frequency of chorus emissions in the inner magnetosphere is a few kHz, a relative time accuracy of better than 10 μs is required to measure the relative phase angle between the wave and velocity vectors. On the Arase satellite, a dedicated system has been developed to realize this timing for inter-instrument communication: a time index distributed to all instruments through the satellite system is combined with an S-WPIA clock signal delivered from the PWE to the MEP-e, HEP, and XEP over a direct line, synchronizing the instruments to a relative time accuracy of a few μs. We also estimate the number of particles required to obtain statistically significant results with the S-WPIA, and the expected accumulation time, by referring to the specifications of the MEP-e and assuming a count rate for each detector.
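The timing requirement follows from simple arithmetic: a timing offset Δt at wave frequency f produces a phase uncertainty Δφ = 360·f·Δt degrees. A quick check against the numbers quoted above (chorus near a few kHz, 10 μs vs. a few μs):

```python
def phase_error_deg(freq_hz, dt_s):
    """Phase uncertainty (degrees) caused by a timing offset dt at frequency f."""
    return 360.0 * freq_hz * dt_s

# Chorus at 3 kHz (an illustrative value within "a few kHz")
print(phase_error_deg(3e3, 10e-6))  # ~10.8 degrees at 10 us
print(phase_error_deg(3e3, 3e-6))   # ~3.2 degrees at 3 us
```

This is why sub-10 μs synchronization is the threshold for resolving the wave-particle relative phase, and why the few-μs dedicated clock line gives comfortable margin.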
Comparisons of discrete and integrative sampling accuracy in estimating pulsed aquatic exposures.
Morrison, Shane A; Luttbeg, Barney; Belden, Jason B
2016-11-01
Most current-use pesticides have short half-lives in the water column, so the most relevant exposure scenarios for many aquatic organisms are pulsed exposures. Quantifying exposure using discrete water samples may not be accurate, as few studies are able to sample frequently enough to determine time-weighted average (TWA) concentrations of short aquatic exposures. Integrative sampling methods that continuously sample freely dissolved contaminants over time intervals (such as integrative passive samplers) have been demonstrated to be a promising measurement technique. We conducted several modeling scenarios to test the assumption that integrative methods may require far fewer samples for accurate estimation of peak 96-h TWA concentrations. We compared the accuracies of discrete point samples and integrative samples while varying sampling frequencies across a range of contaminant water half-lives (t50 = 0.5, 2, and 8 d). Differences in the predictive accuracy of discrete point samples and integrative samples were greatest at low sampling frequencies. For example, when the half-life was 0.5 d, discrete point samples required 7 sampling events to ensure median values > 50% of the true 96-h TWA and no sampling events reporting highly inaccurate results (defined as < 10% of the true 96-h TWA). Across all water half-lives investigated, integrative sampling required only two samples to prevent highly inaccurate results and to yield median values > 50% of the true concentration. Regardless, the need for integrative sampling diminished as water half-life increased: for an 8-d water half-life, two discrete samples produced accurate estimates and median values greater than those obtained from two integrative samples. Overall, integrative methods are the more accurate option for monitoring contaminants with short water half-lives because of the reduced frequency of extreme values, especially given uncertainties around the timing of pulsed events. However, the acceptability of discrete sampling methods for providing accurate concentration measurements increases with increasing aquatic half-lives. Copyright © 2016 Elsevier Ltd. All rights reserved.
The value of forecasting key-decision variables for rain-fed farming
NASA Astrophysics Data System (ADS)
Winsemius, Hessel; Werner, Micha
2013-04-01
Rain-fed farmers are highly vulnerable to variability in rainfall. Timely knowledge of the onset of the rainy season, the expected amount of rainfall, and the occurrence of dry spells can help rain-fed farmers plan the cropping season. Seasonal probabilistic weather forecasts may provide such information to farmers, but they need to provide reliable forecasts of the key variables on which farmers base their decisions. In this contribution, we present a new method to evaluate the value of meteorological forecasts in predicting these key variables. The proposed method measures skill by assessing whether a forecast was useful for the decision at hand, taking into account how accurately the timing of the event must be predicted for the decision to be useful. It then progresses from forecast skill to forecast value based on the cost/loss ratio of possible decisions. The method is applied over the Limpopo region in Southern Africa. We demonstrate the method using the example of temporary water harvesting techniques. Such techniques require time to construct and must be ready long enough before the occurrence of a dry spell to be effective. The value of the forecasts to this example decision is shown to be highly sensitive to the accuracy of the timing of forecasted dry spells and to the decision's tolerance of timing error. The skill with which dry spells can be predicted is shown to be higher in some parts of the basin, indicating that the forecasts have higher value for the decision in those parts than in others. By assessing the skill of forecasting key decision variables for farmers, we show whether the forecasts have value in reducing risk or whether other adaptation strategies should be implemented.
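The cost/loss framing can be illustrated with the standard relative-economic-value calculation. The notation and the example frequencies below are assumptions for illustration, not values from the study: a farmer pays cost C to protect (e.g., build water harvesting) and suffers loss L if a dry spell strikes unprotected.

```python
# Joint frequencies over many decision periods: hits a (event forecast and
# occurred), false alarms b, misses c; event frequency s = a + c.
def relative_value(a, b, c, cost_loss_ratio):
    """Relative economic value V of a forecast (1 = perfect, <= 0 = useless)."""
    alpha = cost_loss_ratio           # C / L
    s = a + c                         # climatological event frequency
    e_forecast = (a + b) * alpha + c  # mean expense when following the forecast
    e_climate = min(alpha, s)         # best of "always protect" / "never protect"
    e_perfect = s * alpha             # protect exactly when needed
    return (e_climate - e_forecast) / (e_climate - e_perfect)

# Example: dry spells in 25% of periods; the forecast hits 20%,
# false-alarms 10%, misses 5%; protection costs 0.3 of the potential loss.
print(round(relative_value(a=0.20, b=0.10, c=0.05, cost_loss_ratio=0.3), 3))  # 0.629
```

Timing tolerance enters through what counts as a "hit": a dry spell forecast too late to finish construction moves from a to c, which is why value is so sensitive to timing accuracy.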
Powell, Daniel K; Lin, Eaton; Silberzweig, James E; Kagetsu, Nolan J
2014-03-01
To retrospectively compare resident adherence to checklist-style structured reporting for maxillofacial computed tomography (CT) from the emergency department, when use of the checklist was required versus merely suggested, between two programs; to compare radiology resident reporting accuracy before and after introduction of the structured report; and to assess its ability to decrease the rate of undetected pathology. We introduced a reporting checklist for maxillofacial CT into our dictation software without specific training, requiring it at one program and suggesting it at another. We quantified usage among residents and compared reporting accuracy before and after its introduction by counting and categorizing faculty addenda. There was no significant change in resident accuracy in the first few months, with residents acting as their own controls (directly comparing performance with and without the checklist). Adherence to the checklist at program A (where it originated and was required) was 85% of reports, compared to 9% of reports at program B (where it was suggested). Using program B as a secondary control, there was no significant difference in resident accuracy with or without the checklist (comparing different residents using the checklist to those not using it). Our results suggest that checklists carry no automatic value for improving radiology resident reporting accuracy. They also suggest the importance of focused training, checklist flexibility, and a period of adjustment to a new reporting style. Mandatory checklists were readily adopted by residents, but not when simply suggested. Copyright © 2014 AUR. Published by Elsevier Inc. All rights reserved.