Science.gov

Sample records for accuracy requirements e911

  1. 76 FR 1126 - Wireless E911 Location Accuracy Requirements; E911 Requirements for IP-Enabled Service Providers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-07

    ... the Federal Register on November 2, 2010, 75 FR 67321. Thus, comments submitted in response to the... COMMISSION 47 CFR Part 20 Wireless E911 Location Accuracy Requirements; E911 Requirements for IP-Enabled... Association of State 911 Administrators (NASNA), CTIA--The Wireless Association (CTIA), and...

  2. 75 FR 70604 - Wireless E911 Location Accuracy Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-18

    ...In this document, the Federal Communications Commission (Commission) amends its rules to require wireless licensees subject to standards for wireless Enhanced 911 (E911) Phase II location accuracy and reliability to satisfy these standards at either a county-based or Public Safety Answering Point (PSAP)-based geographic level. The Commission takes this step in order to ensure an appropriate...

  3. 76 FR 59916 - Interconnected VoIP Service; Wireless E911 Location Accuracy Requirements; E911 Requirements for...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-09-28

    ...In this document, the Commission continues to strengthen its existing Enhanced 911 (E911) location accuracy regime for wireless carriers by retaining the existing handset-based and network-based location accuracy standards and the eight-year implementation period established in our September 2010 E911 Location Accuracy Second Report and Order but providing for phasing out the network-based...

  4. 76 FR 23713 - Wireless E911 Location Accuracy Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-04-28

    ... amendments to 47 CFR 20.18(h)(1)(vi), (h)(2)(iii), and (h)(3) published at 75 FR 70604, November 18, 2010... 75 FR 70604, the Commission published in the Federal Register the summary of the Second Report and... consistent compliance methodology with respect to location accuracy standards. In the notice at 75 FR...

  5. 77 FR 43536 - Wireless E911 Phase II Location Accuracy Requirements

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-25

    ... for new information collection requirements. DATES: The amendment to 47 CFR 20.18 published at 76 FR... at 76 FR 59916, September 28, 2011. The OMB Control Number is 3060-1147. The Commission publishes..., under OMB Control No. 3060-1147. The Commission announced OMB's approval and the effective date in 76...

  6. 75 FR 67321 - Wireless E911 Location Accuracy Requirements; E911 Requirements for IP-Enabled Service Providers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-02

    ... technologies combine information from diverse sources, such as Wi-Fi access points or other ubiquitous sources... Wi-Fi access points for which it knows the address, should it use this information in lieu of end... Wi-Fi networks, or some combination of both. ] 35. In its recent survey of ``the current state of...

  7. 76 FR 47114 - Wireless E911 Location Accuracy Requirements; E911 Requirements for IP-Enabled Service Providers

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-08-04

    ... commercial mobile smartphones running VoIP applications, Wi-Fi enabled VoIP handsets, portable terminal... being carried over CMRS circuit-switched and data networks, as well as on Wi- Fi and other types of..., network-based location determination, and Wi-Fi based positioning. Often, these capabilities work...

  8. 47 CFR 9.5 - E911 Service.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false E911 Service. 9.5 Section 9.5 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.5 E911 Service. (a) Scope of Section. The following requirements are only applicable to providers...

  9. 47 CFR 9.5 - E911 Service.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false E911 Service. 9.5 Section 9.5 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.5 E911 Service. (a) Scope of Section. The following requirements are only applicable to providers...

  10. 47 CFR 9.5 - E911 Service.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false E911 Service. 9.5 Section 9.5 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.5 E911 Service. (a) Scope of Section. The following requirements are only applicable to providers...

  11. 47 CFR 9.5 - E911 Service.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false E911 Service. 9.5 Section 9.5 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.5 E911 Service. (a) Scope of Section. The following requirements are only applicable to providers of interconnected VoIP services. Further,...

  12. 47 CFR 9.5 - E911 Service.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Registered Location is in a geographic area served by a Wireline E911 Network (which, as defined in § 9.3... Network all 911 calls to the PSAP, designated statewide default answering point, or appropriate local...-ANI, via the dedicated Wireline E911 Network; and (4) The Registered Location must be available to...

  13. 47 CFR Appendix C to Part 400 - Annual Certification for E-911 Grant Recipients

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Annual Certification for E-911 Grant Recipients... ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM Pt. 400, App. C Appendix C to Part 400—Annual Certification for E-911...

  14. 47 CFR Appendix C to Part 400 - Annual Certification for E-911 Grant Recipients

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Annual Certification for E-911 Grant Recipients... ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM Pt. 400, App. C Appendix C to Part 400—Annual Certification for E-911...

  15. 47 CFR Appendix C to Part 400 - Annual Certification for E-911 Grant Recipients

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Annual Certification for E-911 Grant Recipients... ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM Pt. 400, App. C Appendix C to Part 400—Annual Certification for E-911...

  16. 47 CFR Appendix C to Part 400 - Annual Certification for E-911 Grant Recipients

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Annual Certification for E-911 Grant Recipients... ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM Pt. 400, App. C Appendix C to Part 400—Annual Certification for E-911...

  17. 47 CFR Appendix C to Part 400 - Annual Certification for E-911 Grant Recipients

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Annual Certification for E-911 Grant Recipients... ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM Pt. 400, App. C Appendix C to Part 400—Annual Certification for E-911...

  18. 47 CFR Appendix B to Part 400 - Initial Certification for E-911 Grant Applicants

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 5 2010-10-01 2010-10-01 false Initial Certification for E-911 Grant... ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM Pt. 400, App. B Appendix B to Part 400—Initial Certification for E-911...

  19. 47 CFR Appendix B to Part 400 - Initial Certification for E-911 Grant Applicants

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 5 2014-10-01 2014-10-01 false Initial Certification for E-911 Grant... ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM Pt. 400, App. B Appendix B to Part 400—Initial Certification for E-911...

  20. 47 CFR Appendix B to Part 400 - Initial Certification for E-911 Grant Applicants

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 5 2011-10-01 2011-10-01 false Initial Certification for E-911 Grant... ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM Pt. 400, App. B Appendix B to Part 400—Initial Certification for E-911...

  21. 47 CFR Appendix B to Part 400 - Initial Certification for E-911 Grant Applicants

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 5 2012-10-01 2012-10-01 false Initial Certification for E-911 Grant... ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM Pt. 400, App. B Appendix B to Part 400—Initial Certification for E-911...

  22. 47 CFR Appendix B to Part 400 - Initial Certification for E-911 Grant Applicants

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 5 2013-10-01 2013-10-01 false Initial Certification for E-911 Grant... ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM Pt. 400, App. B Appendix B to Part 400—Initial Certification for E-911...

  23. 47 CFR 9.7 - Access to 911 and E911 service capabilities.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... 47 Telecommunication 1 2012-10-01 2012-10-01 false Access to 911 and E911 service capabilities. 9.7 Section 9.7 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.7 Access to 911 and E911 service capabilities. (a) Access. Subject to...

  24. 47 CFR 9.7 - Access to 911 and E911 service capabilities.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... 47 Telecommunication 1 2014-10-01 2014-10-01 false Access to 911 and E911 service capabilities. 9.7 Section 9.7 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.7 Access to 911 and E911 service capabilities. (a) Access. Subject to...

  25. 47 CFR 9.7 - Access to 911 and E911 service capabilities.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... 47 Telecommunication 1 2013-10-01 2013-10-01 false Access to 911 and E911 service capabilities. 9.7 Section 9.7 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.7 Access to 911 and E911 service capabilities. (a) Access. Subject to...

  26. 47 CFR 9.7 - Access to 911 and E911 service capabilities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 1 2010-10-01 2010-10-01 false Access to 911 and E911 service capabilities. 9.7 Section 9.7 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.7 Access to 911 and E911 service capabilities. (a) Access. Subject to...

  27. 47 CFR 9.7 - Access to 911 and E911 service capabilities.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 1 2011-10-01 2011-10-01 false Access to 911 and E911 service capabilities. 9.7 Section 9.7 Telecommunication FEDERAL COMMUNICATIONS COMMISSION GENERAL INTERCONNECTED VOICE OVER INTERNET PROTOCOL SERVICES § 9.7 Access to 911 and E911 service capabilities. (a) Access. Subject to...

  28. Accuracy requirements [for monitoring of climate changes]

    NASA Technical Reports Server (NTRS)

    Delgenio, Anthony

    1993-01-01

    Satellite and surface measurements, if they are to serve as a climate monitoring system, must be accurate enough to permit detection of changes of climate parameters on decadal time scales. The accuracy requirements are difficult to define a priori since they depend on unknown future changes of climate forcings and feedbacks. As a framework for evaluation of candidate Climsat instruments and orbits, we estimate the accuracies that would be needed to measure changes expected over two decades based on theoretical considerations including GCM simulations and on observational evidence in cases where data are available for rates of change. One major climate forcing known with reasonable accuracy is that caused by the anthropogenic homogeneously mixed greenhouse gases (CO2, CFC's, CH4 and N2O). Their net forcing since the industrial revolution began is about 2 W/sq m and it is presently increasing at a rate of about 1 W/sq m per 20 years. Thus for a competing forcing or feedback to be important, it needs to be of the order of 0.25 W/sq m or larger on this time scale. The significance of most climate feedbacks depends on their sensitivity to temperature change. Therefore we begin with an estimate of decadal temperature change. Presented are the transient temperature trends simulated by the GISS GCM when subjected to various scenarios of trace gas concentration increases. Scenario B, which represents the most plausible near-term emission rates and includes intermittent forcing by volcanic aerosols, yields a global mean surface air temperature increase Delta Ts = 0.7 degrees C over the time period 1995-2015. This is consistent with the IPCC projection of about 0.3 degrees C/decade global warming (IPCC, 1990). Several of our estimates below are based on this assumed rate of warming.
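
    The detection-threshold reasoning in this record is straightforward arithmetic, and the short Python sketch below reproduces it. The one-quarter "significance fraction" is implicit in the abstract's 0.25 W/sq m figure and is written out here as an explicit assumption; all other numbers are quoted directly from the text.

      # Back-of-envelope check of the detection-threshold arithmetic quoted above.
      # The one-quarter significance fraction is implicit in the abstract's
      # 0.25 W/sq m figure and is made an explicit assumption here.
      GHG_FORCING_RATE = 1.0 / 20.0      # W/sq m per year (~1 W/sq m per 20 years)
      SIGNIFICANCE_FRACTION = 0.25       # a competing term matters at ~1/4 of the trend
      PERIOD_YEARS = 20.0                # the two-decade monitoring horizon

      ghg_change = GHG_FORCING_RATE * PERIOD_YEARS       # ~1.0 W/sq m over 20 years
      threshold = SIGNIFICANCE_FRACTION * ghg_change     # ~0.25 W/sq m
      warming_per_decade = 0.7 / 2.0                     # deg C, Scenario B, 1995-2015

      print(f"GHG forcing change over {PERIOD_YEARS:.0f} yr: {ghg_change:.2f} W/sq m")
      print(f"Competing forcing/feedback threshold:   {threshold:.2f} W/sq m")
      print(f"Assumed warming rate: {warming_per_decade:.2f} deg C per decade")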

  29. 30 CFR 74.8 - Measurement, accuracy, and reliability requirements.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR part 51... 30 Mineral Resources 1 2010-07-01 2010-07-01 false Measurement, accuracy, and reliability... Monitors § 74.8 Measurement, accuracy, and reliability requirements. (a) Breathing zone...

  30. 30 CFR 74.8 - Measurement, accuracy, and reliability requirements.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR part 51... 30 Mineral Resources 1 2011-07-01 2011-07-01 false Measurement, accuracy, and reliability... Monitors § 74.8 Measurement, accuracy, and reliability requirements. (a) Breathing zone...

  31. Accuracy requirements and benchmark experiments for CFD validation

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.

    1988-01-01

    The role of experiment in the development of Computational Fluid Dynamics (CFD) for aerodynamic flow prediction is discussed. CFD verification is a concept that depends on closely coordinated planning between computational and experimental disciplines. Because code applications are becoming more complex and their potential for design more feasible, it no longer suffices to use experimental data from surface or integral measurements alone to provide the required verification. Flow physics and modeling, flow field, and boundary condition measurements are emerging as critical data. Four types of experiments are introduced and examples given that meet the challenge of validation: flow physics experiments; flow modeling experiments; calibration experiments; and verification experiments. Measurement and accuracy requirements for each of these differ and are discussed. A comprehensive program of validation is described, some examples given, and it is concluded that the future prospects are encouraging.

  32. 47 CFR 400.4 - Application requirements.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Telecommunication NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM § 400.4 Application requirements. (a) Contents. A State's application for funds for the E-911 grant program...

  33. 47 CFR 400.4 - Application requirements.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Telecommunication NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM § 400.4 Application requirements. (a) Contents. A State's application for funds for the E-911 grant program...

  34. 47 CFR 400.4 - Application requirements.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Telecommunication NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM § 400.4 Application requirements. (a) Contents. A State's application for funds for the E-911 grant program...

  35. 47 CFR 400.4 - Application requirements.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Telecommunication NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM § 400.4 Application requirements. (a) Contents. A State's application for funds for the E-911 grant program...

  36. 47 CFR 400.4 - Application requirements.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Telecommunication NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM § 400.4 Application requirements. (a) Contents. A State's application for funds for the E-911 grant program...

  37. Meteorological accuracy requirements for aerobraking orbital transfer vehicles

    NASA Technical Reports Server (NTRS)

    Skalecki, L. M.; Cerimele, C. J.; Gamble, J. D.

    1984-01-01

    Accuracy requirements for the prediction of atmospheric density for a transfer mission from geosynchronous orbit to low earth orbit using an aerobraking orbital transfer vehicle are presented. Uniform density variations such as would occur seasonally and diurnally were considered as well as density 'pockets' similar to what may have been observed on some of the Space Shuttle Orbiter entry flights. Variations in the lift-to-drag ratio from 0.3 to 1.5 were evaluated, with the values of the ratio of the vehicle weight to the product of the aerodynamic lift coefficient and the aerodynamic reference area ranging from 20 to 100 lb/sq ft. The results of the study indicated no problems for the range of lift-to-drag ratio values considered for uniform density variations of at least ±50 percent. However, density 'pockets' created problems if variations of ±30 percent from nominal occurred over altitude ranges of 1,000 to 10,000 ft.

  38. 75 FR 2549 - Clinical Accuracy Requirements for Point of Care Blood Glucose Meters; Public Meeting; Request...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-01-15

    ... HUMAN SERVICES Food and Drug Administration Clinical Accuracy Requirements for Point of Care Blood... public meeting entitled: Clinical Accuracy Requirements for Point of Care Blood Glucose Meters. The purpose of the public meeting is to discuss the clinical accuracy requirements of blood glucose meters...

  39. NUCLEAR DATA TARGET ACCURACY REQUIREMENTS FOR MA BURNERS

    SciTech Connect

    G. Palmiotti; M. Salvatores

    2011-06-01

    A nuclear data target accuracy assessment has been carried out for two types of transmuters: a critical sodium fast reactor (SFR) and an accelerator driven system (ADMAB). Results are provided for a 7-group energy structure. Considerations about fuel cycle parameter uncertainties illustrate their dependence on the isotope final densities at end of cycle.

  40. 30 CFR 74.8 - Measurement, accuracy, and reliability requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR part 51... COAL MINE SAFETY AND HEALTH COAL MINE DUST SAMPLING DEVICES Requirements for Continuous Personal Dust... requirement. The CPDM shall be capable of measuring respirable dust within the personal breathing zone of...

  41. 30 CFR 74.8 - Measurement, accuracy, and reliability requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR part 51... COAL MINE SAFETY AND HEALTH COAL MINE DUST SAMPLING DEVICES Requirements for Continuous Personal Dust... requirement. The CPDM shall be capable of measuring respirable dust within the personal breathing zone of...

  42. 30 CFR 74.8 - Measurement, accuracy, and reliability requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Register approves this incorporation by reference in accordance with 5 U.S.C. 552(a) and 1 CFR part 51... COAL MINE SAFETY AND HEALTH COAL MINE DUST SAMPLING DEVICES Requirements for Continuous Personal Dust... requirement. The CPDM shall be capable of measuring respirable dust within the personal breathing zone of...

  43. Map accuracy requirements: The cartographic potential of satellite image data

    NASA Technical Reports Server (NTRS)

    Welch, R.

    1982-01-01

    Cartographic products fall into a variety of classes: topographic maps that are concerned with planimetric information and elevations or heights; thematic maps, which might be used for geology, vegetation, water, or to display these subjects; digital elevation maps that would be produced from digital terrain data; and finally image maps. In terms of satellite applications, thematic maps and image maps are emphasized. The objectives are to consider, first, if resolution will be adequate for the identification of control and for the compilation of map products. Then, second, to define map accuracy standards and to determine the potential for meeting these standards with image data from the film camera, scanner and linear array systems of the 1980s.

  44. IMPACT OF ENERGY GROUP STRUCTURE ON NUCLEAR DATA TARGET ACCURACY REQUIREMENTS FOR ADVANCED REACTOR SYSTEMS

    SciTech Connect

    G. Palmiotti; M. Salvatores; H. Hiruta

    2011-06-01

    A target accuracy assessment study using both a fine and a broad energy structure has shown that less stringent nuclear data accuracy requirements are needed for the latter energy structure. However, even though a reduction is observed, the requirements will still be very difficult to meet unless integral experiments are also used to reduce nuclear data uncertainties. Target accuracy assessment is the inverse problem of the uncertainty evaluation. To establish priorities and target accuracies on data uncertainty reduction, a formal approach can be adopted by defining target accuracies on design parameters and finding the required accuracy on data in order to meet them. In fact, the unknown data uncertainty requirements can be obtained by solving a minimization problem in which the sensitivity coefficients, in conjunction with the constraints on the integral parameters, provide the quantities needed to find the solutions.
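
    The minimization problem sketched at the end of this abstract is usually written as a constrained optimization over the unknown data uncertainties. The LaTeX statement below is one common formulation, offered purely as an illustration; the cost weights and the exact constraint form used in the report are not given in the abstract.

      \min_{d_i > 0} \; Q \;=\; \sum_i \frac{\lambda_i}{d_i^{2}}
      \qquad \text{subject to} \qquad
      \sum_i S_{ij}^{2}\, d_i^{2} \;\le\; \bigl(\Delta R_j^{\mathrm{target}}\bigr)^{2}
      \quad \text{for every integral parameter } R_j ,

    where d_i is the required (unknown) uncertainty on nuclear data item i, S_ij the sensitivity coefficient of integral parameter R_j to that item, ΔR_j^target the target accuracy on R_j, and λ_i a cost weight expressing how difficult the datum is to improve.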

  45. 75 FR 29914 - Telecommunications Relay Services, Speech-to-Speech Services, E911 Requirements for IP-Enabled...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-05-28

    ... rules. DATES: The rules published at 73 FR 79683, December 30, 2008, are effective May 28, 2010. FOR... Order and in the Commission's rules at 47 CFR 64.605, FCC 08-275, published at 73 FR 79683, December 30, 2008. The OMB Control Number is 3060-1089. The Commission publishes this document as an announcement...

  46. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    NASA Technical Reports Server (NTRS)

    Downie, John D.; Goodman, Joseph W.

    1989-01-01

    The accuracy requirements of optical processors in adaptive optics systems are determined by estimating the required accuracy in a general optical linear algebra processor (OLAP) that results in a smaller average residual aberration than that achieved with a conventional electronic digital processor with some specific computation speed. Special attention is given to an error analysis of a general OLAP with regard to the residual aberration that is created in an adaptive mirror system by the inaccuracies of the processor, and to the effect of computational speed of an electronic processor on the correction. Results are presented on the ability of an OLAP to compete with a digital processor in various situations.

  47. Accuracy in blood glucose measurement: what will a tightening of requirements yield?

    PubMed

    Heinemann, Lutz; Lodwig, Volker; Freckmann, Guido

    2012-03-01

    Nowadays, almost all persons with diabetes--at least those using antidiabetic drug therapy--use one of a plethora of meters commercially available for self-monitoring of blood glucose. The accuracy of blood glucose (BG) measurement using these meters has been presumed to be adequate; that is, the accuracy of these devices was not usually questioned until recently. Health authorities in the United States (Food and Drug Administration) and in other countries are currently endeavoring to tighten the requirements for the accuracy of these meters above the level that is currently stated in the standard ISO 15197. At first glance, this does not appear to be a problem and is hardly worth further consideration, but a closer look reveals a considerable range of critical aspects that will be discussed in this commentary. In summary, one could say that as a result of modern production methods and ongoing technical advances, the demands placed on the quality of measurement results obtained with BG meters can be increased to a certain degree. One should also take into consideration that the system accuracy (which covers many more aspects than the analytical accuracy) required to make correct therapeutic decisions certainly varies for different types of therapy. In the end, in addition to analytical accuracy, thorough and systematic training of patients and regular refresher training is important to minimize errors. Only under such circumstances will patients make appropriate therapeutic interventions to optimize and maintain metabolic control. PMID:22538158

  48. Accuracy requirements of optical linear algebra processors in adaptive optics imaging systems

    NASA Technical Reports Server (NTRS)

    Downie, John D.

    1990-01-01

    A ground-based adaptive optics imaging telescope system attempts to improve image quality by detecting and correcting for atmospherically induced wavefront aberrations. The required control computations during each cycle will take a finite amount of time. Longer time delays result in larger values of residual wavefront error variance since the atmosphere continues to change during that time. Thus an optical processor may be well-suited for this task. This paper presents a study of the accuracy requirements in a general optical processor that will make it competitive with, or superior to, a conventional digital computer for the adaptive optics application. An optimization of the adaptive optics correction algorithm with respect to an optical processor's degree of accuracy is also briefly discussed.

  49. Accuracy required and achievable in radiotherapy dosimetry: have modern technology and techniques changed our views?

    NASA Astrophysics Data System (ADS)

    Thwaites, David

    2013-06-01

    In this review of the accuracy required and achievable in radiotherapy dosimetry, older approaches and evidence-based estimates for 3DCRT have been reprised, summarising and drawing together the author's earlier evaluations where still relevant. Available evidence for IMRT uncertainties has been reviewed, selecting information from tolerances, QA, verification measurements, in vivo dosimetry and dose delivery audits, to consider whether achievable uncertainties increase or decrease for current advanced treatments and practice. Overall there is some evidence that they tend to increase, but that similar levels should be achievable. Thus it is concluded that those earlier estimates of achievable dosimetric accuracy are still applicable, despite the changes and advances in technology and techniques. The one exception is where there is significant lung involvement, where it is likely that uncertainties have now improved due to widespread use of more accurate heterogeneity models. Geometric uncertainties have improved with the wide availability of IGRT.

  50. Requirements on the Redshift Accuracy for future Supernova and Number Count Surveys

    SciTech Connect

    Huterer, Dragan; Kim, Alex; Broderick, Tamara

    2004-08-09

    We investigate the required redshift accuracy of type Ia supernova and cluster number-count surveys in order for the redshift uncertainties not to contribute appreciably to the dark energy parameter error budget. For the SNAP supernova experiment, we find that, without the assistance of ground-based measurements, individual supernova redshifts would need to be determined to about 0.002 or better, which is a challenging but feasible requirement for a low-resolution spectrograph. However, we find that accurate redshifts for z < 0.1 supernovae, obtained with ground-based experiments, are sufficient to immunize the results against even relatively large redshift errors at high z. For the future cluster number-count surveys such as the South Pole Telescope, Planck or DUET, we find that the purely statistical error in photometric redshift is less important, and that the irreducible, systematic bias in redshift drives the requirements. The redshift bias will have to be kept below 0.001-0.005 per redshift bin (which is determined by the filter set), depending on the sky coverage and details of the definition of the minimal mass of the survey. Furthermore, we find that X-ray surveys have a more stringent required redshift accuracy than Sunyaev-Zeldovich (SZ) effect surveys since they use a shorter lever arm in redshift; conversely, SZ surveys benefit from their high redshift reach only so long as some redshift information is available for distant (z ≳ 1) clusters.

  51. Determining the required accuracy of LST products for estimating surface energy fluxes

    NASA Astrophysics Data System (ADS)

    Pinheiro, A. C.; Reichle, R.; Sujay, K.; Arsenault, K.; Privette, J. L.; Yu, Y.

    2006-12-01

    Land Surface Temperature (LST) is an important parameter to assess the energy state of a surface. Synoptic satellite observations of LST must be used when attempting to estimate fluxes over large spatial scales. Due to the close coupling between LST, root level water availability, and mass and energy fluxes at the surface, LST is particularly useful over agricultural areas to help determine crop water demands and facilitate water management decisions (e.g., irrigation). Further, LST can be assimilated into land surface models to help improve estimates of latent and sensible heat fluxes. However, the accuracy of LST products and its impact on surface flux estimation is not well known. In this study, we quantify the uncertainty limits in LST products for accurately estimating latent heat fluxes over agricultural fields in the Rio Grande River basin of central New Mexico. We use the Community Land Model (CLM) within the Land Information Systems (LIS), and adopt an Ensemble Kalman Filter approach to assimilate the LST fields into the model. We evaluate the LST and assimilation performance against field measurements of evapotranspiration collected at two eddy-covariance towers in semi-arid cropland areas. Our results will help clarify sensor and LST product requirements for future remote sensing systems.
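
    The abstract names an Ensemble Kalman Filter as the assimilation method. The sketch below is a generic, minimal EnKF analysis step for a single LST observation, not the LIS/CLM implementation used in the study; the two-variable state, ensemble size, and error values are assumptions chosen only for illustration.

      import numpy as np

      # Generic, minimal ensemble Kalman filter analysis step for one scalar LST
      # observation. Illustrative only: not the LIS/CLM implementation of the study.
      def enkf_update(X, y_obs, obs_err_std, H, rng):
          """X: (n_state, n_ens) ensemble; H: (n_state,) picks modeled LST out of the state."""
          n_ens = X.shape[1]
          Hx = H @ X                                   # modeled LST per member, shape (n_ens,)
          A = X - X.mean(axis=1, keepdims=True)        # state anomalies
          Ha = Hx - Hx.mean()                          # observation-space anomalies
          innov_var = (Ha @ Ha) / (n_ens - 1) + obs_err_std**2
          cross_cov = (A @ Ha) / (n_ens - 1)           # state-observation covariance
          K = cross_cov / innov_var                    # Kalman gain, shape (n_state,)
          y_pert = y_obs + rng.normal(0.0, obs_err_std, n_ens)   # perturbed observations
          return X + np.outer(K, y_pert - Hx)

      # toy usage: state = [LST (K), top-layer soil moisture], 20 ensemble members
      rng = np.random.default_rng(0)
      X = np.vstack([305.0 + 2.0 * rng.standard_normal(20),
                     0.20 + 0.03 * rng.standard_normal(20)])
      H = np.array([1.0, 0.0])                         # the observation is LST itself
      X_a = enkf_update(X, y_obs=303.0, obs_err_std=1.5, H=H, rng=rng)
      print("prior LST mean:", round(X[0].mean(), 2), "posterior LST mean:", round(X_a[0].mean(), 2))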

  52. The Effects of Noise Masking and Required Accuracy on Speech Errors, Disfluencies, and Self-Repairs.

    ERIC Educational Resources Information Center

    Postma, Albert; Kolk, Herman

    1992-01-01

    This study, involving 32 adult speakers of Dutch, strengthens the covert repair hypothesis of disfluency. It found that emphasis on speech accuracy causes lower speech error rates but does not affect disfluency and self-repair rates, noise masking reduces disfluency and self-repair rates but does not affect speech error numbers, and internal…

  53. The effects of noise masking and required accuracy on speech errors, disfluencies, and self-repairs.

    PubMed

    Postma, A; Kolk, H

    1992-06-01

    The covert repair hypothesis views disfluencies as by-products of covert self-repairs applied to internal speech errors. To test this hypothesis we examined effects of noise masking and accuracy emphasis on speech error, disfluency, and self-repair rates. Noise reduced the numbers of disfluencies and self-repairs but did not affect speech error rates significantly. With accuracy emphasis, speech error rates decreased considerably, but disfluency and self-repair rates did not. With respect to these findings, it is argued that subjects monitor errors with less scrutiny under noise and when accuracy of speaking is unimportant. Consequently, covert and overt repair tendencies drop, a fact that is reflected by changes in disfluency and self-repair rates relative to speech error rates. Self-repair occurrence may be additionally reduced under noise because the information available for error detection--that is, the auditory signal--has also decreased. A qualitative analysis of self-repair patterns revealed that phonemic errors were usually repaired immediately after their intrusion. PMID:1608244

  54. Effects of postural task requirements on the speed-accuracy trade-off.

    PubMed

    Duarte, Marcos; Latash, Mark L

    2007-07-01

    We investigated the speed-accuracy trade-off in a task of pointing with the big toe of the right foot by a standing person that was designed to accentuate the importance of postural adjustments. This was done to test two hypotheses: (1) movement time during foot pointing will scale linearly with ID during target width changes, but the scaling will differ across movement distances; and (2) variations in movement time will be reflected in postural preparations to foot motion. Ten healthy adults stood on the force plate and were instructed to point with the big toe of the right foot at a target (with widths varying from 2 to 10 cm) placed on the floor in front of the subject at a distance varying from 10 to 100 cm. The instruction given to the subjects was typical for Fitts' paradigm: "be as fast and as accurate as possible in your pointing movement". The results have shown that movement time during foot pointing movements scaled with both target distance (D) and target width (W), but the two dependences could not be reduced to a single function of W/D, confirming the first hypothesis. With respect to the second hypothesis, we found that changes in task parameters led to proportional variations in movement speed and indices of variability of the postural adjustments prior to leg movement initiation, confirming the second hypothesis. Both groups of observations were valid over the whole range of distances despite the switch of the movement strategy in the middle of this range. We conclude that the speed-accuracy trade-off in a task with postural adjustments originates at the level of movement planning. The different dependences of movement time on D and W may be related to spontaneous postural sway (migration of the point of application of the resultant force acting on the body of the standing person). The results may have practical implications for posture and gait rehabilitation techniques that use modifications of stepping accuracy. PMID:17273871

  55. Photometric and positional accuracy of the PDS Bonn in view of astronomical requirements.

    NASA Astrophysics Data System (ADS)

    Becker, H. J.

    The PDS 1010A of the Astronomical Institutes of the University of Bonn is the PDS version characterized by the 10x10 inch measuring field, density range 0...5, and 4 cm/s maximum scanning speed. The system is controlled by a PDP 8/m computer, which is now being replaced by a link to the VAX computer of the institute with image processing facilities. Until now, PDS measurements were stored on magnetic tape and reduced off-line at the CYBER 172 of the Max-Planck-Institut für Radioastronomie. In 1977 extensive tests of the PDS performance and accuracy were begun. Since then the system has been used in spectroscopic studies, astrometry, and two-dimensional photometry.

  56. Assessment of Required Accuracy of Digital Elevation Data for Hydrologic Modeling

    NASA Technical Reports Server (NTRS)

    Kenward, T.; Lettenmaier, D. P.

    1997-01-01

    The effect of vertical accuracy of Digital Elevation Models (DEMs) on hydrologic models is evaluated by comparing three DEMs and resulting hydrologic model predictions applied to a 7.2 sq km USDA-ARS watershed at Mahantango Creek, PA. The high resolution (5 m) DEM was resampled to a 30 m resolution using a method that constrained the spatial structure of the elevations to be comparable with the USGS and SIR-C DEMs. The resulting 30 m DEM was used as the reference product for subsequent comparisons. Spatial fields of directly derived quantities, such as elevation differences, slope, and contributing area, were compared to the reference product, as were hydrologic model output fields derived using each of the three DEMs at the common 30 m spatial resolution.

  57. Accuracy requirements to test the applicability of the random cascade model to supersonic turbulence

    NASA Astrophysics Data System (ADS)

    Folini, Doris; Walder, Rolf

    2016-03-01

    A model, which is widely used for inertial range statistics of supersonic turbulence in the context of molecular clouds and star formation, expresses (measurable) relative scaling exponents Zp of two-point velocity statistics as a function of two parameters, β and Δ. The model relates them to the dimension D of the most dissipative structures, D = 3 - Δ/(1 - β). While this description has proved most successful for incompressible turbulence (β = Δ = 2/3, and D = 1), its applicability in the highly compressible regime remains debated. For this regime, theoretical arguments suggest D = 2 and Δ = 2/3, or Δ = 1. Best estimates based on 3D periodic box simulations of supersonic isothermal turbulence yield Δ = 0.71 and D = 1.9, with uncertainty ranges of Δ ∈ [0.67,0.78] and D ∈ [2.04,1.60]. With these 5-10% uncertainty ranges just marginally including the theoretical values of Δ = 2/3 and D = 2, doubts remain whether the model indeed applies and, if it applies, for what values of β and Δ. We use a Monte Carlo approach to mimic actual simulation data and examine what factors are most relevant for the fit quality. We estimate that 0.1% (0.05%) accurate Zp, with p = 1,...,5, should allow for 2% (1%) accurate estimates of β and Δ in the highly compressible regime, but not in the mildly compressible regime. We argue that simulation-based Zp with such accuracy are within reach of today's computer resources. If this kind of data does not allow for the expected high quality fit of β and Δ, then this may indicate the inapplicability of the model for the simulation data. In fact, other models than the one we examine here have been suggested.
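
    The only formula quoted in this abstract, D = 3 - Δ/(1 - β), can be checked directly against the parameter values it cites; the small Python helper below does so (the inversion for β is simple algebra on that same relation).

      # Checking the abstract's relation D = 3 - Delta / (1 - beta) against the
      # parameter values it quotes; beta_from() is a simple algebraic inversion.
      def dimension(beta, delta):
          return 3.0 - delta / (1.0 - beta)

      def beta_from(delta, D):
          return 1.0 - delta / (3.0 - D)

      print(f"D    = {dimension(beta=2/3, delta=2/3):.3f}")   # incompressible case: D = 1
      print(f"beta = {beta_from(delta=2/3, D=2.0):.3f}")      # compressible theory (D = 2): beta = 1/3
      print(f"beta = {beta_from(delta=0.71, D=1.9):.3f}")     # best-fit estimates quoted above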

  58. Representative input load of antibiotics to WWTPs: Predictive accuracy and determination of a required sampling quantity.

    PubMed

    Marx, Conrad; Mühlbauer, Viktoria; Schubert, Sara; Oertel, Reinhard; Ahnert, Markus; Krebs, Peter; Kuehn, Volker

    2015-06-01

    Predicting the input loads of antibiotics to wastewater treatment plants (WWTP) using certain input data (e.g. prescriptions) is a reasonable method if no analytical data is available. Besides the spatiotemporal uncertainties of the projection itself, only a few studies exist to confirm the suitability of required excretion data from literature. Prescription data with a comparatively high resolution and a sampling campaign covering 15 months were used to answer the question of applicability of the prediction approach. As a result, macrolides, sulfamethoxazole and trimethoprim were almost fully recovered close to 100% of the expected input loads. Nearly all substances of the beta-lactam family exhibit high elimination rates during the wastewater transport in the sewer system with a low recovery rate at the WWTP. The measured input loads of cefuroxime, ciprofloxacin and levofloxacin fluctuated greatly through the year which was not obvious from relatively constant prescribed amounts. The latter substances are an example that available data are not per se sufficient to monitor the actual release into the environment. Furthermore, the extensive data pool of this study was used to calculate the necessary number of samples to determine a representative annual mean load to the WWTP. For antibiotics with low seasonality and low input scattering a minimum of about 10 samples is required. In the case of antibiotics exhibiting fluctuating input loads 30 to 40 evenly distributed samples are necessary for a representative input determination. As a high level estimate, a minimum number of 20-40 samples per year is proposed to reasonably estimate a representative annual input load of antibiotics and other micropollutants. PMID:25776917

  59. Improving Ocean Color Data Products using a Purely Empirical Approach: Reducing the Requirement for Radiometric Calibration Accuracy

    NASA Technical Reports Server (NTRS)

    Gregg, Watson

    2008-01-01

    Radiometric calibration is the foundation upon which ocean color remote sensing is built. Quality derived geophysical products, such as chlorophyll, are assumed to be critically dependent upon the quality of the radiometric calibration. Unfortunately, the goals of radiometric calibration are not typically met in global and large-scale regional analyses, and are especially deficient in coastal regions. The consequences of the uncertainty in calibration are very large in terms of global and regional ocean chlorophyll estimates. In fact, stability in global chlorophyll requires calibration uncertainty much greater than the goals, and outside of modern capabilities. Using a purely empirical approach, we show that stable and consistent global chlorophyll values can be achieved over very wide ranges of uncertainty. Furthermore, the approach yields statistically improved comparisons with in situ data, suggesting improved quality. The results suggest that accuracy requirements for radiometric calibration can be reduced if alternative empirical approaches are used.

  60. An analysis of approach navigation accuracy and guidance requirements for the grand tour mission to the outer planets

    NASA Technical Reports Server (NTRS)

    Jones, D. W.

    1971-01-01

    The navigation and guidance process for the Jupiter, Saturn and Uranus planetary encounter phases of the 1977 Grand Tour interior mission was simulated. Reference approach navigation accuracies were defined and the relative information content of the various observation types were evaluated. Reference encounter guidance requirements were defined, sensitivities to assumed simulation model parameters were determined and the adequacy of the linear estimation theory was assessed. A linear sequential estimator was used to provide an estimate of the augmented state vector, consisting of the six state variables of position and velocity plus the three components of a planet position bias. The guidance process was simulated using a nonspherical model of the execution errors. Computation algorithms which simulate the navigation and guidance process were derived from theory and implemented into two research-oriented computer programs, written in FORTRAN.

  61. Effect of terminal accuracy requirements on temporal gaze-hand coordination during fast discrete and reciprocal pointings

    PubMed Central

    2011-01-01

    Background: Rapid discrete goal-directed movements are characterized by a well known coordination pattern between the gaze and the hand displacements. The gaze always starts prior to the hand movement and reaches the target before hand velocity peak. Surprisingly, the effect of the target size on the temporal gaze-hand coordination has not been directly investigated. Moreover, goal-directed movements are often produced in a reciprocal rather than in a discrete manner. The objectives of this work were to assess the effect of the target size on temporal gaze-hand coordination during fast 1) discrete and 2) reciprocal pointings. Methods: Subjects performed fast discrete (experiment 1) and reciprocal (experiment 2) pointings with an amplitude of 50 cm and four target diameters (7.6, 3.8, 1.9 and 0.95 cm) leading to indexes of difficulty (ID = log2[2A/D]) of 3.7, 4.7, 5.7 and 6.7 bits. Gaze and hand displacements were synchronously recorded. Temporal gaze-hand coordination parameters were compared between experiments (discrete and reciprocal pointings) and IDs using analyses of variance (ANOVAs). Results: Data showed that the magnitude of the gaze-hand lead pattern was much higher for discrete than for reciprocal pointings. Moreover, while it was constant for discrete pointings, it decreased systematically with an increasing ID for reciprocal pointings because of the longer duration of gaze anchoring on target. Conclusion: Overall, the temporal gaze-hand coordination analysis revealed that even for high IDs, fast reciprocal pointings could not be considered as a concatenation of discrete units. Moreover, our data clearly illustrate the smooth adaptation of temporal gaze-hand coordination to terminal accuracy requirements during fast reciprocal pointings. It will be interesting for further research to investigate if the methodology used in experiment 2 allows assessing the effect of sensori-motor deficits on gaze-hand coordination. PMID:21320315
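
    The indexes of difficulty quoted in the Methods can be reproduced from the stated amplitude and target diameters; the short Python sketch below does exactly that.

      from math import log2

      # Reproducing the indexes of difficulty quoted in the Methods:
      # ID = log2(2A / D), with amplitude A = 50 cm and the four target diameters D.
      A = 50.0                            # movement amplitude, cm
      for D in (7.6, 3.8, 1.9, 0.95):     # target diameters, cm
          print(f"D = {D:>4} cm  ->  ID = {log2(2 * A / D):.1f} bits")
      # prints 3.7, 4.7, 5.7 and 6.7 bits, matching the abstract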

  62. Estimating Temperature Retrieval Accuracy Associated With Thermal Band Spatial Resolution Requirements for Center Pivot Irrigation Monitoring and Management

    NASA Technical Reports Server (NTRS)

    Ryan, Robert E.; Irons, James; Spruce, Joseph P.; Underwood, Lauren W.; Pagnutti, Mary

    2006-01-01

    This study explores the use of synthetic thermal center pivot irrigation scenes to estimate temperature retrieval accuracy for thermal remote sensed data, such as data acquired from current and proposed Landsat-like thermal systems. Center pivot irrigation is a common practice in the western United States and in other parts of the world where water resources are scarce. Wide-area ET (evapotranspiration) estimates and reliable water management decisions depend on accurate temperature information retrieval from remotely sensed data. Spatial resolution, sensor noise, and the temperature step between a field and its surrounding area impose limits on the ability to retrieve temperature information. Spatial resolution is an interrelationship between GSD (ground sample distance) and a measure of image sharpness, such as edge response or edge slope. Edge response and edge slope are intuitive, and direct measures of spatial resolution are easier to visualize and estimate than the more common Modulation Transfer Function or Point Spread Function. For these reasons, recent data specifications, such as those for the LDCM (Landsat Data Continuity Mission), have used GSD and edge response to specify spatial resolution. For this study, we have defined a 400-800 m diameter center pivot irrigation area with a large 25 K temperature step associated with a 300 K well-watered field surrounded by an infinite 325 K dry area. In this context, we defined the benchmark problem as an easily modeled, highly common stressing case. By parametrically varying GSD (30-240 m) and edge slope, we determined the number of pixels and field area fraction that meet a given temperature accuracy estimate for 400-m, 600-m, and 800-m diameter field sizes. Results of this project will help assess the utility of proposed specifications for the LDCM and other future thermal remote sensing missions and for water resource management.
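
    The benchmark described above (a cool irrigated disk in a warm background, observed at a given GSD) is easy to mimic numerically. The Python sketch below is a deliberately simplified 2-D illustration, not the study's simulation: the Gaussian PSF width (half the GSD), the 600 m field diameter, and the 1 K accuracy criterion are assumptions made only for this example.

      import numpy as np
      from scipy.ndimage import gaussian_filter

      # Simplified 2-D illustration of the benchmark scene described above: a 300 K
      # irrigated disk in a 325 K background, blurred by a Gaussian PSF and sampled
      # at the sensor GSD. PSF width (0.5 * GSD), field diameter (600 m) and the
      # 1 K accuracy criterion are assumptions of this sketch, not study parameters.
      def usable_pixels(gsd_m, field_diam_m=600.0, psf_sigma_frac=0.5,
                        t_field=300.0, t_background=325.0, tol_k=1.0, fine_res_m=5.0):
          n = int(4 * field_diam_m / fine_res_m)               # scene spans 4 field diameters
          y, x = np.mgrid[0:n, 0:n] * fine_res_m
          r = np.hypot(x - x.mean(), y - y.mean())             # distance from field center
          scene = np.where(r <= field_diam_m / 2, t_field, t_background)
          blurred = gaussian_filter(scene, sigma=psf_sigma_frac * gsd_m / fine_res_m)
          step = int(round(gsd_m / fine_res_m))                # resample to sensor pixels
          pix = blurred[step // 2::step, step // 2::step]
          r_pix = r[step // 2::step, step // 2::step]
          in_field = pix[r_pix <= field_diam_m / 2]
          ok = np.abs(in_field - t_field) <= tol_k
          return ok.sum(), (ok.mean() if ok.size else 0.0)

      for gsd in (30, 60, 120, 240):                           # metres
          n_ok, frac = usable_pixels(gsd)
          print(f"GSD {gsd:>3} m: {n_ok} field pixels retrieved within 1 K ({100 * frac:.0f}%)")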

  63. Achieving Accuracy Requirements for Forest Biomass Mapping: A Data Fusion Method for Estimating Forest Biomass and LiDAR Sampling Error with Spaceborne Data

    NASA Technical Reports Server (NTRS)

    Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.

    2012-01-01

    The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional-global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100 m, 250 m, 500 m, and 1 km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone. This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data are most useful for estimating AGB when measurements from LiDAR are limited because they minimized

  64. Survey mirrors and lenses and their required surface accuracy. Volume 1. Technical report. Final report for September 15, 1978-December 1, 1979

    SciTech Connect

    Beesing, M. E.; Buchholz, R. L.; Evans, R. A.; Jaminski, R. W.; Mathur, A. K.; Rausch, R. A.; Scarborough, S.; Smith, G. A.; Waldhauer, D. J.

    1980-01-01

    An investigation of the optical performance of a variety of concentrating solar collectors is reported. The study addresses two important issues: the accuracy of reflective or refractive surfaces required to achieve specified performance goals, and the effect of environmental exposure on the performance of concentrators. To assess the importance of surface accuracy on optical performance, 11 tracking and nontracking concentrator designs were selected for detailed evaluation. Mathematical models were developed for each design and incorporated into a Monte Carlo ray trace computer program to carry out detailed calculations. Results for the 11 concentrators are presented in graphic form. The models and computer program are provided along with a user's manual. A survey data base was established on the effect of environmental exposure on the optical degradation of mirrors and lenses. Information on environmental and maintenance effects was found to be insufficient to permit specific recommendations for operating and maintenance procedures, but the available information is compiled and reported and does contain procedures that other workers have found useful.
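
    The report's analysis tool is described only as a Monte Carlo ray-trace program. The Python sketch below shows the idea on a minimal 2-D parabolic-trough cross-section with Gaussian slope error; the geometry, the on-axis point-sun model, and purely specular reflection are assumptions of this illustration, not values taken from the report.

      import numpy as np

      # Minimal Monte Carlo ray-trace sketch: intercept factor of a 2-D parabolic
      # trough cross-section as a function of Gaussian mirror slope error. The
      # 2 m aperture, 1 m focal length and 2 cm receiver half-width are assumed.
      def intercept_factor(slope_error_mrad, aperture=2.0, focal_len=1.0,
                           receiver_half_width=0.02, n_rays=200_000, seed=1):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-aperture / 2, aperture / 2, n_rays)      # hit points on mirror
          y = x**2 / (4 * focal_len)                                # parabola y = x^2 / 4f
          slope = np.arctan(x / (2 * focal_len))                    # ideal surface slope angle
          slope += rng.normal(0.0, slope_error_mrad * 1e-3, n_rays) # add slope error (rad)
          # vertical incoming rays reflect into direction angle (pi/2 + 2*slope) from +x
          phi = np.pi / 2 + 2 * slope
          t = (focal_len - y) / np.sin(phi)                         # propagate to focal plane
          x_focal = x + t * np.cos(phi)                             # miss distance from focus
          return np.mean(np.abs(x_focal) <= receiver_half_width)

      for sigma in (0, 2, 4, 8, 16):                                # slope error in mrad
          print(f"slope error {sigma:>2} mrad -> intercept factor {intercept_factor(sigma):.3f}")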

  65. High-Capacity Communications from Martian Distances Part 4: Assessment of Spacecraft Pointing Accuracy Capabilities Required For Large Ka-Band Reflector Antennas

    NASA Technical Reports Server (NTRS)

    Hodges, Richard E.; Sands, O. Scott; Huang, John; Bassily, Samir

    2006-01-01

    Improved surface accuracy for deployable reflectors has brought with it the possibility of Ka-band reflector antennas with extents on the order of 1000 wavelengths. Such antennas are being considered for high-rate data delivery from planetary distances. Maintaining losses at reasonable levels requires a sufficiently capable Attitude Determination and Control System (ADCS) onboard the spacecraft. This paper provides an assessment of currently available ADCS strategies and performance levels. In addition to other issues, specific factors considered include: (1) use of "beaconless" or open loop tracking versus use of a beacon on the Earth side of the link, and (2) selection of fine pointing strategy (body-fixed/spacecraft pointing, reflector pointing or various forms of electronic beam steering). Capabilities of recent spacecraft are discussed.

  66. Quantitative assessment of the accuracy of dose calculation using pencil beam and Monte Carlo algorithms and requirements for clinical quality assurance

    SciTech Connect

    Ali, Imad; Ahmad, Salahuddin

    2013-10-01

    The purpose of this study was to compare the doses calculated using the BrainLAB pencil beam (PB) and Monte Carlo (MC) algorithms for tumors located in various sites including the lung and to evaluate quality assurance procedures required for the verification of the accuracy of dose calculation. The dose-calculation accuracy of PB and MC was also assessed quantitatively by measurement using an ionization chamber and Gafchromic films placed in solid water and heterogeneous phantoms. The dose was calculated using PB convolution and MC algorithms in the iPlan treatment planning system from BrainLAB. The dose calculation was performed on the patient's computed tomography images with lesions in various treatment sites including 5 lungs, 5 prostates, 4 brains, 2 head and necks, and 2 paraspinal tissues. A combination of conventional, conformal, and intensity-modulated radiation therapy plans was used in dose calculation. The leaf sequence from intensity-modulated radiation therapy plans or beam shapes from conformal plans and monitor units and other planning parameters calculated by the PB were identical for calculating dose with MC. Heterogeneity correction was considered in both PB and MC dose calculations. Dose-volume parameters such as V95 (volume covered by 95% of prescription dose), dose distributions, and gamma analysis were used to evaluate the calculated dose by PB and MC. The measured doses by ionization chamber and EBT GAFCHROMIC film in solid water and heterogeneous phantoms were used to quantitatively assess the accuracy of dose calculated by PB and MC. The dose-volume histograms and dose distributions calculated by PB and MC in the brain, prostate, paraspinal, and head and neck were in good agreement with one another (within 5%) and provided acceptable planning target volume coverage. However, dose distributions of the patients with lung cancer had large discrepancies. For a plan optimized with PB, the dose coverage was shown as clinically acceptable, whereas in reality, the MC showed a
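
    The gamma analysis mentioned above combines a dose-difference criterion with a distance-to-agreement criterion. The sketch below is a minimal 1-D, brute-force gamma index with assumed 3%/3 mm criteria and globally normalized dose, offered only to illustrate the concept; it is not the evaluation tool used in the study.

      import numpy as np

      # Minimal 1-D gamma index (global normalization): for each reference point,
      # search all evaluated points for the smallest combined dose-difference /
      # distance-to-agreement metric. The 3%/3 mm criteria are assumptions.
      def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_crit=0.03, dist_crit_mm=3.0):
          dd = (d_eval[None, :] - d_ref[:, None]) / (dose_crit * d_ref.max())
          dx = (x_eval[None, :] - x_ref[:, None]) / dist_crit_mm
          return np.sqrt(dd**2 + dx**2).min(axis=1)

      # toy profiles: an "evaluated" profile shifted by 2 mm and scaled by 2%
      x = np.linspace(-50.0, 50.0, 501)                 # position in mm
      ref = np.exp(-(x / 20.0) ** 2)                    # reference profile (e.g., PB)
      ev = 1.02 * np.exp(-((x - 2.0) / 20.0) ** 2)      # evaluated profile (e.g., MC or film)
      g = gamma_1d(x, ref, x, ev)
      print(f"gamma pass rate (gamma <= 1): {100 * np.mean(g <= 1):.1f}%")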

  67. Geocoding accuracy and the recovery of relationships between environmental exposures and health

    PubMed Central

    Mazumdar, Soumya; Rushton, Gerard; Smith, Brian J; Zimmerman, Dale L; Donham, Kelley J

    2008-01-01

    Background: This research develops methods for determining the effect of geocoding quality on relationships between environmental exposures and health. The likelihood of detecting an existing relationship – statistical power – between measures of environmental exposures and health depends not only on the strength of the relationship but also on the level of positional accuracy and completeness of the geocodes from which the measures of environmental exposure are made. This paper summarizes the results of simulation studies conducted to examine the impact of inaccuracies of geocoded addresses generated by three types of geocoding processes: a) addresses located on orthophoto maps, b) addresses matched to TIGER files (U.S. Census or their derivative street files); and, c) addresses from E-911 geocodes (developed by local authorities for emergency dispatch purposes). Results: The simulated odds of disease using exposures modelled from the highest quality geocodes could be sufficiently recovered using other, more commonly used, geocoding processes such as TIGER and E-911; however, the strength of the odds relationship between disease exposures modelled at geocodes generally declined with decreasing geocoding accuracy. Conclusion: Although these specific results cannot be generalized to new situations, the methods used to determine the sensitivity of results can be used in new situations. Estimated measures of positional accuracy must be used in the interpretation of results of analyses that investigate relationships between health outcomes and exposures measured at residential locations. Analyses similar to those employed in this paper can be used to validate interpretation of results from empirical analyses that use geocoded locations with estimated measures of positional accuracy. PMID:18387189

  8. How much detail and accuracy is required in plant growth sub-models to address questions about optimal management strategies in agricultural systems?

    PubMed Central

    Renton, Michael

    2011-01-01

    Background and aims Simulations that integrate sub-models of important biological processes can be used to ask questions about optimal management strategies in agricultural and ecological systems. Building sub-models with more detail and aiming for greater accuracy and realism may seem attractive, but is likely to be more expensive and time-consuming and result in more complicated models that lack transparency. This paper illustrates a general integrated approach for constructing models of agricultural and ecological systems that is based on the principle of starting simple and then directly testing for the need to add additional detail and complexity. Methodology The approach is demonstrated using LUSO (Land Use Sequence Optimizer), an agricultural system analysis framework based on simulation and optimization. A simple sensitivity analysis and functional perturbation analysis is used to test to what extent LUSO's crop–weed competition sub-model affects the answers to a number of questions at the scale of the whole farming system regarding optimal land-use sequencing strategies and resulting profitability. Principal results The need for accuracy in the crop–weed competition sub-model within LUSO depended to a small extent on the parameter being varied, but more importantly and interestingly on the type of question being addressed with the model. Only a small part of the crop–weed competition model actually affects the answers to these questions. Conclusions This study illustrates an example application of the proposed integrated approach for constructing models of agricultural and ecological systems based on testing whether complexity needs to be added to address particular questions of interest. We conclude that this example clearly demonstrates the potential value of the general approach. Advantages of this approach include minimizing costs and resources required for model construction, keeping models transparent and easy to analyse, and ensuring the model
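
    The "start simple, then test whether detail matters" approach described above can be illustrated, outside of LUSO itself, by a generic one-at-a-time perturbation of a sub-model parameter followed by inspection of how much a whole-system output changes; the toy model and values below are hypothetical.

    ```python
    # Generic one-at-a-time sensitivity sketch (not LUSO): perturb a sub-model
    # parameter and observe the change in a whole-system output.
    def farm_profit(weed_competition_coeff: float) -> float:
        # Toy whole-farm model: profit falls as crop-weed competition rises.
        yield_t_ha = 3.0 / (1.0 + weed_competition_coeff)
        return yield_t_ha * 250.0 - 400.0         # $/ha, hypothetical price and costs

    baseline = 0.4
    for perturbation in (-0.2, 0.0, 0.2):         # -20%, baseline, +20%
        coeff = baseline * (1.0 + perturbation)
        print(f"coeff {coeff:.2f} -> profit {farm_profit(coeff):.0f} $/ha")
    ```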

  9. Comparison of total energy expenditure between the farming season and off farming season and accuracy assessment of estimated energy requirement prediction equation of Korean farmers

    PubMed Central

    Yeon, Seo-Eun; Lee, Sun-Hee; Choe, Jeong-Sook

    2015-01-01

    BACKGROUND/OBJECTIVES The purposes of this study were to compare total energy expenditure (including PAL and RMR) of Korean farmers between the farming season and off farming season and to assess the accuracy of the estimated energy requirement (EER) prediction equation reported in the KDRIs. SUBJECTS/METHODS Subjects were 72 Korean farmers (males 23, females 49) aged 30-64 years. Total energy expenditure was calculated by multiplying measured RMR by PAL. EER was calculated by using the prediction equation suggested in the 2010 KDRIs. RESULTS The physical activity level (PAL) was significantly higher (P < 0.05) in the farming season (male 1.77 ± 0.22, female 1.69 ± 0.24) than in the off farming season (male 1.53 ± 0.32, female 1.52 ± 0.19). However, the resting metabolic rate was significantly higher (P < 0.05) in the off farming season (male 1,890 ± 233 kcal/day, female 1,446 ± 140 kcal/day) compared to the farming season (male 1,727 ± 163 kcal/day, female 1,356 ± 164 kcal/day). TEE of females was significantly higher in the farming season (2,304 ± 497 kcal/day) than in the off farming season (2,183 ± 389 kcal/day), but in males there was no significant difference in TEE between the two seasons. On the other hand, the EERs of males and females in the farming season (2,825 ± 354 kcal/day and 2,115 ± 293 kcal/day) were significantly higher (P < 0.05) than those in the off farming season (2,562 ± 339 kcal/day and 1,994 ± 224 kcal/day). CONCLUSIONS This study indicates that there is a significant difference in the PAL and TEE of farmers between the farming and off farming seasons. The EER prediction equation proposed in the 2010 KDRIs underestimated TEE; thus, the EER prediction equation for farmers should be reviewed. PMID:25671071
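
    The central calculation reported above, TEE as measured RMR multiplied by PAL, and its comparison with the predicted EER, can be sketched directly from the reported farming-season means for male farmers; the helper functions are hypothetical, and the KDRIs prediction equation itself is not reproduced here.

    ```python
    # Minimal sketch: TEE = measured RMR x PAL, compared against the predicted EER.
    # Numbers are the farming-season means for male farmers quoted in the abstract.
    def total_energy_expenditure(rmr_kcal: float, pal: float) -> float:
        return rmr_kcal * pal

    def percent_difference(predicted: float, measured: float) -> float:
        return 100.0 * (predicted - measured) / measured

    rmr, pal, eer = 1727.0, 1.77, 2825.0       # kcal/day, unitless, kcal/day
    tee = total_energy_expenditure(rmr, pal)   # roughly 3,057 kcal/day
    print(f"TEE = {tee:.0f} kcal/day; EER deviates by {percent_difference(eer, tee):.1f}%")
    ```

    With these reported means the predicted EER comes out several percent below the measured TEE, in line with the underestimation noted in the conclusions.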

  10. Analytical Performance Requirements for Systems for Self-Monitoring of Blood Glucose With Focus on System Accuracy: Relevant Differences Among ISO 15197:2003, ISO 15197:2013, and Current FDA Recommendations.

    PubMed

    Freckmann, Guido; Schmid, Christina; Baumstark, Annette; Rutschmann, Malte; Haug, Cornelia; Heinemann, Lutz

    2015-07-01

    In the European Union (EU), the ISO (International Organization for Standardization) 15197 standard is applicable for the evaluation of systems for self-monitoring of blood glucose (SMBG) before the market approval. In 2013, a revised version of this standard was published. Relevant revisions in the analytical performance requirements are the inclusion of the evaluation of influence quantities, for example, hematocrit, and some changes in the testing procedures for measurement precision and system accuracy evaluation, for example, number of test strip lots. Regarding system accuracy evaluation, the most important change is the inclusion of more stringent accuracy criteria. In 2014, the Food and Drug Administration (FDA) in the United States published their own guidance document for the premarket evaluation of SMBG systems with even more stringent system accuracy criteria than stipulated by ISO 15197:2013. The establishment of strict accuracy criteria applicable for the premarket evaluation is a possible approach to further improve the measurement quality of SMBG systems. However, the system accuracy testing procedure is quite complex, and some critical aspects, for example, systematic measurement difference between the reference measurement procedure and a higher-order procedure, may potentially limit the apparent accuracy of a given system. Therefore, the implementation of a harmonized reference measurement procedure for which traceability to standards of higher order is verified through an unbroken, documented chain of calibrations is desirable. In addition, the establishment of regular and standardized post-marketing evaluations of distributed test strip lots should be considered as an approach toward an improved measurement quality of available SMBG systems. PMID:25872965
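
    As an illustration of how such accuracy criteria are applied, the sketch below checks paired meter/reference readings against the commonly cited ISO 15197:2013 limits (at least 95% of results within ±15 mg/dL of the reference below 100 mg/dL, and within ±15% at or above 100 mg/dL); the criteria are quoted from common summaries of the standard rather than from this article, and the sample readings are hypothetical.

    ```python
    # Sketch of an ISO 15197:2013-style system accuracy check for SMBG readings,
    # assuming the commonly cited +/-15 mg/dL (below 100 mg/dL) and +/-15% limits.
    def within_limits(meter_mgdl: float, reference_mgdl: float) -> bool:
        if reference_mgdl < 100.0:
            return abs(meter_mgdl - reference_mgdl) <= 15.0
        return abs(meter_mgdl - reference_mgdl) <= 0.15 * reference_mgdl

    def passes_system_accuracy(pairs) -> bool:
        hits = sum(within_limits(m, r) for m, r in pairs)
        return hits / len(pairs) >= 0.95

    # Hypothetical paired (meter, reference) readings in mg/dL:
    pairs = [(92, 88), (105, 118), (143, 150), (60, 74), (210, 201)]
    print(passes_system_accuracy(pairs))
    ```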

  11. Acoustic environmental accuracy requirements for response determination

    NASA Technical Reports Server (NTRS)

    Pettitt, M. R.

    1983-01-01

    A general purpose computer program was developed for the prediction of vehicle interior noise. This program, named VIN, has both modal and statistical energy analysis capabilities for structural/acoustic interaction analysis. The analytic models and their computer implementation were verified through simple test cases with well-defined experimental results. The model was also applied in a space shuttle payload bay launch acoustics prediction study. The computer program processes large and small problems with equal efficiency because all arrays are dynamically sized by program input variables at run time. A data base is built and easily accessed for design studies. The data base significantly reduces the computational costs of such studies by allowing the reuse of the still-valid calculated parameters of previous iterations.

  12. RF propagation simulator to predict location accuracy of GSM mobile phones for emergency applications

    NASA Astrophysics Data System (ADS)

    Green, Marilynn P.; Wang, S. S. Peter

    2002-11-01

    Mobile location is one of the fastest growing areas for the development of new technologies, services and applications. This paper describes the channel models that were developed as a basis of discussion to assist the Technical Subcommittee T1P1.5 in its consideration of various mobile location technologies for emergency applications (1997 - 1998) for presentation to the U.S. Federal Communication Commission (FCC). It also presents the PCS 1900 extension to this model, which is based on the COST-231 extended Hata model and review of the original Okumura graphical interpretation of signal propagation characteristics in different environments. Based on a wide array of published (and non-publicly disclosed) empirical data, the signal propagation models described in this paper were all obtained by consensus of a group of inter-company participants in order to facilitate the direct comparison between simulations of different handset-based and network-based location methods prior to their standardization for emergency E-911 applications by the FCC. Since that time, this model has become a de-facto standard for assessing the positioning accuracy of different location technologies using GSM mobile terminals. In this paper, the radio environment is described to the level of detail that is necessary to replicate it in a software environment.
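
    The COST-231 extended Hata model mentioned above has a widely published closed form; a minimal sketch is given below, with the usual small/medium-city mobile-antenna correction and illustrative parameter values (the T1P1.5 consensus model itself contains additional environment-specific adjustments that are not reproduced here).

    ```python
    # Sketch of the COST-231 extended Hata median path loss (roughly 1500-2000 MHz).
    # Parameter values are illustrative only.
    import math

    def cost231_hata_path_loss(f_mhz, d_km, h_base_m, h_mobile_m, metropolitan=False):
        """Median urban path loss in dB."""
        a_hm = (1.1 * math.log10(f_mhz) - 0.7) * h_mobile_m \
               - (1.56 * math.log10(f_mhz) - 0.8)        # mobile antenna correction
        c_m = 3.0 if metropolitan else 0.0
        return (46.3 + 33.9 * math.log10(f_mhz)
                - 13.82 * math.log10(h_base_m) - a_hm
                + (44.9 - 6.55 * math.log10(h_base_m)) * math.log10(d_km)
                + c_m)

    print(f"{cost231_hata_path_loss(1900.0, 2.0, 30.0, 1.5):.1f} dB")
    ```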

  13. Relative accuracy evaluation.

    PubMed

    Zhang, Yan; Wang, Hongzhi; Yang, Zhongsheng; Li, Jianzhong

    2014-01-01

    The quality of data plays an important role in business analysis and decision making, and data accuracy is an important aspect of data quality. Thus one necessary task for data quality management is to evaluate the accuracy of the data. In order to address the problem that the accuracy of a whole data set may be low while a useful part of it is high, it is also necessary to evaluate the accuracy of query results, called relative accuracy. However, as far as we know, neither a suitable metric nor effective methods for such accuracy evaluation have been proposed. Motivated by this, we propose a systematic method for relative accuracy evaluation. We design a relative accuracy evaluation framework for relational databases based on a new metric that measures accuracy using statistics. We apply the methods to evaluate the precision and recall of basic queries, which reflect the results' relative accuracy. We also propose methods to handle data updates and to improve accuracy evaluation using functional dependencies. Extensive experimental results show the effectiveness and efficiency of our proposed framework and algorithms. PMID:25133752
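
    At a high level, the relative accuracy of a query result can be summarized by precision and recall against a trusted reference; the sketch below illustrates only that comparison and is not the authors' framework (row identifiers are hypothetical).

    ```python
    # Illustration: precision and recall of returned rows against a reference set.
    def precision_recall(returned: set, reference: set):
        true_positives = len(returned & reference)
        precision = true_positives / len(returned) if returned else 1.0
        recall = true_positives / len(reference) if reference else 1.0
        return precision, recall

    returned_ids = {1, 2, 3, 5, 8}     # rows returned by the query (hypothetical)
    reference_ids = {1, 2, 3, 4, 8}    # rows known to be correct (hypothetical)
    print(precision_recall(returned_ids, reference_ids))   # (0.8, 0.8)
    ```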

  15. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    LRO definitive and predictive accuracy requirements were easily met in the nominal mission orbit using the LP150Q lunar gravity model. Accuracy of the LP150Q model is poorer in the extended mission elliptical orbit. Later lunar gravity models, in particular GSFC-GRAIL-270, improve OD accuracy in the extended mission. Implementation of a constrained plane when the orbit is within 45 degrees of the Earth-Moon line improves cross-track accuracy. Prediction accuracy is still challenged during full-Sun periods due to coarse spacecraft area modeling; implementation of a multi-plate area model with definitive attitude input can eliminate prediction violations, and the FDF is evaluating the use of analytic and predicted attitude modeling to improve full-Sun prediction accuracy. Comparison of FDF ephemeris files to high-precision ephemeris files provides gross confirmation that overlap comparisons properly assess orbit accuracy.

  16. GEOSPATIAL DATA ACCURACY ASSESSMENT

    EPA Science Inventory

    The development of robust accuracy assessment methods for the validation of spatial data represents a difficult scientific challenge for the geospatial science community. The importance and timeliness of this issue are related directly to the dramatic escalation in the developmen...

  17. Overlay accuracy fundamentals

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Levinski, Vladimir; Sapiens, Noam; Cohen, Guy; Amit, Eran; Klein, Dana; Vakshtein, Irina

    2012-03-01

    Currently, the performance of overlay metrology is evaluated mainly based on random error contributions such as precision and TIS variability. With the expected shrinkage of the overlay metrology budget to < 0.5nm, it becomes crucial to include also systematic error contributions which affect the accuracy of the metrology. Here we discuss fundamental aspects of overlay accuracy and a methodology to improve accuracy significantly. We identify overlay mark imperfections and their interaction with the metrology technology, as the main source of overlay inaccuracy. The most important type of mark imperfection is mark asymmetry. Overlay mark asymmetry leads to a geometrical ambiguity in the definition of overlay, which can be ~1nm or less. It is shown theoretically and in simulations that the metrology may enhance the effect of overlay mark asymmetry significantly and lead to metrology inaccuracy ~10nm, much larger than the geometrical ambiguity. The analysis is carried out for two different overlay metrology technologies: Imaging overlay and DBO (1st order diffraction based overlay). It is demonstrated that the sensitivity of DBO to overlay mark asymmetry is larger than the sensitivity of imaging overlay. Finally, we show that a recently developed measurement quality metric serves as a valuable tool for improving overlay metrology accuracy. Simulation results demonstrate that the accuracy of imaging overlay can be improved significantly by recipe setup optimized using the quality metric. We conclude that imaging overlay metrology, complemented by appropriate use of measurement quality metric, results in optimal overlay accuracy.

  18. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.

    1994-01-01

    This paper summarizes a compilation of attitude determination accuracies attained by a number of satellites supported by the Goddard Space Flight Center Flight Dynamics Facility. The compilation is designed to assist future mission planners in choosing and placing attitude hardware and selecting the attitude determination algorithms needed to achieve given accuracy requirements. The major goal of the compilation is to indicate realistic accuracies achievable using a given sensor complement based on mission experience. It is expected that the use of actual spacecraft experience will make the study especially useful for mission design. A general description of factors influencing spacecraft attitude accuracy is presented. These factors include determination algorithms, inertial reference unit characteristics, and error sources that can affect measurement accuracy. Possible techniques for mitigating errors are also included. Brief mission descriptions are presented with the attitude accuracies attained, grouped by the sensor pairs used in attitude determination. The accuracies for inactive missions represent a compendium of missions report results, and those for active missions represent measurements of attitude residuals. Both three-axis and spin stabilized missions are included. Special emphasis is given to high-accuracy sensor pairs, such as two fixed-head star trackers (FHST's) and fine Sun sensor plus FHST. Brief descriptions of sensor design and mode of operation are included. Also included are brief mission descriptions and plots summarizing the attitude accuracy attained using various sensor complements.

  19. Interoceptive accuracy and panic.

    PubMed

    Zoellner, L A; Craske, M G

    1999-12-01

    Psychophysiological models of panic hypothesize that panickers focus attention on and become anxious about the physical sensations associated with panic. Attention to internal somatic cues has been labeled interoception. The present study examined the effects of physiological arousal and subjective anxiety on interoceptive accuracy. Infrequent panickers and nonanxious participants took part in an initial baseline to examine overall interoceptive accuracy. Next, participants ingested caffeine, about which they received either safety or no safety information. Using a mental heartbeat tracking paradigm, participants' counts of their heartbeats during specific time intervals were coded against polygraph measures. Infrequent panickers were more accurate in the perception of their heartbeats than nonanxious participants. Changes in physiological arousal were not associated with increased accuracy on the heartbeat perception task. However, higher levels of self-reported anxiety were associated with superior performance. PMID:10596462
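
    The abstract does not give the exact scoring rule; a commonly used heartbeat-counting (Schandry-style) accuracy score, shown below purely for illustration, compares the counted beats with the recorded beats in each timed interval.

    ```python
    # Illustrative Schandry-style heartbeat-counting score (not necessarily the
    # coding used in this study): mean of 1 - |recorded - counted| / recorded.
    def interoceptive_accuracy(recorded_counts, reported_counts):
        scores = [1.0 - abs(r - c) / r for r, c in zip(recorded_counts, reported_counts)]
        return sum(scores) / len(scores)

    recorded = [68, 72, 75]    # beats measured by the polygraph (hypothetical)
    reported = [60, 70, 66]    # beats the participant counted (hypothetical)
    print(f"{interoceptive_accuracy(recorded, reported):.2f}")
    ```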

  20. Accuracy of deception judgments.

    PubMed

    Bond, Charles F; DePaulo, Bella M

    2006-01-01

    We analyze the accuracy of deception judgments, synthesizing research results from 206 documents and 24,483 judges. In relevant studies, people attempt to discriminate lies from truths in real time with no special aids or training. In these circumstances, people achieve an average of 54% correct lie-truth judgments, correctly classifying 47% of lies as deceptive and 61% of truths as nondeceptive. Relative to cross-judge differences in accuracy, mean lie-truth discrimination abilities are nontrivial, with a mean accuracy d of roughly .40. This produces an effect that is at roughly the 60th percentile in size, relative to others that have been meta-analyzed by social psychologists. Alternative indexes of lie-truth discrimination accuracy correlate highly with percentage correct, and rates of lie detection vary little from study to study. Our meta-analyses reveal that people are more accurate in judging audible than visible lies, that people appear deceptive when motivated to be believed, and that individuals regard their interaction partners as honest. We propose that people judge others' deceptions more harshly than their own and that this double standard in evaluating deceit can explain much of the accumulated literature. PMID:16859438

  1. Towards Arbitrary Accuracy Inviscid Surface Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.; Hixon, Ray

    2002-01-01

    Inviscid nonlinear surface boundary conditions are currently limited to third order accuracy in time for non-moving surfaces and actually reduce to first order in time when the surfaces move. For steady-state calculations it may be possible to achieve higher accuracy in space, but high accuracy in time is required for efficient simulation of multiscale unsteady phenomena. A surprisingly simple technique is shown here that can be used to correct the normal pressure derivatives of the flow at a surface on a Cartesian grid so that arbitrarily high order time accuracy is achieved in idealized cases. This work demonstrates that nonlinear high order time accuracy at a solid surface is possible and desirable, but it also shows that the current practice of only correcting the pressure is inadequate.

  2. Optimal design of robot accuracy compensators

    SciTech Connect

    Zhuang, H.; Roth, Z.S. (Robotics Center and Electrical Engineering Dept.); Hamano, Fumio (Dept. of Electrical Engineering)

    1993-12-01

    The problem of optimal design of robot accuracy compensators is addressed. Robot accuracy compensation requires that actual kinematic parameters of a robot be previously identified. Additive corrections of joint commands, including those at singular configurations, can be computed without solving the inverse kinematics problem for the actual robot. This is done by either the damped least-squares (DLS) algorithm or the linear quadratic regulator (LQR) algorithm, which is a recursive version of the DLS algorithm. The weight matrix in the performance index can be selected to achieve specific objectives, such as emphasizing end-effector's positioning accuracy over orientation accuracy or vice versa, or taking into account proximity to robot joint travel limits and singularity zones. The paper also compares the LQR and the DLS algorithms in terms of computational complexity, storage requirement, and programming convenience. Simulation results are provided to show the effectiveness of the algorithms.
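
    The damped least-squares correction named above has a standard closed form, dq = J^T (J J^T + lambda^2 I)^{-1} dx; the sketch below shows that form only (the LQR variant and the weighted performance index are not reproduced), and the Jacobian and pose error are hypothetical placeholders.

    ```python
    # Sketch of a damped least-squares (DLS) additive joint-command correction.
    import numpy as np

    def dls_joint_correction(jacobian: np.ndarray, pose_error: np.ndarray,
                             damping: float = 0.05) -> np.ndarray:
        m = jacobian.shape[0]
        jjt = jacobian @ jacobian.T + (damping ** 2) * np.eye(m)
        return jacobian.T @ np.linalg.solve(jjt, pose_error)

    J = np.array([[1.0, 0.2, 0.0],
                  [0.0, 1.0, 0.3]])       # 2x3 Jacobian (hypothetical)
    dx = np.array([0.004, -0.002])        # end-effector position error (m)
    print(dls_joint_correction(J, dx))    # additive joint corrections (rad)
    ```

    The damping term keeps the correction bounded near singular configurations, which is what allows corrections to be computed even at singularities, as noted above.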

  3. 76 FR 40729 - Sunshine Act Meeting; Open Commission Meeting; Tuesday, July 12, 2011

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-11

    ...The Commission will consider a Notice of Proposed Rule Making designed to empower consumers to prevent and detect unauthorized telephone bill charges (``mystery fees'' or ``cramming'') by improving the disclosure of third-party charges on telephone bills. Item 3: Public Safety and Homeland Security Bureau. Title: Wireless E911 Location Accuracy Requirements (PS Docket No. 07-114);......

  4. High accuracy OMEGA timekeeping

    NASA Technical Reports Server (NTRS)

    Imbier, E. A.

    1982-01-01

    The Smithsonian Astrophysical Observatory (SAO) operates a worldwide satellite tracking network which uses a combination of OMEGA as a frequency reference, dual timing channels, and portable clock comparisons to maintain accurate epoch time. Propagational charts from the U.S. Coast Guard OMEGA monitor program minimize diurnal and seasonal effects. Daily phase value publications of the U.S. Naval Observatory provide corrections to the field collected timing data to produce an averaged time line comprised of straight line segments called a time history file (station clock minus UTC). Depending upon clock location, reduced time data accuracies of between two and eight microseconds are typical.

  5. Anatomy-aware measurement of segmentation accuracy

    NASA Astrophysics Data System (ADS)

    Tizhoosh, H. R.; Othman, A. A.

    2016-03-01

    Quantifying the accuracy of segmentation and manual delineation of organs, tissue types and tumors in medical images is a necessary measurement that suffers from multiple problems. One major shortcoming of all accuracy measures is that they neglect the anatomical significance or relevance of different zones within a given segment. Hence, existing accuracy metrics measure the overlap of a given segment with a ground-truth without any anatomical discrimination inside the segment. For instance, if we understand the rectal wall or urethral sphincter as anatomical zones, then current accuracy measures ignore their significance when they are applied to assess the quality of the prostate gland segments. In this paper, we propose an anatomy-aware measurement scheme for segmentation accuracy of medical images. The idea is to create a "master gold" based on a consensus shape containing not just the outline of the segment but also the outlines of the internal zones if existent or relevant. To apply this new approach to accuracy measurement, we introduce the anatomy-aware extensions of both Dice coefficient and Jaccard index and investigate their effect using 500 synthetic prostate ultrasound images with 20 different segments for each image. We show that through anatomy-sensitive calculation of segmentation accuracy, namely by considering relevant anatomical zones, not only the measurement of individual users can change but also the ranking of users' segmentation skills may require reordering.
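
    The paper defines its own anatomy-aware extensions of the Dice coefficient and Jaccard index; the sketch below shows one plausible zone-weighted form of the Dice coefficient purely for illustration, with a hypothetical weight map standing in for the anatomical zones.

    ```python
    # Illustration only: a zone-weighted Dice coefficient in the spirit of an
    # anatomy-aware overlap measure. The weighting scheme is a hypothetical stand-in.
    import numpy as np

    def weighted_dice(segmentation, gold, zone_weights):
        """Binary masks plus a per-pixel weight map (higher = more relevant zone)."""
        seg = np.asarray(segmentation, dtype=bool)
        ref = np.asarray(gold, dtype=bool)
        inter = np.sum(zone_weights * (seg & ref))
        denom = np.sum(zone_weights * seg) + np.sum(zone_weights * ref)
        return 2.0 * inter / denom if denom > 0 else 1.0

    seg = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 0, 0]])
    ref = np.array([[0, 1, 1, 0], [0, 1, 1, 0], [0, 0, 1, 0], [0, 0, 1, 0]])
    weights = np.ones((4, 4)); weights[:, 2] = 3.0   # column 2: "critical zone"
    print(f"{weighted_dice(seg, ref, weights):.3f}")
    ```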

  6. Accuracy in Judgments of Aggressiveness

    PubMed Central

    Kenny, David A.; West, Tessa V.; Cillessen, Antonius H. N.; Coie, John D.; Dodge, Kenneth A.; Hubbard, Julie A.; Schwartz, David

    2009-01-01

    Perceivers are both accurate and biased in their understanding of others. Past research has distinguished between three types of accuracy: generalized accuracy, a perceiver’s accuracy about how a target interacts with others in general; perceiver accuracy, a perceiver’s view of others corresponding with how the perceiver is treated by others in general; and dyadic accuracy, a perceiver’s accuracy about a target when interacting with that target. Researchers have proposed that there should be more dyadic than other forms of accuracy among well-acquainted individuals because of the pragmatic utility of forecasting the behavior of interaction partners. We examined behavioral aggression among well-acquainted peers. A total of 116 9-year-old boys rated how aggressive their classmates were toward other classmates. Subsequently, 11 groups of 6 boys each interacted in play groups, during which observations of aggression were made. Analyses indicated strong generalized accuracy yet little dyadic and perceiver accuracy. PMID:17575243

  7. Increasing Accuracy in Environmental Measurements

    NASA Astrophysics Data System (ADS)

    Jacksier, Tracey; Fernandes, Adelino; Matthew, Matt; Lehmann, Horst

    2016-04-01

    Human activity is increasing the concentrations of greenhouse gases (GHG) in the atmosphere, which results in temperature increases. High precision is a key requirement of atmospheric measurements to study the global carbon cycle and its effect on climate change. Natural air containing stable isotopes is used in GHG monitoring to calibrate analytical equipment. This presentation will examine the natural air and isotopic mixture preparation process, for both molecular and isotopic concentrations, for a range of components and delta values. The role of precisely characterized source material will be presented. Analysis of individual cylinders within multiple batches will be presented to demonstrate the ability to dynamically fill multiple cylinders containing identical compositions without isotopic fractionation. Additional emphasis will focus on the ability to adjust isotope ratios to more closely bracket sample types without the reliance on combusting naturally occurring materials, thereby improving analytical accuracy.

  8. Accuracy of Pressure Sensitive Paint

    NASA Technical Reports Server (NTRS)

    Liu, Tianshu; Guille, M.; Sullivan, J. P.

    2001-01-01

    Uncertainty in pressure sensitive paint (PSP) measurement is investigated from a standpoint of system modeling. A functional relation between the imaging system output and luminescent emission from PSP is obtained based on studies of radiative energy transports in PSP and photodetector response to luminescence. This relation provides insights into physical origins of various elemental error sources and allows estimate of the total PSP measurement uncertainty contributed by the elemental errors. The elemental errors and their sensitivity coefficients in the error propagation equation are evaluated. Useful formulas are given for the minimum pressure uncertainty that PSP can possibly achieve and the upper bounds of the elemental errors to meet required pressure accuracy. An instructive example of a Joukowsky airfoil in subsonic flows is given to illustrate uncertainty estimates in PSP measurements.
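
    The error-propagation idea referred to above can be written generically as sigma_p^2 = sum_i (dp/dx_i)^2 sigma_i^2; the sketch below shows only that first-order combination of elemental errors through sensitivity coefficients, with hypothetical values, and does not reproduce the paper's PSP-specific expressions.

    ```python
    # Generic first-order error propagation: total uncertainty from elemental
    # errors weighted by their sensitivity coefficients.
    import math

    def propagated_uncertainty(sensitivities, elemental_sigmas):
        return math.sqrt(sum((s * e) ** 2 for s, e in zip(sensitivities, elemental_sigmas)))

    sensitivities = [0.8, 1.5, 0.3]        # kPa per unit of each input (hypothetical)
    elemental_sigmas = [0.05, 0.02, 0.10]  # 1-sigma elemental errors (hypothetical)
    print(f"{propagated_uncertainty(sensitivities, elemental_sigmas):.3f} kPa")
    ```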

  9. Accuracy of tablet splitting.

    PubMed

    McDevitt, J T; Gurst, A H; Chen, Y

    1998-01-01

    We attempted to determine the accuracy of manually splitting hydrochlorothiazide tablets. Ninety-four healthy volunteers each split ten 25-mg hydrochlorothiazide tablets, which were then weighed using an analytical balance. Demographics, grip and pinch strength, digit circumference, and tablet-splitting experience were documented. Subjects were also surveyed regarding their willingness to pay a premium for commercially available, lower-dose tablets. Of 1752 manually split tablet portions, 41.3% deviated from ideal weight by more than 10% and 12.4% deviated by more than 20%. Gender, age, education, and tablet-splitting experience were not predictive of variability. Most subjects (96.8%) stated a preference for commercially produced, lower-dose tablets, and 77.2% were willing to pay more for them. For drugs with steep dose-response curves or narrow therapeutic windows, the differences we recorded could be clinically relevant. PMID:9469693

  10. 27 CFR 19.185 - Testing scale tanks for accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2011-04-01 2011-04-01 false Testing scale tanks for... Requirements Tank Requirements § 19.185 Testing scale tanks for accuracy. (a) A proprietor who uses a scale tank for tax determination must ensure the accuracy of the scale through periodic testing. Testing...

  11. 27 CFR 19.185 - Testing scale tanks for accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2014-04-01 2014-04-01 false Testing scale tanks for... Requirements Tank Requirements § 19.185 Testing scale tanks for accuracy. (a) A proprietor who uses a scale tank for tax determination must ensure the accuracy of the scale through periodic testing. Testing...

  12. 27 CFR 19.185 - Testing scale tanks for accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2012-04-01 2012-04-01 false Testing scale tanks for... Requirements Tank Requirements § 19.185 Testing scale tanks for accuracy. (a) A proprietor who uses a scale tank for tax determination must ensure the accuracy of the scale through periodic testing. Testing...

  13. 27 CFR 19.185 - Testing scale tanks for accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 27 Alcohol, Tobacco Products and Firearms 1 2013-04-01 2013-04-01 false Testing scale tanks for... Requirements Tank Requirements § 19.185 Testing scale tanks for accuracy. (a) A proprietor who uses a scale tank for tax determination must ensure the accuracy of the scale through periodic testing. Testing...

  14. Navigation Accuracy Guidelines for Orbital Formation Flying

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Alfriend, Kyle T.

    2004-01-01

    Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
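
    In the two-body approximation, the drift the guidelines address can be put in numbers: a semi-major axis difference of delta-a produces an along-track displacement of roughly 3*pi*delta-a per orbit. The sketch below uses that standard relation (not necessarily the paper's exact formulation) with illustrative values.

    ```python
    # Two-body back-of-the-envelope: along-track drift caused by a semi-major
    # axis difference. Values are illustrative.
    import math

    MU_EARTH = 398600.4418                  # km^3/s^2

    def drift_per_orbit_km(delta_a_km: float) -> float:
        return 3.0 * math.pi * abs(delta_a_km)

    def drift_rate_km_per_day(a_km: float, delta_a_km: float) -> float:
        period_s = 2.0 * math.pi * math.sqrt(a_km ** 3 / MU_EARTH)
        return drift_per_orbit_km(delta_a_km) * 86400.0 / period_s

    # Example: a 10 m semi-major axis navigation error in a 7000 km orbit.
    print(f"{drift_rate_km_per_day(7000.0, 0.010):.2f} km/day of along-track drift")
    ```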

  15. Accuracy of Information Processing under Focused Attention.

    ERIC Educational Resources Information Center

    Bastick, Tony

    This paper reports the results of an experiment on the accuracy of information processing during attention focused arousal under two conditions: single estimation and double estimation. The attention of 187 college students was focused by a task requiring high level competition for a monetary prize ($10) under severely limited time conditions. The…

  16. Lunar Reconnaissance Orbiter Orbit Determination Accuracy Analysis

    NASA Technical Reports Server (NTRS)

    Slojkowski, Steven E.

    2014-01-01

    Results from operational OD produced by the NASA Goddard Flight Dynamics Facility for the LRO nominal and extended mission are presented. During the LRO nominal mission, when LRO flew in a low circular orbit, orbit determination requirements were met nearly 100% of the time. When the extended mission began, LRO returned to a more elliptical frozen orbit where gravity and other modeling errors caused numerous violations of mission accuracy requirements. Prediction accuracy is particularly challenged during periods when LRO is in full-Sun. A series of improvements to LRO orbit determination are presented, including implementation of new lunar gravity models, improved spacecraft solar radiation pressure modeling using a dynamic multi-plate area model, a shorter orbit determination arc length, and a constrained plane method for estimation. The analysis presented in this paper shows that updated lunar gravity models improved accuracy in the frozen orbit, and a multiplate dynamic area model improves prediction accuracy during full-Sun orbit periods. Implementation of a 36-hour tracking data arc and plane constraints during edge-on orbit geometry also provide benefits. A comparison of the operational solutions to precision orbit determination solutions shows agreement on a 100- to 250-meter level in definitive accuracy.

  17. Reticence, Accuracy and Efficacy

    NASA Astrophysics Data System (ADS)

    Oreskes, N.; Lewandowsky, S.

    2015-12-01

    James Hansen has cautioned the scientific community against "reticence," by which he means a reluctance to speak in public about the threat of climate change. This may contribute to social inaction, with the result that society fails to respond appropriately to threats that are well understood scientifically. Against this, others have warned against the dangers of "crying wolf," suggesting that reticence protects scientific credibility. We argue that both these positions are missing an important point: that reticence is not only a matter of style but also of substance. In previous work, Brysse et al. (2013) showed that scientific projections of key indicators of climate change have been skewed towards the low end of actual events, suggesting a bias in scientific work. More recently, we have shown that scientific efforts to be responsive to contrarian challenges have led scientists to adopt the terminology of a "pause" or "hiatus" in climate warming, despite the lack of evidence to support such a conclusion (Lewandowsky et al., 2015a, 2015b). In the former case, scientific conservatism has led to underestimation of climate-related changes. In the latter case, the use of misleading terminology has perpetuated scientific misunderstanding and hindered effective communication. Scientific communication should embody two equally important goals: 1) accuracy in communicating scientific information and 2) efficacy in expressing what that information means. Scientists should strive to be neither conservative nor adventurous but to be accurate, and to communicate that accurate information effectively.

  18. Accuracy in Quantitative 3D Image Analysis

    PubMed Central

    Bassel, George W.

    2015-01-01

    Quantitative 3D imaging is becoming an increasingly popular and powerful approach to investigate plant growth and development. With the increased use of 3D image analysis, standards to ensure the accuracy and reproducibility of these data are required. This commentary highlights how image acquisition and postprocessing can introduce artifacts into 3D image data and proposes steps to increase both the accuracy and reproducibility of these analyses. It is intended to aid researchers entering the field of 3D image processing of plant cells and tissues and to help general readers in understanding and evaluating such data. PMID:25804539

  19. On the Standardization of Vertical Accuracy Figures in Dems

    NASA Astrophysics Data System (ADS)

    Casella, V.; Padova, B.

    2013-01-01

    Digital Elevation Models (DEMs) play a key role in hydrological risk prevention and mitigation: hydraulic numeric simulations, slope and aspect maps all heavily rely on DEMs. Hydraulic numeric simulations require the used DEM to have a defined accuracy, in order to obtain reliable results. Are the DEM accuracy figures clearly and uniquely defined? The paper focuses on some issues concerning DEM accuracy definition and assessment. Two DEM accuracy definitions can be found in literature: accuracy at the interpolated point and accuracy at the nodes. The former can be estimated by means of randomly distributed check points, while the latter by means of check points coincident with the nodes. The two considered accuracy figures are often treated as equivalent, but they aren't. Given the same DEM, assessing it through one or the other approach gives different results. Our paper performs an in-depth characterization of the two figures and proposes standardization coefficients.
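
    The difference between the two figures can be made concrete with synthetic data: the sketch below evaluates the vertical error of a gridded DEM at randomly located check points (via bilinear interpolation) and at check points coincident with grid nodes. It is an illustration only and does not reproduce the paper's standardization coefficients.

    ```python
    # Synthetic comparison of "accuracy at interpolated points" vs "accuracy at nodes".
    import numpy as np

    def bilinear(dem, x, y, cell=1.0):
        i, j = int(y // cell), int(x // cell)
        fy, fx = (y % cell) / cell, (x % cell) / cell
        return ((1 - fy) * (1 - fx) * dem[i, j] + (1 - fy) * fx * dem[i, j + 1]
                + fy * (1 - fx) * dem[i + 1, j] + fy * fx * dem[i + 1, j + 1])

    rng = np.random.default_rng(0)
    true_surface = lambda x, y: 0.05 * x + 0.02 * y          # gently sloping plane
    dem = np.fromfunction(lambda i, j: true_surface(j, i), (50, 50)) \
          + rng.normal(0, 0.10, (50, 50))                    # 10 cm node noise

    pts = rng.uniform(1, 48, size=(200, 2))                  # random check points
    err_interp = [bilinear(dem, x, y) - true_surface(x, y) for x, y in pts]
    nodes = rng.integers(1, 48, size=(200, 2))               # node check points
    err_node = [dem[i, j] - true_surface(j, i) for i, j in nodes]

    rmse = lambda e: float(np.sqrt(np.mean(np.square(e))))
    print(f"RMSE at interpolated points {rmse(err_interp):.3f} m, at nodes {rmse(err_node):.3f} m")
    ```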

  20. Landsat classification accuracy assessment procedures

    USGS Publications Warehouse

    Mead, R. R.; Szajgin, John

    1982-01-01

    A working conference was held in Sioux Falls, South Dakota, 12-14 November, 1980 dealing with Landsat classification Accuracy Assessment Procedures. Thirteen formal presentations were made on three general topics: (1) sampling procedures, (2) statistical analysis techniques, and (3) examples of projects which included accuracy assessment and the associated costs, logistical problems, and value of the accuracy data to the remote sensing specialist and the resource manager. Nearly twenty conference attendees participated in two discussion sessions addressing various issues associated with accuracy assessment. This paper presents an account of the accomplishments of the conference.

  1. High Accuracy Transistor Compact Model Calibrations

    SciTech Connect

    Hembree, Charles E.; Mar, Alan; Robertson, Perry J.

    2015-09-01

    Typically, transistors are modeled by the application of calibrated nominal and range models. These models consist of differing parameter values that describe the location and the upper and lower limits of a distribution of some transistor characteristic such as current capacity. Correspondingly, when using this approach, high degrees of accuracy of the transistor models are not expected, since the set of models is a surrogate for a statistical description of the devices. The use of these types of models describes expected performance considering the extremes of process or transistor deviations. In contrast, circuits that have very stringent accuracy requirements require modeling techniques with higher accuracy. Since these accurate models have low error in transistor descriptions, they can be used to describe part-to-part variations as well as to provide an accurate description of a single circuit instance. Thus, models that meet these stipulations also enable the quantification of margins with respect to a functional threshold and of the uncertainties in these margins. Given this need, new high-accuracy model calibration techniques for bipolar junction transistors have been developed and are described in this report.

  2. ACCURACY OF CO2 SENSORS

    SciTech Connect

    Fisk, William J.; Faulkner, David; Sullivan, Douglas P.

    2008-10-01

    Are the carbon dioxide (CO2) sensors in your demand controlled ventilation systems sufficiently accurate? The data from these sensors are used to automatically modulate minimum rates of outdoor air ventilation. The goal is to keep ventilation rates at or above design requirements while adjusting the ventilation rate with changes in occupancy in order to save energy. Studies of energy savings from demand controlled ventilation and of the relationship of indoor CO2 concentrations with health and work performance provide a strong rationale for use of indoor CO2 data to control minimum ventilation rates [1-7]. However, this strategy will only be effective if, in practice, the CO2 sensors have a reasonable accuracy. The objective of this study was, therefore, to determine whether CO2 sensor performance, in practice, is generally acceptable or problematic. This article provides a summary of study methods and findings; additional details are available in a paper in the proceedings of the ASHRAE IAQ 2007 Conference [8].

  3. Field Accuracy Test of Rpas Photogrammetry

    NASA Astrophysics Data System (ADS)

    Barry, P.; Coakley, R.

    2013-08-01

    Baseline Surveys Ltd is a company which specialises in the supply of accurate geospatial data, such as cadastral, topographic and engineering survey data, to commercial and government bodies. Baseline Surveys Ltd invested in aerial drone photogrammetric technology and had a requirement to establish the spatial accuracy of the geographic data derived from our unmanned aerial vehicle (UAV) photogrammetry before marketing our new aerial mapping service. Having supplied the construction industry with survey data for over 20 years, we felt that it was crucial for our clients to clearly understand the accuracy of our photogrammetry so they can safely make informed spatial decisions, within the known accuracy limitations of our data. This information would also inform us on how and where UAV photogrammetry can be utilised. What we wanted to find out was the actual accuracy that can be reliably achieved using a UAV to collect data under field conditions throughout a 2 Ha site. We flew a UAV over the test area in a "lawnmower track" pattern with an 80% front and 80% side overlap; we placed 45 ground markers as check points and surveyed them in using network Real Time Kinematic Global Positioning System (RTK GPS). We specifically designed the ground markers to meet our accuracy needs. We established 10 separate ground markers as control points and input these into our photo modelling software, Agisoft PhotoScan. The remaining GPS-coordinated check point data were added later in ArcMap to the completed orthomosaic and digital elevation model so we could accurately compare the UAV photogrammetry XYZ data with the RTK GPS XYZ data at highly reliable common points. The accuracy we achieved throughout the 45 check points was 95% reliably within 41 mm horizontally and 68 mm vertically, with an 11.7 mm ground sample distance taken from a flight altitude above ground level of 90 m. The area covered by one image was 70.2 m × 46.4 m, which equals 0.325 Ha. This finding has shown
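
    A check-point comparison of this kind reduces to differencing the photogrammetric and RTK GPS coordinates and summarizing the horizontal and vertical errors; the sketch below (not Baseline Surveys' actual workflow) reports RMSE and the 95th-percentile error for a handful of hypothetical points.

    ```python
    # Sketch of a check-point accuracy summary for UAV photogrammetry.
    import numpy as np

    def checkpoint_accuracy(gps_xyz, photo_xyz):
        diff = photo_xyz - gps_xyz
        horiz = np.hypot(diff[:, 0], diff[:, 1])
        vert = np.abs(diff[:, 2])
        summary = lambda e: (np.sqrt(np.mean(e ** 2)), np.percentile(e, 95))
        return summary(horiz), summary(vert)

    gps = np.array([[100.000, 200.000, 50.000],       # RTK GPS check points (m)
                    [120.000, 230.000, 51.500],
                    [140.000, 260.000, 52.800]])
    photo = gps + np.array([[0.030, -0.010, 0.050],   # photogrammetric offsets (m)
                            [-0.020, 0.025, -0.060],
                            [0.015, 0.020, 0.040]])
    (h_rmse, h_p95), (v_rmse, v_p95) = checkpoint_accuracy(gps, photo)
    print(f"horizontal RMSE {h_rmse * 1000:.0f} mm (95th pct {h_p95 * 1000:.0f} mm), "
          f"vertical RMSE {v_rmse * 1000:.0f} mm (95th pct {v_p95 * 1000:.0f} mm)")
    ```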

  4. 10 CFR 54.13 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 2 2010-01-01 2010-01-01 false Completeness and accuracy of information. 54.13 Section 54.13 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) REQUIREMENTS FOR RENEWAL OF OPERATING LICENSES FOR NUCLEAR POWER PLANTS General Provisions § 54.13 Completeness and accuracy of information....

  5. Achieving Climate Change Absolute Accuracy in Orbit

    NASA Technical Reports Server (NTRS)

    Wielicki, Bruce A.; Young, D. F.; Mlynczak, M. G.; Thome, K. J; Leroy, S.; Corliss, J.; Anderson, J. G.; Ao, C. O.; Bantges, R.; Best, F.; Bowman, K.; Brindley, H.; Butler, J. J.; Collins, W.; Dykema, J. A.; Doelling, D. R.; Feldman, D. R.; Fox, N.; Huang, X.; Holz, R.; Huang, Y.; Jennings, D.; Jin, Z.; Johnson, D. G.; Jucks, K.; Kato, S.; Kratz, D. P.; Liu, X.; Lukashin, C.; Mannucci, A. J.; Phojanamongkolkij, N.; Roithmayr, C. M.; Sandford, S.; Taylor, P. C.; Xiong, X.

    2013-01-01

    The Climate Absolute Radiance and Refractivity Observatory (CLARREO) mission will provide a calibration laboratory in orbit for the purpose of accurately measuring and attributing climate change. CLARREO measurements establish new climate change benchmarks with high absolute radiometric accuracy and high statistical confidence across a wide range of essential climate variables. CLARREO's inherently high absolute accuracy will be verified and traceable on orbit to Système Internationale (SI) units. The benchmarks established by CLARREO will be critical for assessing changes in the Earth system and climate model predictive capabilities for decades into the future as society works to meet the challenge of optimizing strategies for mitigating and adapting to climate change. The CLARREO benchmarks are derived from measurements of the Earth's thermal infrared spectrum (5-50 micron), the spectrum of solar radiation reflected by the Earth and its atmosphere (320-2300 nm), and radio occultation refractivity from which accurate temperature profiles are derived. The mission has the ability to provide new spectral fingerprints of climate change, as well as to provide the first orbiting radiometer with accuracy sufficient to serve as the reference transfer standard for other space sensors, in essence serving as a "NIST [National Institute of Standards and Technology] in orbit." CLARREO will greatly improve the accuracy and relevance of a wide range of space-borne instruments for decadal climate change. Finally, CLARREO has developed new metrics and methods for determining the accuracy requirements of climate observations for a wide range of climate variables and uncertainty sources. These methods should be useful for improving our understanding of observing requirements for most climate change observations.

  6. Meditation Experience Predicts Introspective Accuracy

    PubMed Central

    Fox, Kieran C. R.; Zakarauskas, Pierre; Dixon, Matt; Ellamil, Melissa; Thompson, Evan; Christoff, Kalina

    2012-01-01

    The accuracy of subjective reports, especially those involving introspection of one's own internal processes, remains unclear, and research has demonstrated large individual differences in introspective accuracy. It has been hypothesized that introspective accuracy may be heightened in persons who engage in meditation practices, due to the highly introspective nature of such practices. We undertook a preliminary exploration of this hypothesis, examining introspective accuracy in a cross-section of meditation practitioners (1–15,000 hrs experience). Introspective accuracy was assessed by comparing subjective reports of tactile sensitivity for each of 20 body regions during a ‘body-scanning’ meditation with averaged, objective measures of tactile sensitivity (mean size of body representation area in primary somatosensory cortex; two-point discrimination threshold) as reported in prior research. Expert meditators showed significantly better introspective accuracy than novices; overall meditation experience also significantly predicted individual introspective accuracy. These results suggest that long-term meditators provide more accurate introspective reports than novices. PMID:23049790

  7. High accuracy radiation efficiency measurement techniques

    NASA Technical Reports Server (NTRS)

    Kozakoff, D. J.; Schuchardt, J. M.

    1981-01-01

    The relatively large antenna subarrays (tens of meters) to be used in the Solar Power Satellite, and the desire to accurately quantify antenna performance, dictate the requirement for specialized measurement techniques. The error contributors associated with both far-field and near-field antenna measurement concepts were quantified. As a result, instrumentation configurations with measurement accuracy potential were identified. In every case, advances in the state of the art of associated electronics were found to be required. Relative cost trade-offs between a candidate far-field elevated antenna range and near-field facility were also performed.

  8. High accuracy in short ISS missions

    NASA Astrophysics Data System (ADS)

    Rüeger, J. M.

    1986-06-01

    Traditionally, Inertial Surveying Systems (ISS) are used for missions of 30 km to 100 km length. Today, a new type of ISS application is emerging from an increased need for survey control densification in urban areas, often in connection with land information systems or cadastral surveys. The accuracy requirements of urban surveys are usually high. The loss in accuracy caused by the coordinate transfer between IMU and ground marks is investigated, and an offsetting system based on electronic tacheometers is proposed. An offsetting system based on a Hewlett-Packard HP 3820A electronic tacheometer has been tested in Sydney (Australia) in connection with a vehicle-mounted LITTON Auto-Surveyor System II. On missions over 750 m (8 stations, 25 minutes duration, 3.5 minute ZUPT intervals, mean offset distances 9 metres), accuracies of 37 mm (one sigma) in position and 8 mm in elevation were achieved. Some improvements to the LITTON Auto-Surveyor System II are suggested which would improve the accuracies even further.

  9. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Carter, Delano R.; Todirita, Monica; Kronenwetter, Jeffrey; Chu, Donald

    2016-01-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. Error comes both from outside the magnetometers, e.g. spacecraft fields and misalignments, as well as inside, e.g. zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. Both behave as expected, and their accuracy predictions agree within 30%. With the proposed calibration regimen, both suggest that the GOES-R magnetometer subsystem will meet its accuracy requirements.
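
    The accuracy definitions quoted above translate directly into a simple check against the 1.7 nT requirement; the sketch below applies them to a hypothetical sample of field-measurement errors.

    ```python
    # Minimal sketch of the quoted accuracy metric: |mean error| + 3*sigma in quiet
    # conditions, or + 2*sigma during storms, compared with the 1.7 nT requirement.
    import numpy as np

    def magnetometer_accuracy_nt(errors_nt, storm=False):
        k = 2.0 if storm else 3.0
        return abs(np.mean(errors_nt)) + k * np.std(errors_nt)

    rng = np.random.default_rng(1)
    errors = rng.normal(0.1, 0.4, size=1000)          # hypothetical errors (nT)
    print(magnetometer_accuracy_nt(errors) <= 1.7)    # quiet-time check
    ```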

  10. Assessing and Ensuring GOES-R Magnetometer Accuracy

    NASA Technical Reports Server (NTRS)

    Kronenwetter, Jeffrey; Carter, Delano R.; Todirita, Monica; Chu, Donald

    2016-01-01

    The GOES-R magnetometer accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as absolute mean plus 3 sigma. During storms (300 nT), accuracy is defined as absolute mean plus 2 sigma. To achieve this, the sensor itself has better than 1 nT accuracy. Because zero offset and scale factor drift over time, it is also necessary to perform annual calibration maneuvers. To predict performance, we used covariance analysis and attempted to corroborate it with simulations. Although not perfect, the two generally agree and show the expected behaviors. With the annual calibration regimen, these predictions suggest that the magnetometers will meet their accuracy requirements.

  11. Accuracy assessment system and operation

    NASA Technical Reports Server (NTRS)

    Pitts, D. E.; Houston, A. G.; Badhwar, G.; Bender, M. J.; Rader, M. L.; Eppler, W. G.; Ahlers, C. W.; White, W. P.; Vela, R. R.; Hsu, E. M. (Principal Investigator)

    1979-01-01

    The accuracy and reliability of LACIE estimates of wheat production, area, and yield are determined at regular intervals throughout the year by the accuracy assessment subsystem, which also investigates the various LACIE error sources, quantifies the errors, and relates them to their causes. Timely feedback of these error evaluations to the LACIE project was the only mechanism by which improvements in the crop estimation system could be made during the short 3-year experiment.

  12. Evaluating LANDSAT wildland classification accuracies

    NASA Technical Reports Server (NTRS)

    Toll, D. L.

    1980-01-01

    Procedures to evaluate the accuracy of LANDSAT derived wildland cover classifications are described. The evaluation procedures include: (1) implementing a stratified random sample for obtaining unbiased verification data; (2) performing area by area comparisons between verification and LANDSAT data for both heterogeneous and homogeneous fields; (3) providing overall and individual classification accuracies with confidence limits; (4) displaying results within contingency tables for analysis of confusion between classes; and (5) quantifying the amount of information (bits/square kilometer) conveyed in the LANDSAT classification.
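
    Item (3) above can be made concrete with a small computation: overall classification accuracy from a contingency (confusion) table, with approximate confidence limits. The normal-approximation binomial interval used below is one common choice, not necessarily the procedure recommended at the conference, and the counts are hypothetical.

    ```python
    # Overall accuracy from a contingency table with approximate 95% confidence limits.
    import math

    def overall_accuracy(confusion):
        total = sum(sum(row) for row in confusion)
        correct = sum(confusion[i][i] for i in range(len(confusion)))
        p = correct / total
        half_width = 1.96 * math.sqrt(p * (1 - p) / total)
        return p, (p - half_width, p + half_width)

    # Rows = classified category, columns = verification (ground truth) category.
    confusion = [[120, 10, 5],
                 [8, 95, 12],
                 [4, 9, 87]]
    acc, (lo, hi) = overall_accuracy(confusion)
    print(f"overall accuracy {acc:.2%} (95% CI {lo:.2%} to {hi:.2%})")
    ```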

  13. The accuracy of automatic tracking

    NASA Technical Reports Server (NTRS)

    Kastrov, V. V.

    1974-01-01

    It has been generally assumed that tracking accuracy changes similarly to the rate of change of the curve of the measurement conversion. The problem that internal noise increases along with the signals processed by the tracking device, and that tracking accuracy thus drops, was considered. The main prerequisite for a solution is consideration of the dependence of the output signal of the tracking device sensor not only on the measured parameter but also on the signal itself.

  14. Position determination accuracy from the microwave landing system

    NASA Technical Reports Server (NTRS)

    Cicolani, L. S.

    1973-01-01

    Analysis and results are given for the position determination accuracy obtainable from the microwave landing guidance system. Siting arrangements, coverage volumes, and accuracy standards for the azimuth, elevation, and range functions of the microwave system are discussed. Results are given for the complete coverage of the systems and are related to flight operational requirements for position estimation during flare, glide slope, and general terminal area approaches. Range rate estimation from range data is also analyzed. The distance measuring equipment accuracy required to meet the range rate estimation standards is determined, and a method of optimizing the range rate estimate is also given.
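
    Range-rate estimation from range data, mentioned above, can be illustrated with a simple least-squares fit over a short window of DME range samples; the optimized estimator analyzed in the report is not reproduced here, and the sample values are hypothetical.

    ```python
    # Illustration: estimate range rate as the slope of a linear fit to noisy ranges.
    import numpy as np

    def range_rate_from_ranges(times_s, ranges_m):
        slope, _intercept = np.polyfit(times_s, ranges_m, 1)
        return float(slope)                   # metres per second

    rng = np.random.default_rng(2)
    t = np.arange(0.0, 2.0, 0.1)              # 20 samples over 2 s
    true_rate = -65.0                         # closing at 65 m/s
    ranges = 12_000.0 + true_rate * t + rng.normal(0, 3.0, t.size)   # 3 m noise
    print(f"estimated range rate: {range_rate_from_ranges(t, ranges):.1f} m/s")
    ```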

  15. Audiovisual biofeedback improves motion prediction accuracy

    PubMed Central

    Pollock, Sean; Lee, Danny; Keall, Paul; Kim, Taeho

    2013-01-01

    Purpose: The accuracy of motion prediction, utilized to overcome the system latency of motion management radiotherapy systems, is hampered by irregularities present in the patients’ respiratory pattern. Audiovisual (AV) biofeedback has been shown to reduce respiratory irregularities. The aim of this study was to test the hypothesis that AV biofeedback improves the accuracy of motion prediction. Methods: An AV biofeedback system combined with real-time respiratory data acquisition and MR images were implemented in this project. One-dimensional respiratory data from (1) the abdominal wall (30 Hz) and (2) the thoracic diaphragm (5 Hz) were obtained from 15 healthy human subjects across 30 studies. The subjects were required to breathe with and without the guidance of AV biofeedback during each study. The obtained respiratory signals were then implemented in a kernel density estimation prediction algorithm. For each of the 30 studies, five different prediction times ranging from 50 to 1400 ms were tested (150 predictions performed). Prediction error was quantified as the root mean square error (RMSE); the RMSE was calculated from the difference between the real and predicted respiratory data. The statistical significance of the prediction results was determined by the Student's t-test. Results: Prediction accuracy was considerably improved by the implementation of AV biofeedback. Of the 150 respiratory predictions performed, prediction accuracy was improved 69% (103/150) of the time for abdominal wall data, and 78% (117/150) of the time for diaphragm data. The average reduction in RMSE due to AV biofeedback over unguided respiration was 26% (p < 0.001) and 29% (p < 0.001) for abdominal wall and diaphragm respiratory motion, respectively. Conclusions: This study was the first to demonstrate that the reduction of respiratory irregularities due to the implementation of AV biofeedback improves prediction accuracy. This would result in increased efficiency of motion
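
    The error metric used above is a plain RMSE between measured and predicted respiratory position at a fixed prediction horizon; the sketch below shows that metric with a synthetic signal and a naive "no-change" stand-in predictor (the study's kernel density estimation predictor is not reproduced).

    ```python
    # RMSE between measured and predicted respiratory position at a 400 ms horizon.
    import numpy as np

    def prediction_rmse(measured, predicted):
        return float(np.sqrt(np.mean((measured - predicted) ** 2)))

    t = np.arange(0, 30, 1 / 30)                      # 30 s of 30 Hz samples
    measured = 10.0 * np.sin(2 * np.pi * t / 4.0)     # synthetic abdominal motion (mm)
    horizon = 12                                      # 400 ms at 30 Hz
    naive_prediction = np.roll(measured, horizon)     # "no-change" predictor
    print(f"RMSE = {prediction_rmse(measured[horizon:], naive_prediction[horizon:]):.2f} mm")
    ```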

  16. High Accuracy Fuel Flowmeter, Phase 1

    NASA Technical Reports Server (NTRS)

    Mayer, C.; Rose, L.; Chan, A.; Chin, B.; Gregory, W.

    1983-01-01

    Technology related to aircraft fuel mass flowmeters was reviewed to determine which flowmeter types could provide 0.25%-of-point accuracy over a 50-to-one range in flow rates. Three types were selected and further analyzed to determine what problem areas prevented them from meeting the high accuracy requirement, and what the further development needs were for each. A dual-turbine volumetric flowmeter with a densi-viscometer and microprocessor compensation was selected for its relative simplicity and fast response time. An angular momentum type with a motor-driven, spring-restrained turbine and viscosity shroud was selected for its direct mass-flow output. This concept also employed a turbine for fast response and a microcomputer for accurate viscosity compensation. The third concept employed a vortex precession volumetric flowmeter and was selected for its unobtrusive design. Like the turbine flowmeter, it uses a densi-viscometer and microprocessor for density correction and accurate viscosity compensation.
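
    As a rough illustration of the density compensation common to all three concepts, the sketch below converts a turbine pulse rate to mass flow; the K-factor, viscosity correction, and fuel density values are hypothetical, not taken from the Phase 1 study.

    ```python
    def mass_flow_kg_s(turbine_hz, density_kg_m3, k_factor_m3_per_pulse, visc_correction=1.0):
        """Mass flow inferred from a volumetric turbine meter with density compensation:
        volumetric flow (pulse rate times meter K-factor, viscosity-corrected) times density.
        The K-factor and viscosity correction are hypothetical calibration terms standing in
        for the densi-viscometer/microprocessor compensation described above."""
        volumetric_flow_m3_s = turbine_hz * k_factor_m3_per_pulse * visc_correction
        return volumetric_flow_m3_s * density_kg_m3

    # Illustrative numbers only: 250 Hz pulse rate, Jet A-like density of 800 kg/m^3.
    print(mass_flow_kg_s(turbine_hz=250.0, density_kg_m3=800.0, k_factor_m3_per_pulse=1.0e-6))
    ```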

  17. 47 CFR 12.3 - 911 and E911 analyses and reports.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... pursuant to procedures set forth in 47 CFR 0.461. Notice of any requests for inspection of these reports will be provided to the filers of the reports pursuant to 47 CFR 0.461(d)(3). ... (VoIP) service providers. LECs that meet the definition of a Class B company set forth in §...

  18. 47 CFR 12.3 - 911 and E911 analyses and reports.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... pursuant to procedures set forth in 47 CFR 0.461. Notice of any requests for inspection of these reports will be provided to the filers of the reports pursuant to 47 CFR 0.461(d)(3). ... (VoIP) service providers. LECs that meet the definition of a Class B company set forth in §...

  19. 47 CFR 12.3 - 911 and E911 analyses and reports.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... access to these reports must be sought pursuant to procedures set forth in 47 CFR 0.461. Notice of any requests for inspection of these reports will be provided to the filers of the reports pursuant to 47 CFR 0... Voice over Internet Protocol (VoIP) service providers. LECs that meet the definition of a Class...

  20. 47 CFR 12.3 - 911 and E911 analyses and reports.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... pursuant to procedures set forth in 47 CFR 0.461. Notice of any requests for inspection of these reports will be provided to the filers of the reports pursuant to 47 CFR 0.461(d)(3). ... wireless 911 rules set forth in § 20.18 of this chapter; and interconnected Voice over Internet...

  1. 47 CFR 12.3 - 911 and E911 analyses and reports.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... pursuant to procedures set forth in 47 CFR 0.461. Notice of any requests for inspection of these reports will be provided to the filers of the reports pursuant to 47 CFR 0.461(d)(3). ... the service providers intend to take to ensure diversity and dependability in their 911 and...

  2. Accuracy in optical overlay metrology

    NASA Astrophysics Data System (ADS)

    Bringoltz, Barak; Marciano, Tal; Yaziv, Tal; DeLeeuw, Yaron; Klein, Dana; Feler, Yoel; Adam, Ido; Gurevich, Evgeni; Sella, Noga; Lindenfeld, Ze'ev; Leviant, Tom; Saltoun, Lilach; Ashwal, Eltsafon; Alumot, Dror; Lamhot, Yuval; Gao, Xindong; Manka, James; Chen, Bryan; Wagner, Mark

    2016-03-01

    In this paper we discuss the mechanism by which process variations determine the overlay accuracy of optical metrology. We start by focusing on scatterometry, and showing that the underlying physics of this mechanism involves interference effects between cavity modes that travel between the upper and lower gratings in the scatterometry target. A direct result is the behavior of accuracy as a function of wavelength, and the existence of relatively well-defined spectral regimes in which the overlay accuracy and process robustness degrade (`resonant regimes'). These resonances are separated by wavelength regions in which the overlay accuracy is better and independent of wavelength (we term these `flat regions'). The combination of flat and resonant regions forms a spectral signature which is unique to each overlay alignment and carries certain universal features with respect to different types of process variations. We term this signature the `landscape', and discuss its universality. Next, we show how to characterize overlay performance with a finite set of metrics that are available on the fly, and that are derived from the angular behavior of the signal and the way it flags resonances. These metrics are used to guarantee the selection of accurate recipes and targets for the metrology tool, and for process control with the overlay tool. We end with comments on the similarity of imaging overlay to scatterometry overlay, and on the way that pupil overlay scatterometry and field overlay scatterometry differ from an accuracy perspective.

  3. Current Concept of Geometrical Accuracy

    NASA Astrophysics Data System (ADS)

    Görög, Augustín; Görögová, Ingrid

    2014-06-01

    Within the VEGA 1/0615/12 research project "Influence of 5-axis grinding parameters on the shank cutter's geometric accuracy", the research team will measure and evaluate the geometrical accuracy of the produced parts. They will use contemporary measurement technology (for example, optical 3D scanners). During the past few years, significant changes have occurred in the field of geometrical accuracy. The objective of this contribution is to analyse the current standards in the field of geometric tolerancing. It is necessary to provide an overview of the basic concepts and definitions in the field, which will prevent the use of outdated and invalid terms and definitions. The knowledge presented in the contribution will provide a new perspective on measurements evaluated according to the current standards.

  4. Precision standoff guidance antenna accuracy evaluation

    NASA Astrophysics Data System (ADS)

    Irons, F. H.; Landesberg, M. M.

    1981-02-01

    This report presents a summary of work done to determine the inherent angular accuracy achievable with the guidance and control precision standoff guidance antenna. The antenna is a critical element in the anti-jam single station guidance program since its characteristics can limit the intrinsic location guidance accuracy. It was important to determine the extent to which high ratio beamsplitting results could be achieved repeatedly and what issues were involved with calibrating the antenna. The antenna accuracy has been found to be on the order of 0.006 deg. through the use of a straightforward lookup table concept. This corresponds to a cross range error of 21 m at a range of 200 km. This figure includes both pointing errors and off-axis estimation errors. It was found that the antenna off-boresight calibration is adequately represented by a straight line for each position plus a lookup table for pointing errors relative to broadside. In the event recalibration is required, it was found that only 1% of the model would need to be corrected.

  5. A Family of Rater Accuracy Models.

    PubMed

    Wolfe, Edward W; Jiao, Hong; Song, Tian

    2015-01-01

    Engelhard (1996) proposed a rater accuracy model (RAM) as a means of evaluating rater accuracy in rating data, but very little research exists to determine the efficacy of that model. The RAM requires a transformation of the raw score data to accuracy measures by comparing rater-assigned scores to true scores. Indices computed on raw scores also exist for measuring rater effects, but these indices ignore deviations of rater-assigned scores from true scores. This paper compares the efficacy of two versions of the RAM (based on dichotomized and polytomized deviations of rater-assigned scores from true scores) with two versions of raw score rater effect models (i.e., a Rasch partial credit model, PCM, and a Rasch rating scale model, RSM). Simulated data are used to demonstrate the efficacy with which these four models detect and differentiate three rater effects: severity, centrality, and inaccuracy. Results indicate that the RAMs are able to detect, but not differentiate, rater severity and inaccuracy, and are not able to detect rater centrality. The PCM and RSM, on the other hand, are able to both detect and differentiate all three of these rater effects. However, the RSM and PCM do not take into account true scores and may, therefore, be misleading when pervasive trends exist in the rater-assigned data. PMID:26075664

  6. ACCURACY AND TRACE ORGANIC ANALYSES

    EPA Science Inventory

    Accuracy in trace organic analysis presents a formidable problem to the residue chemist. He is confronted with the analysis of a large number and variety of compounds present in a multiplicity of substrates at levels as low as parts-per-trillion. At these levels, collection, isol...

  7. Improving Speaking Accuracy through Awareness

    ERIC Educational Resources Information Center

    Dormer, Jan Edwards

    2013-01-01

    Increased English learner accuracy can be achieved by leading students through six stages of awareness. The first three awareness stages build up students' motivation to improve, and the second three provide learners with crucial input for change. The final result is "sustained language awareness," resulting in ongoing…

  8. The hidden KPI registration accuracy.

    PubMed

    Shorrosh, Paul

    2011-09-01

    Determining the registration accuracy rate is fundamental to improving revenue cycle key performance indicators. A registration quality assurance (QA) process allows errors to be corrected before bills are sent and helps registrars learn from their mistakes. Tools are available to help patient access staff who perform registration QA manually. PMID:21923052

  9. Psychology Textbooks: Examining Their Accuracy

    ERIC Educational Resources Information Center

    Steuer, Faye B.; Ham, K. Whitfield, II

    2008-01-01

    Sales figures and recollections of psychologists indicate textbooks play a central role in psychology students' education, yet instructors typically must select texts under time pressure and with incomplete information. Although selection aids are available, none adequately address the accuracy of texts. We describe a technique for sampling…

  10. Improved accuracies for satellite tracking

    NASA Technical Reports Server (NTRS)

    Kammeyer, P. C.; Fiala, A. D.; Seidelmann, P. K.

    1991-01-01

    A charge coupled device (CCD) camera on an optical telescope which follows the stars can be used to provide high accuracy comparisons between the line of sight to a satellite, over a large range of satellite altitudes, and lines of sight to nearby stars. The CCD camera can be rotated so the motion of the satellite is down columns of the CCD chip, and charge can be moved from row to row of the chip at a rate which matches the motion of the optical image of the satellite across the chip. Measurement of satellite and star images, together with accurate timing of charge motion, provides accurate comparisons of lines of sight. Given lines of sight to stars near the satellite, the satellite line of sight may be determined. Initial experiments with this technique, using an 18 cm telescope, have produced TDRS-4 observations which have an rms error of 0.5 arc second, 100 m at synchronous altitude. Use of a mosaic of CCD chips, each having its own rate of charge motion, in the focal plane of a telescope would allow point images of a geosynchronous satellite and of stars to be formed simultaneously in the same telescope. The line of sight of such a satellite could be measured relative to nearby star lines of sight with an accuracy of approximately 0.03 arc second. Development of a star catalog with 0.04 arc second rms accuracy and perhaps ten stars per square degree would allow determination of satellite lines of sight with 0.05 arc second rms absolute accuracy, corresponding to 10 m at synchronous altitude. Multiple station time transfers through a communications satellite can provide accurate distances from the satellite to the ground stations. Such observations can, if calibrated for delays, determine satellite orbits to an accuracy approaching 10 m rms.
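
    The rate-matching idea is easy to quantify: charge must be clocked down the columns at the satellite's apparent angular rate divided by the plate scale. The numbers in the sketch below are assumptions chosen only to illustrate the calculation.

    ```python
    def row_shift_rate_hz(angular_rate_arcsec_s, plate_scale_arcsec_px):
        """Rows per second at which CCD charge must be clocked so that charge packets
        track the satellite image across the chip (the rate-matching described above)."""
        return angular_rate_arcsec_s / plate_scale_arcsec_px

    # Illustrative (assumed) numbers: apparent image motion of 15 arcsec/s for a
    # geosynchronous satellite in a star-tracking telescope, at 1.5 arcsec per pixel.
    print(row_shift_rate_hz(15.0, 1.5))   # 10 row shifts per second
    ```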

  11. MAPPING SPATIAL THEMATIC ACCURACY WITH FUZZY SETS

    EPA Science Inventory

    Thematic map accuracy is not spatially homogenous but variable across a landscape. Properly analyzing and representing spatial pattern and degree of thematic map accuracy would provide valuable information for using thematic maps. However, current thematic map accuracy measures (...

  12. Arizona Vegetation Resource Inventory (AVRI) accuracy assessment

    USGS Publications Warehouse

    Szajgin, John; Pettinger, L.R.; Linden, D.S.; Ohlen, D.O.

    1982-01-01

    A quantitative accuracy assessment was performed for the vegetation classification map produced as part of the Arizona Vegetation Resource Inventory (AVRI) project. This project was a cooperative effort between the Bureau of Land Management (BLM) and the Earth Resources Observation Systems (EROS) Data Center. The objective of the accuracy assessment was to estimate (with a precision of ±10 percent at the 90 percent confidence level) the commission error in each of the eight level II hierarchical vegetation cover types. A stratified two-phase (double) cluster sample was used. Phase I consisted of 160 photointerpreted plots representing clusters of Landsat pixels, and phase II consisted of ground data collection at 80 of the phase I cluster sites. Ground data were used to refine the phase I error estimates by means of a linear regression model. The classified image was stratified by assigning each 15-pixel cluster to the stratum corresponding to the dominant cover type within each cluster. This method is known as stratified plurality sampling. Overall error was estimated to be 36 percent with a standard error of 2 percent. Estimated error for individual vegetation classes ranged from a low of 10 percent ±6 percent for evergreen woodland to 81 percent ±7 percent for cropland and pasture. Total cost of the accuracy assessment was $106,950 for the one-million-hectare study area. The combination of the stratified plurality sampling (SPS) method of sample allocation with double sampling provided the desired estimates within the required precision levels. The overall accuracy results confirmed that highly accurate digital classification of vegetation is difficult to perform in semiarid environments, due largely to the sparse vegetation cover. Nevertheless, these techniques show promise for providing more accurate information than is presently available for many BLM-administered lands.
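
    A minimal sketch of the per-class commission error and a simple binomial standard error is given below; the 3-class error matrix is invented for illustration, and the simple standard error ignores the two-phase cluster design and regression adjustment used in the AVRI assessment.

    ```python
    import numpy as np

    def commission_error(confusion, mapped_class):
        """Commission error for one mapped class from a (mapped x reference) error matrix:
        the fraction of samples mapped to the class whose reference label disagrees."""
        row = np.asarray(confusion, dtype=float)[mapped_class]
        return 1.0 - row[mapped_class] / row.sum()

    def simple_standard_error(p, n):
        """Binomial standard error of an error proportion; the AVRI study's two-phase
        cluster design with regression adjustment would give a different (larger) value."""
        return np.sqrt(p * (1.0 - p) / n)

    # Toy 3-class matrix, rows = mapped class, columns = reference class (sample counts).
    cm = [[40, 5, 5],
          [8, 30, 2],
          [10, 4, 26]]
    p = commission_error(cm, 0)
    print(p, simple_standard_error(p, sum(cm[0])))
    ```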

  13. Assessing and ensuring GOES-R magnetometer accuracy

    NASA Astrophysics Data System (ADS)

    Carter, Delano; Todirita, Monica; Kronenwetter, Jeffrey; Dahya, Melissa; Chu, Donald

    2016-05-01

    The GOES-R magnetometer subsystem accuracy requirement is 1.7 nanoteslas (nT). During quiet times (100 nT), accuracy is defined as the absolute mean plus 3 sigma error per axis. During storms (300 nT), accuracy is defined as the absolute mean plus 2 sigma error per axis. Error comes both from outside the magnetometers, e.g., spacecraft fields and misalignments, and from inside, e.g., zero offset and scale factor errors. Because zero offset and scale factor drift over time, it will be necessary to perform annual calibration maneuvers. To predict performance before launch, we have used Monte Carlo simulations and covariance analysis. With the proposed calibration regimen, both suggest that the magnetometer subsystem will meet its accuracy requirements.
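
    The accuracy metric quoted above (absolute mean plus k sigma per axis) can be evaluated directly from Monte Carlo error samples. The sketch below assumes a hypothetical 0.2 nT bias and 0.4 nT noise; it is not the GOES-R error budget.

    ```python
    import numpy as np

    def per_axis_accuracy(errors_nt, k_sigma):
        """GOES-R style accuracy metric: absolute mean plus k standard deviations, per axis.
        k_sigma = 3 for quiet conditions, 2 for storm conditions, per the abstract."""
        errors_nt = np.asarray(errors_nt, dtype=float)
        return abs(errors_nt.mean()) + k_sigma * errors_nt.std(ddof=1)

    # Monte Carlo sketch with assumed error statistics (0.2 nT bias, 0.4 nT noise per axis).
    rng = np.random.default_rng(0)
    samples = rng.normal(loc=0.2, scale=0.4, size=10000)
    print(per_axis_accuracy(samples, k_sigma=3))   # compare against the 1.7 nT requirement
    ```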

  14. Accuracy Assessment of Coastal Topography Derived from Uav Images

    NASA Astrophysics Data System (ADS)

    Long, N.; Millescamps, B.; Pouget, F.; Dumon, A.; Lachaussée, N.; Bertin, X.

    2016-06-01

    To monitor coastal environments, an Unmanned Aerial Vehicle (UAV) is a low-cost and easy-to-use solution enabling data acquisition with high temporal frequency and spatial resolution. Compared to Light Detection And Ranging (LiDAR) or Terrestrial Laser Scanning (TLS), this solution produces a Digital Surface Model (DSM) with similar accuracy. To evaluate the DSM accuracy in a coastal environment, a campaign was carried out with a flying wing (eBee) combined with a digital camera. Using the Photoscan software and the photogrammetry process (Structure From Motion algorithm), a DSM and an orthomosaic were produced. The DSM accuracy is estimated by comparison with GNSS surveys. Two parameters are tested: the influence of the methodology (number and distribution of Ground Control Points, GCPs) and the influence of spatial image resolution (4.6 cm vs 2 cm). The results show that this solution is able to reproduce the topography of a coastal area with high vertical accuracy (< 10 cm). The georeferencing of the DSM requires a homogeneous distribution and a large number of GCPs. The accuracy is correlated with the number of GCPs (using 19 GCPs instead of 10 reduces the difference by 4 cm); the required accuracy should depend on the research question. Last, in this particular environment, the presence of very small water surfaces on the sand bank does not allow the accuracy to be improved when the spatial resolution of the images is decreased.

  15. Accuracy of remotely sensed data: Sampling and analysis procedures

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Oderwald, R. G.; Mead, R. A.

    1982-01-01

    A review and update of the discrete multivariate analysis techniques used for accuracy assessment is given. A listing of the computer program written to implement these techniques is given. New work on evaluating accuracy assessment using Monte Carlo simulation with different sampling schemes is given. The results of matrices from the mapping effort of the San Juan National Forest is given. A method for estimating the sample size requirements for implementing the accuracy assessment procedures is given. A proposed method for determining the reliability of change detection between two maps of the same area produced at different times is given.
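
    A common pair of discrete multivariate accuracy statistics in this literature are overall accuracy and the kappa (KHAT) coefficient computed from an error matrix; a minimal sketch with an invented 3-class matrix is shown below.

    ```python
    import numpy as np

    def overall_accuracy(confusion):
        """Fraction of correctly classified samples (trace over total)."""
        cm = np.asarray(confusion, dtype=float)
        return np.trace(cm) / cm.sum()

    def kappa(confusion):
        """Cohen's kappa (KHAT), the agreement statistic commonly used in
        thematic-map accuracy assessment; corrects for chance agreement."""
        cm = np.asarray(confusion, dtype=float)
        n = cm.sum()
        p_observed = np.trace(cm) / n
        p_expected = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2
        return (p_observed - p_expected) / (1.0 - p_expected)

    # Invented 3-class error matrix (rows = map, columns = reference).
    cm = [[65, 4, 2],
          [6, 81, 5],
          [0, 11, 85]]
    print(overall_accuracy(cm), kappa(cm))
    ```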

  16. Air traffic control surveillance accuracy and update rate study

    NASA Technical Reports Server (NTRS)

    Craigie, J. H.; Morrison, D. D.; Zipper, I.

    1973-01-01

    The results of an air traffic control surveillance accuracy and update rate study are presented. The objective of the study was to establish quantitative relationships between the surveillance accuracies, update rates, and the communication load associated with the tactical control of aircraft for conflict resolution. The relationships are established for typical types of aircraft, phases of flight, and types of airspace. Specific cases are analyzed to determine the surveillance accuracies and update rates required to prevent two aircraft from approaching each other too closely.

  17. ACCURACY LIMITATIONS IN LONG TRACE PROFILOMETRY.

    SciTech Connect

    TAKACS,P.Z.; QIAN,S.

    2003-08-25

    As requirements for surface slope error quality of grazing incidence optics approach the 100 nanoradian level, it is necessary to improve the performance of the measuring instruments to achieve accurate and repeatable results at this level. We have identified a number of internal error sources in the Long Trace Profiler (LTP) that affect measurement quality at this level. The LTP is sensitive to phase shifts produced within the millimeter diameter of the pencil beam probe by optical path irregularities with scale lengths of a fraction of a millimeter. We examine the effects of mirror surface ''macroroughness'' and internal glass homogeneity on the accuracy of the LTP through experiment and theoretical modeling. We will place limits on the allowable surface ''macroroughness'' and glass homogeneity required to achieve accurate measurements in the nanoradian range.

  18. Accuracy Limitations in Long-Trace Profilometry

    SciTech Connect

    Takacs, Peter Z.; Qian Shinan

    2004-05-12

    As requirements for surface slope error quality of grazing incidence optics approach the 100 nanoradian level, it is necessary to improve the performance of the measuring instruments to achieve accurate and repeatable results at this level. We have identified a number of internal error sources in the Long Trace Profiler (LTP) that affect measurement quality at this level. The LTP is sensitive to phase shifts produced within the millimeter diameter of the pencil beam probe by optical path irregularities with scale lengths of a fraction of a millimeter. We examine the effects of mirror surface 'macroroughness' and internal glass homogeneity on the accuracy of the LTP through experiment and theoretical modeling. We will place limits on the allowable surface 'macroroughness' and glass homogeneity required to achieve accurate measurements in the nanoradian range.

  19. 40 CFR 53.53 - Test for flow rate accuracy, regulation, measurement accuracy, and cut-off.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... recording at intervals not to exceed 5 minutes. (4) Flow measurement adaptor (40 CFR part 50, appendix L.... (7) Teflon sample filter, as specified in section 6 of 40 CFR part 50, appendix L (if required). (d... calibration, certification of calibration accuracy, and NIST-traceability (if required) of all...

  20. A high accuracy sun sensor

    NASA Astrophysics Data System (ADS)

    Bokhove, H.

    The High Accuracy Sun Sensor (HASS) is described, concentrating on measurement principle, the CCD detector used, the construction of the sensorhead and the operation of the sensor electronics. Tests on a development model show that the main aim of a 0.01-arcsec rms stability over a 10-minute period is closely approached. Remaining problem areas are associated with the sensor sensitivity to illumination level variations, the shielding of the detector, and the test and calibration equipment.

  1. Insensitivity of the octahedral spherical hohlraum to power imbalance, pointing accuracy, and assemblage accuracy

    SciTech Connect

    Huo, Wen Yi; Zhao, Yiqing; Zheng, Wudi; Liu, Jie; Lan, Ke

    2014-11-15

    The random radiation asymmetry in the octahedral spherical hohlraum [K. Lan et al., Phys. Plasmas 21, 010704 (2014)] arising from the power imbalance and pointing accuracy of the laser quads and from the assemblage accuracy of the capsule is investigated by using a 3-dimensional view factor model. From our study, for the spherical hohlraum, the random radiation asymmetry arising from the power imbalance of the laser quads is about half of that in the cylindrical hohlraum; the random asymmetry arising from the pointing error is about one order lower than that in the cylindrical hohlraum; and the random asymmetry arising from the assemblage error of the capsule is about one third of that in the cylindrical hohlraum. Moreover, the random radiation asymmetry in the spherical hohlraum is also less than that in the elliptical hohlraum. The results indicate that the spherical hohlraum is more insensitive to these random variations than the cylindrical and elliptical hohlraums. Hence, the spherical hohlraum can relax the requirements on the power imbalance and pointing accuracy of the laser facility and on the assemblage accuracy of the capsule.

  2. Municipal water consumption forecast accuracy

    NASA Astrophysics Data System (ADS)

    Fullerton, Thomas M.; Molina, Angel L.

    2010-06-01

    Municipal water consumption planning is an active area of research because of infrastructure construction and maintenance costs, supply constraints, and water quality assurance. In spite of that, relatively few water forecast accuracy assessments have been completed to date, although some internal documentation may exist as part of the proprietary "grey literature." This study utilizes a data set of previously published municipal consumption forecasts to partially fill that gap in the empirical water economics literature. Previously published municipal water econometric forecasts for three public utilities are examined for predictive accuracy against two random walk benchmarks commonly used in regional analyses. Descriptive metrics used to quantify forecast accuracy include root-mean-square error and Theil inequality statistics. Formal statistical assessments are completed using four-pronged error differential regression F tests. Similar to studies for other metropolitan econometric forecasts in areas with similar demographic and labor market characteristics, model predictive performances for the municipal water aggregates in this effort are mixed for each of the municipalities included in the sample. Given the competitiveness of the benchmarks, analysts should employ care when utilizing econometric forecasts of municipal water consumption for planning purposes, comparing them to recent historical observations and trends to ensure reliability. Comparative results using data from other markets, including regions facing differing labor and demographic conditions, would also be helpful.
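
    The descriptive metrics named above are simple to compute. The sketch below evaluates RMSE and Theil's U1 inequality coefficient for a no-change random walk benchmark; the consumption series is invented, and the formal error-differential F tests used in the study are not reproduced.

    ```python
    import numpy as np

    def rmse(forecast, actual):
        """Root-mean-square forecast error."""
        forecast, actual = np.asarray(forecast, float), np.asarray(actual, float)
        return np.sqrt(np.mean((forecast - actual) ** 2))

    def theil_u1(forecast, actual):
        """Theil's U1 inequality coefficient (0 = perfect forecast, 1 = worst case)."""
        forecast, actual = np.asarray(forecast, float), np.asarray(actual, float)
        numerator = np.sqrt(np.mean((forecast - actual) ** 2))
        denominator = np.sqrt(np.mean(forecast ** 2)) + np.sqrt(np.mean(actual ** 2))
        return numerator / denominator

    # Invented annual consumption series and a no-change random walk benchmark
    # (this period's observed value forecasts the next period).
    actual = np.array([102.0, 104.5, 101.0, 107.2, 110.3, 108.9])
    random_walk = np.roll(actual, 1)[1:]
    print(rmse(random_walk, actual[1:]), theil_u1(random_walk, actual[1:]))
    ```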

  3. 40 CFR 1066.290 - Verification of speed accuracy for the driver's aid.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Verification of speed accuracy for the... Verification of speed accuracy for the driver's aid. Use good engineering judgment to provide a driver's aid that facilitates compliance with the requirements of § 1066.425. Verify the speed accuracy of...

  4. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principle Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  5. [True color accuracy in digital forensic photography].

    PubMed

    Ramsthaler, Frank; Birngruber, Christoph G; Kröll, Ann-Katrin; Kettner, Mattias; Verhoff, Marcel A

    2016-01-01

    Forensic photographs not only need to be unaltered and authentic and capture context-relevant images, along with certain minimum requirements for image sharpness and information density, but color accuracy also plays an important role, for instance, in the assessment of injuries or taphonomic stages, or in the identification and evaluation of traces from photos. The perception of color not only varies subjectively from person to person, but as a discrete property of an image, color in digital photos is also to a considerable extent influenced by technical factors such as lighting, acquisition settings, camera, and output medium (print, monitor). For these reasons, consistent color accuracy has so far been limited in digital photography. Because images usually contain a wealth of color information, especially for complex or composite colors or shades of color, and the wavelength-dependent sensitivity to factors such as light and shadow may vary between cameras, the usefulness of issuing general recommendations for camera capture settings is limited. Our results indicate that true image colors can best and most realistically be captured with the SpyderCheckr technical calibration tool for digital cameras tested in this study. Apart from aspects such as the simplicity and quickness of the calibration procedure, a further advantage of the tool is that the results are independent of the camera used and can also be used for the color management of output devices such as monitors and printers. The SpyderCheckr color-code patches allow true colors to be captured more realistically than with a manual white balance tool or an automatic flash. We therefore recommend that the use of a color management tool should be considered for the acquisition of all images that demand high true color accuracy (in particular in the setting of injury documentation). PMID:27386623

  6. Accuracy Assessment of Altimeter Derived Geostrophic Velocities

    NASA Astrophysics Data System (ADS)

    Leben, R. R.; Powell, B. S.; Born, G. H.; Guinasso, N. L.

    2002-12-01

    Along track sea surface height anomaly gradients are proportional to cross track geostrophic velocity anomalies allowing satellite altimetry to provide much needed satellite observations of changes in the geostrophic component of surface ocean currents. Often, surface height gradients are computed from altimeter data archives that have been corrected to give the most accurate absolute sea level, a practice that may unnecessarily increase the error in the cross track velocity anomalies and thereby require excessive smoothing to mitigate noise. Because differentiation along track acts as a high-pass filter, many of the path length corrections applied to altimeter data for absolute height accuracy are unnecessary for the corresponding gradient calculations. We report on a study to investigate appropriate altimetric corrections and processing techniques for improving geostrophic velocity accuracy. Accuracy is assessed by comparing cross track current measurements from two moorings placed along the descending TOPEX/POSEIDON ground track number 52 in the Gulf of Mexico to the corresponding altimeter velocity estimates. The buoys are deployed and maintained by the Texas Automated Buoy System (TABS) under Interagency Contracts with Texas A&M University. The buoys telemeter observations in near real-time via satellite to the TABS station located at the Geochemical and Environmental Research Group (GERG) at Texas A&M. Buoy M is located in shelf waters of 57 m depth with a second, Buoy N, 38 km away on the shelf break at 105 m depth. Buoy N has been operational since the beginning of 2002 and has a current meter at 2m depth providing in situ measurements of surface velocities coincident with Jason and TOPEX/POSEIDON altimeter over flights. This allows one of the first detailed comparisons of shallow water near surface current meter time series to coincident altimetry.
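
    The relation underlying the comparison is the geostrophic balance v = (g/f) dη/dx applied along track. The sketch below assumes an invented SSH anomaly profile at roughly 7 km spacing near 28°N; it is not the TOPEX/POSEIDON processing chain.

    ```python
    import numpy as np

    def cross_track_velocity(ssh_anomaly_m, along_track_dist_m, lat_deg):
        """Cross-track geostrophic velocity anomaly from along-track SSH anomaly gradients:
        v = (g / f) * d(eta)/dx, with f the Coriolis parameter at the track latitude."""
        g = 9.81                                     # gravitational acceleration, m/s^2
        omega = 7.2921e-5                            # Earth's rotation rate, rad/s
        f = 2.0 * omega * np.sin(np.radians(lat_deg))
        d_eta = np.gradient(np.asarray(ssh_anomaly_m, float),
                            np.asarray(along_track_dist_m, float))
        return (g / f) * d_eta

    # Illustrative track segment (assumed values, ~7 km along-track spacing, 28 deg N).
    x = np.arange(0.0, 70e3, 7e3)
    eta = 0.10 * np.sin(2.0 * np.pi * x / 140e3)
    print(cross_track_velocity(eta, x, lat_deg=28.0))
    ```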

  7. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units

    PubMed Central

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-01-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10−6°/h or better. However, existing calibration methods and devices can not satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation can be improved about 8% by the proposed calibration method. The accuracy can be improved at least 20% when the position accuracy of the atomic gyro INS can reach a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has a great application potential in future atomic gyro INSs. PMID:27338408

  8. Systematic Calibration for Ultra-High Accuracy Inertial Measurement Units.

    PubMed

    Cai, Qingzhong; Yang, Gongliu; Song, Ningfang; Liu, Yiliang

    2016-01-01

    An inertial navigation system (INS) has been widely used in challenging GPS environments. With the rapid development of modern physics, an atomic gyroscope will come into use in the near future with a predicted accuracy of 5 × 10(-6)°/h or better. However, existing calibration methods and devices can not satisfy the accuracy requirements of future ultra-high accuracy inertial sensors. In this paper, an improved calibration model is established by introducing gyro g-sensitivity errors, accelerometer cross-coupling errors and lever arm errors. A systematic calibration method is proposed based on a 51-state Kalman filter and smoother. Simulation results show that the proposed calibration method can realize the estimation of all the parameters using a common dual-axis turntable. Laboratory and sailing tests prove that the position accuracy in a five-day inertial navigation can be improved about 8% by the proposed calibration method. The accuracy can be improved at least 20% when the position accuracy of the atomic gyro INS can reach a level of 0.1 nautical miles/5 d. Compared with the existing calibration methods, the proposed method, with more error sources and high order small error parameters calibrated for ultra-high accuracy inertial measurement units (IMUs) using common turntables, has a great application potential in future atomic gyro INSs. PMID:27338408

  9. Numerical accuracy of mean-field calculations in coordinate space

    NASA Astrophysics Data System (ADS)

    Ryssens, W.; Heenen, P.-H.; Bender, M.

    2015-12-01

    Background: Mean-field methods based on an energy density functional (EDF) are powerful tools used to describe many properties of nuclei in the entirety of the nuclear chart. The accuracy required of energies for nuclear physics and astrophysics applications is of the order of 500 keV and much effort is undertaken to build EDFs that meet this requirement. Purpose: Mean-field calculations have to be accurate enough to preserve the accuracy of the EDF. We study this numerical accuracy in detail for a specific numerical choice of representation for mean-field equations that can accommodate any kind of symmetry breaking. Method: The method that we use is a particular implementation of three-dimensional mesh calculations. Its numerical accuracy is governed by three main factors: the size of the box in which the nucleus is confined, the way numerical derivatives are calculated, and the distance between the points on the mesh. Results: We examine the dependence of the results on these three factors for spherical doubly magic nuclei, neutron-rich 34Ne , the fission barrier of 240Pu , and isotopic chains around Z =50 . Conclusions: Mesh calculations offer the user extensive control over the numerical accuracy of the solution scheme. When appropriate choices for the numerical scheme are made the achievable accuracy is well below the model uncertainties of mean-field methods.

  10. High accuracy time transfer synchronization

    NASA Technical Reports Server (NTRS)

    Wheeler, Paul J.; Koppang, Paul A.; Chalmers, David; Davis, Angela; Kubik, Anthony; Powell, William M.

    1995-01-01

    In July 1994, the U.S. Naval Observatory (USNO) Time Service System Engineering Division conducted a field test to establish a baseline accuracy for two-way satellite time transfer synchronization. Three Hewlett-Packard model 5071 high performance cesium frequency standards were transported from the USNO in Washington, DC to Los Angeles, California in the USNO's mobile earth station. Two-Way Satellite Time Transfer links between the mobile earth station and the USNO were conducted each day of the trip, using the Naval Research Laboratory (NRL)-designed spread spectrum modem, built by Allen Osborne Associates (AOA). A Motorola six channel GPS receiver was used to track the location and altitude of the mobile earth station and to provide coordinates for calculating Sagnac corrections for the two-way measurements, and relativistic corrections for the cesium clocks. This paper will discuss the trip, the measurement systems used and the results from the data collected. We will show the accuracy of using two-way satellite time transfer for synchronization and the performance of the three HP 5071 cesium clocks in an operational environment.

  11. Knowledge discovery by accuracy maximization

    PubMed Central

    Cacciatore, Stefano; Luchinat, Claudio; Tenori, Leonardo

    2014-01-01

    Here we describe KODAMA (knowledge discovery by accuracy maximization), an unsupervised and semisupervised learning algorithm that performs feature extraction from noisy and high-dimensional data. Unlike other data mining methods, the peculiarity of KODAMA is that it is driven by an integrated procedure of cross-validation of the results. The discovery of a local manifold’s topology is led by a classifier through a Monte Carlo procedure of maximization of cross-validated predictive accuracy. Briefly, our approach differs from previous methods in that it has an integrated procedure of validation of the results. In this way, the method ensures the highest robustness of the obtained solution. This robustness is demonstrated on experimental datasets of gene expression and metabolomics, where KODAMA compares favorably with other existing feature extraction methods. KODAMA is then applied to an astronomical dataset, revealing unexpected features. Interesting and not easily predictable features are also found in the analysis of the State of the Union speeches by American presidents: KODAMA reveals an abrupt linguistic transition sharply separating all post-Reagan from all pre-Reagan speeches. The transition occurs during Reagan’s presidency and not from its beginning. PMID:24706821

  12. Accuracy of analyses of microelectronics nanostructures in atom probe tomography

    NASA Astrophysics Data System (ADS)

    Vurpillot, F.; Rolland, N.; Estivill, R.; Duguay, S.; Blavette, D.

    2016-07-01

    The routine use of atom probe tomography (APT) as a nano-analysis microscope in the semiconductor industry requires the precise evaluation of the metrological parameters of this instrument (spatial accuracy, spatial precision, composition accuracy or composition precision). The spatial accuracy of this microscope is evaluated in this paper in the analysis of planar structures such as high-k metal gate stacks. It is shown both experimentally and theoretically that the in-depth accuracy of reconstructed APT images is perturbed when analyzing this structure composed of an oxide layer of high electrical permittivity (high-k dielectric constant) that separates the metal gate and the semiconductor channel of a field effect transistor. Large differences in the evaporation field between these layers (resulting from large differences in material properties) are the main sources of image distortions. An analytic model is used to interpret inaccuracy in the depth reconstruction of these devices in APT.

  13. Measuring the Accuracy of Diagnostic Systems

    NASA Astrophysics Data System (ADS)

    Swets, John A.

    1988-06-01

    Diagnostic systems of several kinds are used to distinguish between two classes of events, essentially ``signals'' and ``noise.'' For them, analysis in terms of the ``relative operating characteristic'' of signal detection theory provides a precise and valid measure of diagnostic accuracy. It is the only measure available that is uninfluenced by decision biases and prior probabilities, and it places the performances of diverse systems on a common, easily interpreted scale. Representative values of this measure are reported here for systems in medical imaging, materials testing, weather forecasting, information retrieval, polygraph lie detection, and aptitude testing. Though the measure itself is sound, the values obtained from tests of diagnostic systems often require qualification because the test data on which they are based are of unsure quality. A common set of problems in testing is faced in all fields. How well these problems are handled, or can be handled in a given field, determines the degree of confidence that can be placed in a measured value of accuracy. Some fields fare much better than others.
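
    The area under the relative operating characteristic can be computed directly from rated cases as the probability that a randomly chosen signal case outscores a randomly chosen noise case. The scores in the sketch below are invented.

    ```python
    import numpy as np

    def roc_auc(scores_signal, scores_noise):
        """Area under the relative operating characteristic, computed as the
        Mann-Whitney probability that a randomly chosen 'signal' case scores
        higher than a randomly chosen 'noise' case (ties count one half)."""
        s = np.asarray(scores_signal, float)[:, None]
        n = np.asarray(scores_noise, float)[None, :]
        greater = (s > n).mean()
        ties = (s == n).mean()
        return greater + 0.5 * ties

    # Toy diagnostic ratings for "signal" (e.g. diseased) and "noise" (healthy) cases.
    print(roc_auc([0.9, 0.8, 0.75, 0.6], [0.7, 0.5, 0.4, 0.3, 0.2]))
    ```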

  14. New analytical algorithm for overlay accuracy

    NASA Astrophysics Data System (ADS)

    Ham, Boo-Hyun; Yun, Sangho; Kwak, Min-Cheol; Ha, Soon Mok; Kim, Cheol-Hong; Nam, Suk-Woo

    2012-03-01

    The extension of optical lithography to 2Xnm and beyond is often challenged by overlay control. With a reduced overlay measurement error budget in the sub-nm range, conventional Total Measurement Uncertainty (TMU) data is no longer sufficient, and there is no sufficient criterion for overlay accuracy. In recent years, numerous authors have reported new methods for assessing the accuracy of overlay metrology: through focus and through color. Still, quantifying uncertainty in overlay measurement remains the most difficult work in overlay metrology. According to the ITRS roadmap, the total overlay budget gets tighter at each device node as design rules shrink. Conventionally, the total overlay budget is defined as the square root of the sum of squares of the following contributions: the scanner overlay performance, the wafer process, metrology, and mask registration. All components have been supplied with sufficiently performing tools at each device node, with new scanners, new metrology tools, and new mask e-beam writers delivered. In particular, the scanner overlay performance decreased drastically from 9 nm at the 8x node to 2.5 nm at the 3x node, and appears to be reaching its limit beyond the 3x node. The wafer process overlay has therefore become a more important contribution to total wafer overlay; in fact, it decreased by 3 nm between the DRAM 8x node and the DRAM 3x node. In this paper, the authors propose an analytical algorithm for overlay accuracy and a concept of a non-destructive method. For on-product layers, we discovered overlay inaccuracy and used the new technique to find the source of the overlay error.

  15. Accuracy evaluation of residual stress measurements

    SciTech Connect

    Yerman, J.A.; Kroenke, W.C.; Long, W.H.

    1996-05-01

    The accuracy of residual stress measurement techniques is difficult to assess due to the lack of available reference standards. To satisfy the need for reference standards, two specimens were designed and developed to provide known stress magnitudes and distributions: one with a uniform stress distribution and one with a nonuniform linear stress distribution. A reusable, portable load fixture was developed for use with each of the two specimens. Extensive bench testing was performed to determine if the specimens provide desired known stress magnitudes and distributions and stability of the known stress with time. The testing indicated that the nonuniform linear specimen and load fixture provided the desired known stress magnitude and distribution but that modifications were required for the uniform stress specimen. A trial use of the specimens and load fixtures using hole drilling was successful.

  16. PHAT: PHoto-z Accuracy Testing

    NASA Astrophysics Data System (ADS)

    Hildebrandt, H.; Arnouts, S.; Capak, P.; Moustakas, L. A.; Wolf, C.; Abdalla, F. B.; Assef, R. J.; Banerji, M.; Benítez, N.; Brammer, G. B.; Budavári, T.; Carliles, S.; Coe, D.; Dahlen, T.; Feldmann, R.; Gerdes, D.; Gillis, B.; Ilbert, O.; Kotulla, R.; Lahav, O.; Li, I. H.; Miralles, J.-M.; Purger, N.; Schmidt, S.; Singal, J.

    2010-11-01

    Context. Photometric redshifts (photo-z's) have become an essential tool in extragalactic astronomy. Many current and upcoming observing programmes require great accuracy of photo-z's to reach their scientific goals. Aims: Here we introduce PHAT, the PHoto-z Accuracy Testing programme, an international initiative to test and compare different methods of photo-z estimation. Methods: Two different test environments are set up, one (PHAT0) based on simulations to test the basic functionality of the different photo-z codes, and another one (PHAT1) based on data from the GOODS survey including 18-band photometry and ~2000 spectroscopic redshifts. Results: The accuracy of the different methods is expressed and ranked by the global photo-z bias, scatter, and outlier rates. While most methods agree very well on PHAT0 there are differences in the handling of the Lyman-α forest for higher redshifts. Furthermore, different methods produce photo-z scatters that can differ by up to a factor of two even in this idealised case. A larger spread in accuracy is found for PHAT1. Few methods benefit from the addition of mid-IR photometry. The accuracy of the other methods is unaffected or suffers when IRAC data are included. Remaining biases and systematic effects can be explained by shortcomings in the different template sets (especially in the mid-IR) and the use of priors on the one hand and an insufficient training set on the other hand. Some strategies to overcome these problems are identified by comparing the methods in detail. Scatters of 4-8% in Δz/(1+z) were obtained, consistent with other studies. However, somewhat larger outlier rates (>7.5% with Δz/(1+z)>0.15; >4.5% after cleaning) are found for all codes that can only partly be explained by AGN or issues in the photometry or the spec-z catalogue. Some outliers were probably missed in comparisons of photo-z's to other, less complete spectroscopic surveys in the past. There is a general trend that empirical codes produce
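
    The ranking statistics used in PHAT (bias, scatter, and outlier rate of Δz/(1+z)) are simple to reproduce; the redshift pairs in the sketch below are invented, and the 0.15 outlier cut follows the abstract.

    ```python
    import numpy as np

    def photoz_metrics(z_phot, z_spec, outlier_cut=0.15):
        """Bias, scatter, and outlier fraction of Delta z / (1 + z_spec), the statistics
        used to rank photo-z codes in the PHAT comparison."""
        z_phot, z_spec = np.asarray(z_phot, float), np.asarray(z_spec, float)
        dz = (z_phot - z_spec) / (1.0 + z_spec)
        bias = dz.mean()
        scatter = dz.std(ddof=1)
        outlier_rate = np.mean(np.abs(dz) > outlier_cut)
        return bias, scatter, outlier_rate

    # Invented photometric vs spectroscopic redshift pairs.
    print(photoz_metrics([0.51, 1.02, 2.10, 0.33, 3.50], [0.50, 1.00, 2.00, 0.30, 2.90]))
    ```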

  17. Accuracy of numerically produced compensators.

    PubMed

    Thompson, H; Evans, M D; Fallone, B G

    1999-01-01

    A feasibility study is performed to assess the utility of a computer numerically controlled (CNC) mill to produce compensating filters for conventional clinical use and for the delivery of intensity-modulated beams. A computer aided machining (CAM) software is used to assist in the design and construction of such filters. Geometric measurements of stepped and wedged surfaces are made to examine the accuracy of surface milling. Molds are milled and filled with molten alloy to produce filters, and both the molds and filters are examined for consistency and accuracy. Results show that the deviation of the filter surfaces from design does not exceed 1.5%. The effective attenuation coefficient is measured for CadFree, a cadmium-free alloy, in a 6 MV photon beam. The effective attenuation coefficients at the depth of maximum dose (1.5 cm) and at 10 cm in solid water phantom are found to be 0.546 cm-1 and 0.522 cm-1, respectively. Further attenuation measurements are made with Cerrobend to assess the variations of the effective attenuation coefficient with field size and source-surface distance. The ability of the CNC mill to accurately produce surfaces is verified with dose profile measurements in a 6 MV photon beam. The test phantom is composed of a 10 degrees polystyrene wedge and a 30 degrees polystyrene wedge, presenting both a sharp discontinuity and sloped surfaces. Dose profiles, measured at the depth of compensation (10 cm) beneath the test phantom and beneath a flat phantom, are compared to those produced by a commercial treatment planning system. Agreement between measured and predicted profiles is within 2%, indicating the viability of the system for filter production. PMID:10100166
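
    The effective attenuation coefficient reported above also fixes the filter thickness needed for a given transmission, via mu = ln(D_open/D_filtered)/t. The 80% transmission target in the sketch below is an arbitrary example; the 0.522 cm-1 value is the one quoted for CadFree at 10 cm depth.

    ```python
    import numpy as np

    def effective_attenuation(dose_open, dose_filtered, thickness_cm):
        """Effective linear attenuation coefficient (cm^-1) from open-field and
        filtered-field dose readings at the same point: mu = ln(D_open / D_filt) / t."""
        return np.log(dose_open / dose_filtered) / thickness_cm

    def filter_thickness(transmission, mu_cm):
        """Compensator thickness needed to achieve a desired transmission factor."""
        return -np.log(transmission) / mu_cm

    # Using the CadFree value quoted at 10 cm depth for a 6 MV beam (0.522 cm^-1).
    print(filter_thickness(transmission=0.80, mu_cm=0.522))   # ~0.43 cm of alloy
    ```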

  18. High Accuracy Wavelength Calibration For A Scanning Visible Spectrometer

    SciTech Connect

    Filippo Scotti and Ronald Bell

    2010-07-29

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤ 0.2 Å. An automated calibration for a scanning spectrometer has been developed to achieve high wavelength accuracy over the visible spectrum, stable over time and environmental conditions, without the need to recalibrate after each grating movement. The method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, accuracies of ~0.025 Å have been demonstrated. With the addition of a high resolution (0.075 arc sec) optical encoder on the grating stage, greater precision (~0.005 Å) is possible, allowing absolute velocity measurements within ~0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively.

  19. High accuracy wavelength calibration for a scanning visible spectrometer.

    PubMed

    Scotti, Filippo; Bell, Ronald E

    2010-10-01

    Spectroscopic applications for plasma velocity measurements often require wavelength accuracies ≤0.2 Å. An automated calibration, which is stable over time and environmental conditions without the need to recalibrate after each grating movement, was developed for a scanning spectrometer to achieve high wavelength accuracy over the visible spectrum. This method fits all relevant spectrometer parameters using multiple calibration spectra. With a stepping-motor controlled sine drive, an accuracy of ∼0.25 Å has been demonstrated. With the addition of a high resolution (0.075 arc  sec) optical encoder on the grating stage, greater precision (∼0.005 Å) is possible, allowing absolute velocity measurements within ∼0.3 km/s. This level of precision requires monitoring of atmospheric temperature and pressure and of grating bulk temperature to correct for changes in the refractive index of air and the groove density, respectively. PMID:21033925
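
    A sine drive makes wavelength nearly linear in screw position, so a low-order polynomial fit to calibration-lamp lines already indicates the achievable accuracy. The counter readings in the sketch below are invented, and the published method fits considerably more spectrometer parameters than this.

    ```python
    import numpy as np

    # Known calibration-lamp wavelengths (Angstrom) and the sine-drive counter readings
    # at which they were observed; the counter values are illustrative assumptions.
    lines_angstrom = np.array([4046.56, 4358.33, 5460.74, 5769.60, 6965.43])
    counter = np.array([1012.3, 1090.1, 1365.7, 1442.9, 1741.8])

    # Low-order polynomial calibration; the residuals (in Angstroms) indicate how well
    # this simple model alone can reproduce the reference lines.
    coeffs = np.polyfit(counter, lines_angstrom, deg=2)
    residuals = lines_angstrom - np.polyval(coeffs, counter)
    print(residuals)
    ```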

  20. Accuracy Improvement of Neutron Nuclear Data on Minor Actinides

    NASA Astrophysics Data System (ADS)

    Harada, Hideo; Iwamoto, Osamu; Iwamoto, Nobuyuki; Kimura, Atsushi; Terada, Kazushi; Nakao, Taro; Nakamura, Shoji; Mizuyama, Kazuhito; Igashira, Masayuki; Katabuchi, Tatsuya; Sano, Tadafumi; Takahashi, Yoshiyuki; Takamiya, Koichi; Pyeon, Cheol Ho; Fukutani, Satoshi; Fujii, Toshiyuki; Hori, Jun-ichi; Yagi, Takahiro; Yashima, Hiroshi

    2015-05-01

    Improvement of the accuracy of neutron nuclear data for minor actinides (MAs) and long-lived fission products (LLFPs) is required for developing innovative nuclear systems transmuting these nuclei. In order to meet the requirement, the project entitled "Research and development for Accuracy Improvement of neutron nuclear data on Minor ACtinides (AIMAC)" was started as part of the "Innovative Nuclear Research and Development Program" in Japan in October 2013. The AIMAC project team is composed of researchers in four different fields: differential nuclear data measurement, integral nuclear data measurement, nuclear chemistry, and nuclear data evaluation. By integrating all of the forefront knowledge and techniques in these fields, the team aims at improving the accuracy of the data. The background and research plan of the AIMAC project are presented.

  1. Voyager navigation strategy and accuracy

    NASA Technical Reports Server (NTRS)

    Jones, J. B.; Mcdanell, J. P.; Bantell, M. H., Jr.; Chadwick, C.; Jacobson, R. A.; Miller, L. J.; Synnott, S. P.; Van Allen, R. E.

    1977-01-01

    The paper presents the results of the prelaunch navigation studies conducted for the Mariner spacecraft launched toward encounters with the giant planets. The navigation system and the strategy for using this system are described. The requirements on the navigation system demanded by the goals of the project are mentioned, and the predicted navigational capability relative to each of the requirements is discussed. Baseline navigation results for three possible trajectories are analyzed.

  2. Machine tool accuracy characterization workshops. Final report, May 5, 1992--November 5 1993

    SciTech Connect

    1995-01-06

    The ability to assess the accuracy of machine tools is required by both tool builders and users. Builders must have this ability in order to predict the accuracy capability of a machine tool for different part geometries, to provide verifiable accuracy information for sales purposes, and to locate error sources for maintenance, troubleshooting, and design enhancement. Users require the same ability in order to make intelligent choices in selecting or procuring machine tools, to predict component manufacturing accuracy, and to perform maintenance and troubleshooting. In both instances, the ability to fully evaluate the accuracy capabilities of a machine tool and the source of its limitations is essential for using the tool to its maximum accuracy and productivity potential. This project was designed to transfer expertise in modern machine tool accuracy testing methods from LLNL to US industry, and to educate users on the use and application of emerging standards for machine tool performance testing.

  3. Data accuracy assessment using enterprise architecture

    NASA Astrophysics Data System (ADS)

    Närman, Per; Holm, Hannes; Johnson, Pontus; König, Johan; Chenine, Moustafa; Ekstedt, Mathias

    2011-02-01

    Errors in business processes result in poor data accuracy. This article proposes an architecture analysis method which utilises ArchiMate and the Probabilistic Relational Model formalism to model and analyse data accuracy. Since the resources available for architecture analysis are usually quite scarce, the method advocates interviews as the primary data collection technique. A case study demonstrates that the method yields correct data accuracy estimates and is more resource-efficient than a competing sampling-based data accuracy estimation method.

  4. 10 CFR 72.11 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 72.11 Section 72.11 Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR THE INDEPENDENT STORAGE OF SPENT NUCLEAR FUEL, HIGH-LEVEL RADIOACTIVE WASTE, AND REACTOR-RELATED GREATER THAN CLASS...

  5. Navigation Accuracy Guidelines for Orbital Formation Flying Missions

    NASA Technical Reports Server (NTRS)

    Carpenter, J. Russell; Alfriend, Kyle T.

    2003-01-01

    Some simple guidelines based on the accuracy in determining a satellite formation's semi-major axis differences are useful in making preliminary assessments of the navigation accuracy needed to support such missions. These guidelines are valid for any elliptical orbit, regardless of eccentricity. Although maneuvers required for formation establishment, reconfiguration, and station-keeping require accurate prediction of the state estimate to the maneuver time, and hence are directly affected by errors in all the orbital elements, experience has shown that determination of orbit plane orientation and orbit shape to acceptable levels is less challenging than the determination of orbital period or semi-major axis. Furthermore, any differences among the members' semi-major axes are undesirable for a satellite formation, since they will lead to differential along-track drift due to period differences. Since inevitable navigation errors prevent these differences from ever being zero, one may use the guidelines this paper presents to determine how much drift will result from a given relative navigation accuracy, or conversely what navigation accuracy is required to limit drift to a given rate. Since the guidelines do not account for non-two-body perturbations, they may be viewed as useful preliminary design tools, rather than as the basis for mission navigation requirements, which should be based on detailed analysis of the mission configuration, including all relevant sources of uncertainty.
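
    For a near-circular formation, the usual rule of thumb is that a semi-major axis difference of δa produces roughly 3πδa of along-track separation growth per orbit; the sketch below applies this approximation, which is narrower than the general elliptical-orbit guidelines the paper derives.

    ```python
    import math

    def along_track_drift_per_orbit_m(delta_a_m):
        """Approximate along-track separation growth per orbit caused by a semi-major
        axis difference delta_a in a near-circular formation: about 3*pi*delta_a.
        Shown only as an illustration; the paper covers general elliptical orbits."""
        return 3.0 * math.pi * delta_a_m

    def delta_a_for_drift_budget_m(drift_per_orbit_m):
        """Semi-major axis difference knowledge needed to keep along-track drift
        below a given per-orbit budget (the inverse of the relation above)."""
        return drift_per_orbit_m / (3.0 * math.pi)

    print(along_track_drift_per_orbit_m(10.0))     # ~94 m of drift per orbit for a 10 m error
    print(delta_a_for_drift_budget_m(100.0))       # ~10.6 m knowledge for a 100 m/orbit budget
    ```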

  6. A Nonparametric Approach to Estimate Classification Accuracy and Consistency

    ERIC Educational Resources Information Center

    Lathrop, Quinn N.; Cheng, Ying

    2014-01-01

    When cut scores for classifications occur on the total score scale, popular methods for estimating classification accuracy (CA) and classification consistency (CC) require assumptions about a parametric form of the test scores or about a parametric response model, such as item response theory (IRT). This article develops an approach to estimate CA…

  7. Preschoolers Monitor the Relative Accuracy of Informants

    ERIC Educational Resources Information Center

    Pasquini, Elisabeth S.; Corriveau, Kathleen H.; Koenig, Melissa; Harris, Paul L.

    2007-01-01

    In 2 studies, the sensitivity of 3- and 4-year-olds to the previous accuracy of informants was assessed. Children viewed films in which 2 informants labeled familiar objects with differential accuracy (across the 2 experiments, children were exposed to the following rates of accuracy by the more and less accurate informants, respectively: 100% vs.…

  8. Distinguishing Fast and Slow Processes in Accuracy - Response Time Data.

    PubMed

    Coomans, Frederik; Hofman, Abe; Brinkhuis, Matthieu; van der Maas, Han L J; Maris, Gunter

    2016-01-01

    We investigate the relation between speed and accuracy within problem solving in its simplest non-trivial form. We consider tests with only two items and code the item responses in two binary variables: one indicating the response accuracy, and one indicating the response speed. Despite being a very basic setup, it enables us to study item pairs stemming from a broad range of domains such as basic arithmetic, first language learning, intelligence-related problems, and chess, with large numbers of observations for every pair of problems under consideration. We carry out a survey over a large number of such item pairs and compare three types of psychometric accuracy-response time models present in the literature: two 'one-process' models, the first of which models accuracy and response time as conditionally independent and the second of which models accuracy and response time as conditionally dependent, and a 'two-process' model which models accuracy contingent on response time. We find that the data clearly violates the restrictions imposed by both one-process models and requires additional complexity which is parsimoniously provided by the two-process model. We supplement our survey with an analysis of the erroneous responses for an example item pair and demonstrate that there are very significant differences between the types of errors in fast and slow responses. PMID:27167518

  10. New Reconstruction Accuracy Metric for 3D PIV

    NASA Astrophysics Data System (ADS)

    Bajpayee, Abhishek; Techet, Alexandra

    2015-11-01

    Reconstruction for 3D PIV typically relies on recombining images captured from different viewpoints via multiple cameras/apertures. Ideally, the quality of reconstruction dictates the accuracy of the derived velocity field. A reconstruction quality parameter Q is commonly used as a measure of the accuracy of reconstruction algorithms. By definition, a high Q value requires intensity peak levels and shapes in the reconstructed and reference volumes to be matched. We show that accurate velocity fields rely only on the peak locations in the volumes and not on intensity peak levels and shapes. In synthetic aperture (SA) PIV reconstructions, the intensity peak shapes and heights vary with the number of cameras and due to spatial/temporal particle intensity variation respectively. This lowers Q but not the accuracy of the derived velocity field. We introduce a new velocity vector correlation factor Qv as a metric to assess the accuracy of 3D PIV techniques, which provides a better indication of algorithm accuracy. For SAPIV, the number of cameras required for a high Qv is lower than that for a high Q. We discuss Qv in the context of 3D PIV and also present a preliminary comparison of the performance of TomoPIV and SAPIV based on Qv.
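
    The exact definition of Qv is not given in the record above; the sketch below shows one plausible form, a normalized inner-product correlation between reconstructed and reference velocity fields, and should be read as an assumption rather than the authors' formula:

        import numpy as np

        def velocity_correlation(u_rec, u_ref):
            """Normalized correlation between two velocity fields of shape (..., 3).
            Returns 1.0 for identical fields; insensitive to intensity peak shapes."""
            num = np.sum(u_rec * u_ref)
            den = np.sqrt(np.sum(u_rec**2) * np.sum(u_ref**2))
            return num / den

        # Example with synthetic fields.
        rng = np.random.default_rng(0)
        u_ref = rng.normal(size=(32, 32, 32, 3))
        u_rec = u_ref + 0.1 * rng.normal(size=u_ref.shape)   # small reconstruction error
        print(velocity_correlation(u_rec, u_ref))            # close to 1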

  11. Astrophysics with Microarcsecond Accuracy Astrometry

    NASA Technical Reports Server (NTRS)

    Unwin, Stephen C.

    2008-01-01

    Space-based astrometry promises to provide a powerful new tool for astrophysics. At a precision level of a few microarcseconds, a wide range of phenomena are opened up for study. In this paper we discuss the capabilities of the SIM Lite mission, the first space-based long-baseline optical interferometer, which will deliver parallaxes to 4 microarcsec. A companion paper in this volume will cover the development and operation of this instrument. At the level that SIM Lite will reach, better than 1 microarcsec in a single measurement, planets as small as one Earth can be detected around many dozens of the nearest stars. Not only can planet masses be definitively measured, but also the full orbital parameters determined, allowing study of system stability in multiple planet systems. This capability to survey our nearby stellar neighbors for terrestrial planets will be a unique contribution to our understanding of the local universe. SIM Lite will be able to tackle a wide range of interesting problems in stellar and Galactic astrophysics. By tracing the motions of stars in dwarf spheroidal galaxies orbiting our Milky Way, SIM Lite will probe the shape of the galactic potential, the history of the formation of the galaxy, and the nature of dark matter. Because it is flexibly scheduled, the instrument can dwell on faint targets, maintaining its full accuracy on objects as faint as V=19. This paper is a brief survey of the diverse problems in modern astrophysics that SIM Lite will be able to address.

  12. High accuracy broadband infrared spectropolarimetry

    NASA Astrophysics Data System (ADS)

    Krishnaswamy, Venkataramanan

    Mueller matrix spectroscopy or Spectropolarimetry combines conventional spectroscopy with polarimetry, providing more information than can be gleaned from spectroscopy alone. Experimental studies on infrared polarization properties of materials covering a broad spectral range have been scarce due to the lack of available instrumentation. This dissertation aims to fill the gap by the design, development, calibration and testing of a broadband Fourier Transform Infra-Red (FT-IR) spectropolarimeter. The instrument operates over the 3-12 µm waveband and offers better overall accuracy compared to the previous generation instruments. Accurate calibration of a broadband spectropolarimeter is a non-trivial task due to the inherent complexity of the measurement process. An improved calibration technique is proposed for the spectropolarimeter and numerical simulations are conducted to study the effectiveness of the proposed technique. Insights into the geometrical structure of the polarimetric measurement matrix are provided to aid further research towards global optimization of Mueller matrix polarimeters. A high performance infrared wire-grid polarizer is characterized using the spectropolarimeter. Mueller matrix spectrum measurements on Penicillin and pine pollen are also presented.

  13. Ground Truth Sampling and LANDSAT Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Robinson, J. W.; Gunther, F. J.; Campbell, W. J.

    1982-01-01

    It is noted that the key factor in any accuracy assessment of remote sensing data is the method used for determining the ground truth, independent of the remote sensing data itself. The sampling and accuracy assessment procedures developed for a nuclear power plant siting study are described. The purpose of the sampling procedure was to provide data for developing supervised classifications for two study sites and for assessing the accuracy of those classifications and of the other procedures used. The purpose of the accuracy assessment was to allow the comparison of the cost and accuracy of various classification procedures as applied to various data types.

  14. Lung Ultrasound Diagnostic Accuracy in Neonatal Pneumothorax

    PubMed Central

    Copetti, Roberto

    2016-01-01

    Background. Pneumothorax (PTX) still remains a common cause of morbidity in critically ill and ventilated neonates. At the present time, lung ultrasound (LUS) is not included in the diagnostic work-up of PTX in newborns despite excellent evidence of reliability in adults. The aim of this study was to compare LUS, chest X-ray (CXR), and chest transillumination (CTR) for PTX diagnosis in a group of neonates in which the presence of air in the pleural space was confirmed. Methods. In a 36-month period, 49 neonates with respiratory distress were enrolled in the study. Twenty-three had PTX requiring aspiration or chest drainage (birth weight 2120 ± 1640 grams; gestational age = 36 ± 5 weeks), and 26 were suffering from respiratory distress without PTX (birth weight 2120 ± 1640 grams; gestational age = 34 ± 5 weeks). All neonates in both groups underwent LUS, CTR, and CXR. Results. LUS was consistent with PTX in all 23 patients requiring chest aspiration. In this group, CXR did not detect PTX in one patient while CTR did not detect it in 3 patients. Sensitivity and specificity in diagnosing PTX were therefore 1 for LUS, 0.96 and 1 for CXR, and 0.87 and 0.96 for CTR. Conclusions. Our results confirm that also in newborns LUS is at least as accurate as CXR in the diagnosis of PTX while CTR has a lower accuracy.
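
    The sensitivity and specificity figures quoted above follow directly from the reported counts (23 confirmed PTX cases, 26 controls); a short check, with the single CTR false positive among controls inferred from the reported 0.96 specificity rather than stated explicitly:

        def sensitivity(true_pos, false_neg):
            return true_pos / (true_pos + false_neg)

        def specificity(true_neg, false_pos):
            return true_neg / (true_neg + false_pos)

        # LUS detected all 23 PTX cases; CXR missed 1; CTR missed 3.
        print(round(sensitivity(23, 0), 2))   # 1.0  (LUS)
        print(round(sensitivity(22, 1), 2))   # 0.96 (CXR)
        print(round(sensitivity(20, 3), 2))   # 0.87 (CTR)
        # CTR specificity, assuming one false positive among the 26 controls.
        print(round(specificity(25, 1), 2))   # 0.96 (CTR)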

  15. Timing accuracy of the GEO 600 data acquisition system

    NASA Astrophysics Data System (ADS)

    Kötter, K.; Hewitson, M.; Ward, H.

    2004-03-01

    This paper describes the tests done for validating the timing accuracy of the GEO 600 data acquisition system. Correct time stamping of the recorded data is required for a number of search algorithms for gravitational wave signals (coincidence analysis, targeted pulsar searches, etc). Tests on the current system determined the absolute timing offset to be 15.89 µs with a standard deviation of 63 ns. Both offset and jitter were measured against an external reference clock. Additional analysis of data recorded during the S1 data taking run was done to validate the timing accuracy during this period.

  16. Tracking accuracy for Leosat-Geosat laser links

    NASA Astrophysics Data System (ADS)

    Seshamani, Ramani; Rao, D. V. B.; Alex, T. K.; Jain, Y. K.

    1989-06-01

    A tracking accuracy of 1 microrad is required for the achievement of Leosat-Geosat laser communications links, entailing exceptionally accurate alignment between transmitter and receiver as well as point-ahead capability. The pointing and acquisition procedure would require the two optical system/telescope units to be pointed toward each other with an attitude accuracy smaller than the position uncertainty; a spatial-scan operation by the Leosat's narrow beam, and subsequently by the Geosat's, would have to be conducted before acquisition is completed, allowing switching from acquisition to tracking mode.

  17. Submicron accuracy optimization for laser beam soldering processes

    NASA Astrophysics Data System (ADS)

    Beckert, Erik; Burkhardt, Thomas; Hornaff, Marcel; Kamm, Andreas; Scheidig, Ingo; Stiehl, Cornelia; Eberhardt, Ramona; Tünnermann, Andreas

    2010-02-01

    Laser beam soldering is a packaging technology alternative to polymeric adhesive bonding in terms of stability and functionality. Nevertheless, especially when packaging micro-optical and MOEMS systems, this technology has to fulfil stringent requirements for accuracy in the micron and submicron range. Investigating the assembly of several laser optical systems it has been shown that micron accuracy and submicron reproducibility can be reached when using design-of-experiment optimized solder processes that are based on applying liquid solder drops ("Solder Bumping") onto wettable metalized joining surfaces of optical components. The soldered assemblies were also subjected to thermal cycling and vibration/shock testing.

  18. Differential signal scatterometry overlay metrology: an accuracy investigation

    NASA Astrophysics Data System (ADS)

    Kandel, Daniel; Adel, Mike; Dinu, Berta; Golovanevsky, Boris; Izikson, Pavel; Levinski, Vladimir; Vakshtein, Irina; Leray, Philippe; Vasconi, Mauro; Salski, Bartlomiej

    2007-06-01

    The overlay control budget for the 32nm technology node will be 5.7nm according to the ITRS. The overlay metrology budget is typically 1/10 of the overlay control budget resulting in overlay metrology total measurement uncertainty (TMU) requirements of 0.57nm for the most challenging use cases of the 32nm node. The current state of the art imaging overlay metrology technology does not meet this strict requirement, and further technology development is required to bring it to this level. In this work we present results of a study of an alternative technology for overlay metrology - Differential signal scatterometry overlay (SCOL). Theoretical considerations show that overlay technology based on differential signal scatterometry has inherent advantages, which will allow it to achieve the 32nm technology node requirements and go beyond it. We present results of simulations of the expected accuracy associated with a variety of scatterometry overlay target designs. We also present our first experimental results of scatterometry overlay measurements, comparing this technology with the standard imaging overlay metrology technology. In particular, we present performance results (precision and tool induced shift) and address the issue of accuracy of scatterometry overlay. We show that with the appropriate target design and algorithms scatterometry overlay achieves the accuracy required for future technology nodes.

  19. Helmet-mounted display accuracy in the aircraft cockpit

    NASA Astrophysics Data System (ADS)

    Mulholland, Fred F.

    2002-08-01

    When a Helmet-Mounted Display (HMD) system is used in an aircraft cockpit, the usual intent is to overlay symbols or images in the display on their real-world object counterparts. The HMD system determines a pointing angle (in aircraft coordinates) to the real-world object. This pointing angle is sent to the Mission Computer (MC) for use by other aircraft systems and is used by the HMD to position symbology in the HMD image. The accuracy of the HMD is defined as the error of the pointing angle sent to the MC versus the real-world angle to the object. This error is usually given in terms of milli-radians (mrad). Note that having the symbol in the HMD image overlay the corresponding object in the real world does not necessarily ensure an accurate pointing angle. One example of HMD use is to position an aiming cross in the display over an aircraft in the sky. The pointing angle to that aircraft is sent via the MC to another sensor (radar, missile, targeting pod) which then locks onto that aircraft or object. The accuracy requirement is to get the other sensor pointed at an angle to detect the same aircraft. There are aircraft integration issues to ensure target acquisition, but these will not be covered in this paper. One component of the HMD system is a tracker system, and the tracker system's accuracy is often looked at as the HMD accuracy. However, the accuracy of the tracker system is only one piece of the total HMD system accuracy, and as trackers get better, they may not even be the largest error component. This paper identifies the various error components of the HMD system installed in the aircraft cockpit and discusses the techniques used for minimizing errors and improving accuracy.

  20. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in high efficiency machining of aviation components, and accuracy is one of the critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but in terms of controlling the errors and improving the accuracy in the stage of design and manufacturing, further efforts are required. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using the first-order perturbation theory and vector chain method. According to the mapping property of the motion and constraint Jacobian matrix, the compensatable and uncompensatable error sources which affect the accuracy in the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. The sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy in the end-effector of the mechanism. The results show that orientation error sources have a bigger effect on the accuracy in the end-effector. Based upon the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing the genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanisms.
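
    The paper's full error model is not reproduced here; the sketch below only illustrates the generic first-order idea it builds on, propagating small geometric error sources through an error Jacobian to the end-effector and ranking sources by a simple variance-share sensitivity measure. The Jacobian and tolerance values are placeholders, not the A3 head's actual model:

        import numpy as np

        # Placeholder error Jacobian: rows = end-effector error components,
        # columns = geometric error sources (values are illustrative only).
        J = np.array([[0.8, 0.1, 0.3],
                      [0.2, 0.9, 0.1],
                      [0.1, 0.2, 0.7]])

        def end_effector_error(J, delta_sources):
            """First-order propagation: end-effector error ~ J @ source errors."""
            return J @ delta_sources

        def sensitivity_indices(J, source_std):
            """Share of total end-effector variance contributed by each source."""
            per_source = (J**2).sum(axis=0) * source_std**2
            return per_source / per_source.sum()

        source_std = np.array([0.010, 0.020, 0.005])    # assumed source tolerances
        print(end_effector_error(J, source_std))
        print(sensitivity_indices(J, source_std))        # largest share = dominant source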

  1. Spacecraft attitude determination accuracy from mission experience

    NASA Technical Reports Server (NTRS)

    Brasoveanu, D.; Hashmall, J.; Baker, D.

    1994-01-01

    This document presents a compilation of the attitude accuracy attained by a number of satellites that have been supported by the Flight Dynamics Facility (FDF) at Goddard Space Flight Center (GSFC). It starts with a general description of the factors that influence spacecraft attitude accuracy. After brief descriptions of the missions supported, it presents the attitude accuracy results for currently active and older missions, including both three-axis stabilized and spin-stabilized spacecraft. The attitude accuracy results are grouped by the sensor pair used to determine the attitudes. A supplementary section is also included, containing the results of theoretical computations of the effects of variation of sensor accuracy on overall attitude accuracy.

  2. Tracking accuracy assessment for concentrator photovoltaic systems

    NASA Astrophysics Data System (ADS)

    Norton, Matthew S. H.; Anstey, Ben; Bentley, Roger W.; Georghiou, George E.

    2010-10-01

    The accuracy to which a concentrator photovoltaic (CPV) system can track the sun is an important parameter that influences a number of measurements that indicate the performance efficiency of the system. This paper presents work carried out into determining the tracking accuracy of a CPV system, and illustrates the steps involved in gaining an understanding of the tracking accuracy. A Trac-Stat SL1 accuracy monitor has been used in the determination of pointing accuracy and has been integrated into the outdoor CPV module test facility at the Photovoltaic Technology Laboratories in Nicosia, Cyprus. Results from this work are provided to demonstrate how important performance indicators may be presented, and how the reliability of results is improved through the deployment of such accuracy monitors. Finally, recommendations on the use of such sensors are provided as a means to improve the interpretation of real outdoor performance.

  3. Accuracy of GIPSY PPP from a denser network

    NASA Astrophysics Data System (ADS)

    Gokhan Hayal, Adem; Ugur Sanli, Dogan

    2015-04-01

    Researchers need to know about the accuracy of GPS for the planning of their field survey and hence to obtain reliable positions as well as deformation rates. Geophysical applications such as monitoring of development of a fault creep or of crustal motion for global sea level rise studies necessitate the use of continuous GPS whereas applications such as determining co-seismic displacements where permanent GPS sites are sparsely scattered require the employment of episodic campaigns. Recently, real time applications of GPS in relation to the early prediction of earthquakes and tsunamis are of concern. Studying the static positioning accuracy of GPS has been of interest to researchers for more than a decade now. Various software packages and modeling strategies have been tested so far. Relative positioning accuracy was compared with PPP accuracy. For relative positioning, observing session duration and network geometry of reference stations appear to be the dominant factors on GPS accuracy whereas observing session duration seems to be the only factor influencing the PPP accuracy. We believe that latest developments concerning the accuracy of static GPS from well-established software will form a basis for the quality of GPS field works mentioned above especially for real time applications which are referred to more frequently nowadays. To assess the GPS accuracy, conventionally some 10 to 30 regionally or globally scattered networks of GPS stations are used. In this study, we enlarge the size of the GPS network up to 70 globally scattered IGS stations to observe the changes on our previous accuracy modeling which employed only 13 stations. We use the latest version 6.3 of GIPSY/OASIS II software and download the data from SOPAC archives. Noting the effect of the ionosphere on our previous accuracy modeling, here we selected GPS days on which the k-index values were lower than 4. This enabled us to extend the interval of observing session duration used for the

  4. Cost and accuracy of advanced breeding trial designs in apple

    PubMed Central

    Harshman, Julia M; Evans, Kate M; Hardner, Craig M

    2016-01-01

    Trialing advanced candidates in tree fruit crops is expensive due to the long-term nature of the planting and labor-intensive evaluations required to make selection decisions. How closely the trait evaluations approximate the true trait value needs balancing with the cost of the program. Field trial designs for advanced apple candidates with reduced numbers of locations, years, and harvests per year were modeled to investigate the effect on cost and accuracy in an operational breeding program. The aim was to find designs that would allow evaluation of the most additional candidates while sacrificing the least accuracy. Critical percentage difference, response to selection, and correlated response were used to examine changes in accuracy of trait evaluations. For the quality traits evaluated, accuracy and response to selection were not substantially reduced for most trial designs. Risk management influences the decision to change trial design, and some designs had greater risk associated with them. Balancing cost and accuracy with risk yields valuable insight into advanced breeding trial design. The methods outlined in this analysis would be well suited to other horticultural crop breeding programs. PMID:27019717

  5. Accuracy potential of large-format still-video cameras

    NASA Astrophysics Data System (ADS)

    Maas, Hans-Gerd; Niederoest, Markus

    1997-07-01

    High resolution digital stillvideo cameras have found wide interest in digital close range photogrammetry in the last five years. They can be considered fully autonomous digital image acquisition systems without the requirement of permanent connection to an external power supply and a host computer for camera control and data storage, thus allowing for convenient data acquisition in many applications of digital photogrammetry. The accuracy potential of stillvideo cameras has been extensively discussed. While large format CCD sensors themselves can be considered very accurate measurement devices, the lenses, camera bodies and sensor mounts of stillvideo cameras are not; some cameras also apply compression techniques in image storage, which may further affect the accuracy potential. This presentation shows recent experiences from accuracy tests with a number of large format stillvideo cameras, including a modified Kodak DCS200, a Kodak DCS460, a Nikon E2 and a Polaroid PDC-2000. The tests of the cameras include absolute and relative measurements and were performed using strong photogrammetric networks and good external reference. The results of the tests indicate that very high accuracies can be achieved with large blocks of stillvideo imagery, especially in deformation measurements. In absolute measurements, however, the accuracy potential of the large format CCD sensors is partly ruined by a lack of stability of the cameras.

  6. High accuracy optical rate sensor

    NASA Technical Reports Server (NTRS)

    Uhde-Lacovara, J.

    1990-01-01

    Optical rate sensors, in particular CCD arrays, will be used on Space Station Freedom to track stars in order to provide inertial attitude reference. An algorithm to provide attitude rate information by directly manipulating the sensor pixel intensity output is presented. The star image produced by a sensor in the laboratory is modeled. Simulated, moving star images are generated, and the algorithm is applied to this data for a star moving at a constant rate. The algorithm produces an accurate derived rate for this data. A step rate change requires two frames for the output of the algorithm to accurately reflect the new rate. When zero mean Gaussian noise with a standard deviation of 5 is added to the simulated data of a star image moving at a constant rate, the algorithm derives the rate with an error of 1.9 percent at a rate of 1.28 pixels per frame.
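
    The algorithm itself is not detailed in the record; the following is a minimal sketch of the underlying idea of deriving a rate directly from pixel intensities, here by differencing intensity-weighted centroids between consecutive simulated frames. It is not the specific algorithm used in the paper:

        import numpy as np

        def centroid(frame):
            """Intensity-weighted centroid (row, col) of a star image frame."""
            total = frame.sum()
            rows, cols = np.indices(frame.shape)
            return np.array([(rows * frame).sum(), (cols * frame).sum()]) / total

        def rate_from_frames(frame_a, frame_b, dt=1.0):
            """Apparent star rate in pixels per frame interval dt."""
            return (centroid(frame_b) - centroid(frame_a)) / dt

        def star(center, shape=(64, 64), sigma=2.0):
            """Synthetic Gaussian star image centered at (row, col)."""
            r, c = np.indices(shape)
            return np.exp(-((r - center[0])**2 + (c - center[1])**2) / (2 * sigma**2))

        # Example: star moving 1.28 pixels per frame along the column axis.
        print(rate_from_frames(star((32.0, 20.0)), star((32.0, 21.28))))  # ~[0.0, 1.28]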

  7. Accuracy of the vivofit activity tracker.

    PubMed

    Alsubheen, Sana'a A; George, Amanda M; Baker, Alicia; Rohr, Linda E; Basset, Fabien A

    2016-08-01

    The purpose of this study was to examine the accuracy of the vivofit activity tracker in assessing energy expenditure (EE) and step count. Thirteen participants wore the vivofit activity tracker for five days. Participants were required to independently perform 1 h of self-selected activity each day of the study. On day four, participants came to the lab to undergo basal metabolic rate (BMR) measurement and a treadmill-walking task (TWT). On day five, participants completed 1 h of office-type activities. BMR values estimated by the vivofit were not significantly different from the values measured through indirect calorimetry (IC). The vivofit significantly underestimated EE for treadmill walking, but responded to the differences in the inclination. Vivofit underestimated step count for level walking but provided an accurate estimate for incline walking. There was a strong correlation between EE and the exercise intensity. The vivofit activity tracker is on par with similar devices and can be used to track physical activity. PMID:27266422

  8. Accuracy testing of steel and electric groundwater-level measuring tapes: Test method and in-service tape accuracy

    USGS Publications Warehouse

    Fulford, Janice M.; Clayton, Christopher S.

    2015-01-01

    The calibration device and proposed method were used to calibrate a sample of in-service USGS steel and electric groundwater tapes. The sample of in-service groundwater steel tapes was in relatively good condition. All steel tapes, except one, were accurate to ±0.01 ft per 100 ft over their entire length. One steel tape, which had obvious damage in the first hundred feet, was marginally outside the accuracy of ±0.01 ft per 100 ft by 0.001 ft. The sample of in-service groundwater-level electric tapes was in a range of conditions, from like new to cosmetically damaged to nonfunctional. The in-service electric tapes did not meet the USGS accuracy recommendation of ±0.01 ft. In-service electric tapes, except for the nonfunctional tape, were accurate to about ±0.03 ft per 100 ft. A comparison of new with in-service electric tapes found that steel-core electric tapes maintained their length and accuracy better than electric tapes without a steel core. The in-service steel tapes could be used as is and achieve USGS accuracy recommendations for groundwater-level measurements. The in-service electric tapes require tape corrections to achieve USGS accuracy recommendations for groundwater-level measurement.
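
    The form of such a tape correction is a simple linear scaling; a minimal sketch, with an assumed calibration factor rather than values from the actual USGS correction procedure:

        def corrected_depth(raw_reading_ft, error_ft_per_100ft):
            """Apply a linear length correction derived from calibration.
            error_ft_per_100ft is the tape's measured error, e.g. +0.03 means the
            tape reads 0.03 ft long for every 100 ft of true length."""
            return raw_reading_ft * (1.0 - error_ft_per_100ft / 100.0)

        # Example: an electric tape found to read 0.03 ft long per 100 ft.
        print(corrected_depth(150.00, 0.03))   # ~149.955 ft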

  9. Total solar irradiance record accuracy and recent improvements

    NASA Astrophysics Data System (ADS)

    Kopp, Greg

    /TIMs are intended to achieve levels of absolute accuracy that should reduce the TSI record's reliance on measurement continuity. I will discuss the climate-derived requirements for the levels of absolute accuracy and instrument stability needed for TSI measurements and describe current work that is underway to achieve these measurement requirements.

  10. Accuracy analysis of distributed simulation systems

    NASA Astrophysics Data System (ADS)

    Lin, Qi; Guo, Jing

    2010-08-01

    Existing simulation work tends to emphasize procedural verification, which puts too much focus on the simulation models instead of the simulation itself. As a result, research on improving simulation accuracy is often limited to individual aspects. As accuracy is the key in simulation credibility assessment and fidelity study, it is important to give an all-round discussion of the accuracy of distributed simulation systems themselves. First, the major elements of distributed simulation systems are summarized, which can be used as the specific basis of definition, classification and description of accuracy of distributed simulation systems. In Part 2, the framework of accuracy of distributed simulation systems is presented in a comprehensive way, which makes it more sensible to analyze and assess the uncertainty of distributed simulation systems. The concept of accuracy of distributed simulation systems is divided into four factors, each analyzed further in Part 3. In Part 4, based on the formalized description of the accuracy analysis framework, a practical approach is put forward which can be applied to study unexpected or inaccurate simulation results. Following this, a real distributed simulation system based on HLA is taken as an example to verify the usefulness of the approach proposed. The results show that the method works well and is applicable in accuracy analysis of distributed simulation systems.

  11. Accuracy of Parent Identification of Stuttering Occurrence

    ERIC Educational Resources Information Center

    Einarsdottir, Johanna; Ingham, Roger

    2009-01-01

    Background: Clinicians rely on parents to provide information regarding the onset and development of stuttering in their own children. The accuracy and reliability of their judgments of stuttering is therefore important and is not well researched. Aim: To investigate the accuracy of parent judgements of stuttering in their own children's speech…

  12. Stereotype Accuracy: Toward Appreciating Group Differences.

    ERIC Educational Resources Information Center

    Lee, Yueh-Ting, Ed.; And Others

    The preponderance of scholarly theory and research on stereotypes assumes that they are bad and inaccurate, but understanding stereotype accuracy and inaccuracy is more interesting and complicated than simpleminded accusations of racism or sexism would seem to imply. The selections in this collection explore issues of the accuracy of stereotypes…

  13. Accuracy assessment of GPS satellite orbits

    NASA Technical Reports Server (NTRS)

    Schutz, B. E.; Tapley, B. D.; Abusali, P. A. M.; Ho, C. S.

    1991-01-01

    GPS orbit accuracy is examined using several evaluation procedures. The existence is shown of unmodeled effects which correlate with the eclipsing of the sun. The ability to obtain geodetic results that show an accuracy of 1-2 parts in 10 to the 8th or better has not diminished.

  14. The Accuracy of Gender Stereotypes Regarding Occupations.

    ERIC Educational Resources Information Center

    Beyer, Sylvia; Finnegan, Andrea

    Given the salience of biological sex, it is not surprising that gender stereotypes are pervasive. To explore the prevalence of such stereotypes, the accuracy of gender stereotyping regarding occupations is presented in this paper. The paper opens with an overview of gender stereotype measures that use self-perceptions as benchmarks of accuracy,…

  15. Individual Differences in Eyewitness Recall Accuracy.

    ERIC Educational Resources Information Center

    Berger, James D.; Herringer, Lawrence G.

    1991-01-01

    Presents study results comparing college students' self-evaluation of recall accuracy to actual recall of detail after viewing a crime scenario. Reports that self-reported ability to remember detail correlates with accuracy in memory of specifics. Concludes that people may have a good indication early in the eyewitness situation of whether they…

  16. Scientific Sources' Perception of Network News Accuracy.

    ERIC Educational Resources Information Center

    Moore, Barbara; Singletary, Michael

    Recent polls seem to indicate that many Americans rely on television as a credible and primary source of news. To test the accuracy of this news, a study examined three networks' newscasts of science news, the attitudes of the science sources toward reporting in their field, and the factors related to accuracy. The Vanderbilt News Archives Index…

  17. Accuracy of Carbohydrate Counting in Adults.

    PubMed

    Meade, Lisa T; Rushton, Wanda E

    2016-07-01

    In Brief This study investigates carbohydrate counting accuracy in patients using insulin through a multiple daily injection regimen or continuous subcutaneous insulin infusion. The average accuracy test score for all patients was 59%. The carbohydrate test in this study can be used to emphasize the importance of carbohydrate counting to patients and to provide ongoing education. PMID:27621531

  18. Theoferometer for High Accuracy Optical Alignment and Metrology

    NASA Technical Reports Server (NTRS)

    Toland, Ronald; Leviton, Doug; Koterba, Seth

    2004-01-01

    The accurate measurement of the orientation of optical parts and systems is a pressing problem for upcoming space missions, such as stellar interferometers, requiring the knowledge and maintenance of positions to the sub-arcsecond level. Theodolites, the devices commonly used to make these measurements, cannot provide the needed level of accuracy. This paper describes the design, construction, and testing of an interferometer system to fill the widening gap between future requirements and current capabilities. A Twyman-Green interferometer mounted on a 2 degree of freedom rotation stage is able to obtain sub-arcsecond, gravity-referenced tilt measurements of a sample alignment cube. Dubbed a 'theoferometer,' this device offers greater ease-of-use, accuracy, and repeatability than conventional methods, making it a suitable 21st-century replacement for the theodolite.

  19. Optimizing the geometrical accuracy of curvilinear meshes

    NASA Astrophysics Data System (ADS)

    Toulorge, Thomas; Lambrechts, Jonathan; Remacle, Jean-François

    2016-04-01

    This paper presents a method to generate valid high order meshes with optimized geometrical accuracy. The high order meshing procedure starts with a linear mesh, that is subsequently curved without taking care of the validity of the high order elements. An optimization procedure is then used to both untangle invalid elements and optimize the geometrical accuracy of the mesh. Standard measures of the distance between curves are considered to evaluate the geometrical accuracy in planar two-dimensional meshes, but they prove computationally too costly for optimization purposes. A fast estimate of the geometrical accuracy, based on Taylor expansions of the curves, is introduced. An unconstrained optimization procedure based on this estimate is shown to yield significant improvements in the geometrical accuracy of high order meshes, as measured by the standard Hausdorff distance between the geometrical model and the mesh. Several examples illustrate the beneficial impact of this method on CFD solutions, with a particular role of the enhanced mesh boundary smoothness.
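
    The Hausdorff distance used as the accuracy measure above can be computed for discretely sampled curves as shown below. This is the standard discrete point-set definition, not the paper's fast Taylor-expansion estimate:

        import numpy as np

        def hausdorff(curve_a, curve_b):
            """Symmetric Hausdorff distance between two point-sampled curves (N, 2)."""
            d = np.linalg.norm(curve_a[:, None, :] - curve_b[None, :, :], axis=-1)
            return max(d.min(axis=1).max(), d.min(axis=0).max())

        # Example: a circle vs. a coarse polygonal (low-order boundary) approximation.
        t_fine = np.linspace(0, 2 * np.pi, 1000, endpoint=False)
        t_coarse = np.linspace(0, 2 * np.pi, 12, endpoint=False)
        circle = np.c_[np.cos(t_fine), np.sin(t_fine)]
        polygon = np.c_[np.cos(t_coarse), np.sin(t_coarse)]
        print(hausdorff(circle, polygon))   # geometric error of the coarse boundary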

  20. 10 CFR 61.9a - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Completeness and accuracy of information. 61.9a Section 61.9a Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR LAND DISPOSAL OF RADIOACTIVE WASTE General Provisions § 61.9a Completeness and accuracy of information. (a)...

  1. 10 CFR 61.9a - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 10 Energy 2 2012-01-01 2012-01-01 false Completeness and accuracy of information. 61.9a Section 61.9a Energy NUCLEAR REGULATORY COMMISSION (CONTINUED) LICENSING REQUIREMENTS FOR LAND DISPOSAL OF RADIOACTIVE WASTE General Provisions § 61.9a Completeness and accuracy of information. (a)...

  2. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... procedure: (i) Span the full analyzer range using a top range calibration gas meeting the calibration gas... applicable requirements of §§ 92.118 through 92.122. (iii) Select a calibration gas (a span gas may be used... increments. This gas must be “named” to an accuracy of ±1.0 percent (±2.0 percent for CO2 span gas) of...

  3. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... procedure: (i) Span the full analyzer range using a top range calibration gas meeting the calibration gas... applicable requirements of §§ 92.118 through 92.122. (iii) Select a calibration gas (a span gas may be used... increments. This gas must be “named” to an accuracy of ±1.0 percent (±2.0 percent for CO2 span gas) of...

  4. 40 CFR 92.127 - Emission measurement accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... procedure: (i) Span the full analyzer range using a top range calibration gas meeting the calibration gas... applicable requirements of §§ 92.118 through 92.122. (iii) Select a calibration gas (a span gas may be used... increments. This gas must be “named” to an accuracy of ±1.0 percent (±2.0 percent for CO2 span gas) of...

  5. High accuracy ground target location using loitering munitions platforms

    NASA Astrophysics Data System (ADS)

    Wang, Zhifei; Wang, Hua; Han, Jing

    2011-08-01

    Precise ground target localization is an interesting problem and relevant not only for military but also for civilian applications, and this is expected to be an emerging field with many potential applications. Ground target location using Loitering Munitions (LM) requires estimation of aircraft position and attitude to a high degree of accuracy, and data derived by processing sensor images might be useful for supplementing other navigation sensor information and increasing the reliability and accuracy of navigation estimates during this flight phase. This paper presents a method for high accuracy ground target localization using Loitering Munitions (LM) equipped with a video camera sensor. The proposed method is based on a satellite or aerial image matching technique. In order to acquire the target position on the ground intelligently and rapidly and to improve the localization accuracy by estimating the target position jointly with the systematic LM and camera attitude measurement errors, several techniques have been proposed. Firstly, ground target geo-location based on ray tracing was used for comparison against our approach. With the proposed methods, the calculation from pixel to world coordinates can be done. Then the Hough transform was used for image alignment and a median filter was applied for removing small details which are visible in the sensed image but not in the reference image. Finally, a novel edge detection method and an image matching algorithm based on bifurcation extraction were proposed. This method does not require accurate knowledge of the aircraft position and attitude or high-performance sensors; it is therefore especially suitable for LM, which do not have the capability to carry accurate sensors due to their limited payload weight and power resources. The results of simulation experiments and theoretical analysis demonstrate that high-accuracy ground target localization is achieved with low-performance sensors in a timely manner. The method is used in
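
    The paper's full matching pipeline is not reproduced here; the sketch below illustrates only the basic pixel-to-world step mentioned above, intersecting a pinhole-camera viewing ray with a flat ground plane given an assumed camera pose and intrinsics (all numeric values are placeholders):

        import numpy as np

        def pixel_to_ground(u, v, K, R, cam_pos):
            """Intersect the viewing ray through pixel (u, v) with the ground plane z = 0.
            K: 3x3 intrinsics, R: camera-to-world rotation, cam_pos: camera position."""
            ray_cam = np.linalg.inv(K) @ np.array([u, v, 1.0])   # direction in camera frame
            ray_world = R @ ray_cam
            t = -cam_pos[2] / ray_world[2]                       # scale to reach z = 0
            return cam_pos + t * ray_world

        # Placeholder values: nadir-looking camera 500 m above the ground.
        K = np.array([[1000.0, 0.0, 320.0],
                      [0.0, 1000.0, 240.0],
                      [0.0, 0.0, 1.0]])
        R = np.array([[1.0, 0.0, 0.0],       # camera z-axis pointing straight down
                      [0.0, -1.0, 0.0],
                      [0.0, 0.0, -1.0]])
        print(pixel_to_ground(320.0, 240.0, K, R, np.array([0.0, 0.0, 500.0])))  # [0, 0, 0]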

  6. The Social Accuracy Model of Interpersonal Perception: Assessing Individual Differences in Perceptive and Expressive Accuracy

    ERIC Educational Resources Information Center

    Biesanz, Jeremy C.

    2010-01-01

    The social accuracy model of interpersonal perception (SAM) is a componential model that estimates perceiver and target effects of different components of accuracy across traits simultaneously. For instance, Jane may be generally accurate in her perceptions of others and thus high in "perceptive accuracy"--the extent to which a particular…

  7. Systematic review of discharge coding accuracy

    PubMed Central

    Burns, E.M.; Rigby, E.; Mamidanna, R.; Bottle, A.; Aylin, P.; Ziprin, P.; Faiz, O.D.

    2012-01-01

    Introduction Routinely collected data sets are increasingly used for research, financial reimbursement and health service planning. High quality data are necessary for reliable analysis. This study aims to assess the published accuracy of routinely collected data sets in Great Britain. Methods Systematic searches of the EMBASE, PUBMED, OVID and Cochrane databases were performed from 1989 to present using defined search terms. Included studies were those that compared routinely collected data sets with case or operative note review and those that compared routinely collected data with clinical registries. Results Thirty-two studies were included. Twenty-five studies compared routinely collected data with case or operation notes. Seven studies compared routinely collected data with clinical registries. The overall median accuracy (routinely collected data sets versus case notes) was 83.2% (IQR: 67.3–92.1%). The median diagnostic accuracy was 80.3% (IQR: 63.3–94.1%) with a median procedure accuracy of 84.2% (IQR: 68.7–88.7%). There was considerable variation in accuracy rates between studies (50.5–97.8%). Since the 2002 introduction of Payment by Results, accuracy has improved in some respects, for example primary diagnoses accuracy has improved from 73.8% (IQR: 59.3–92.1%) to 96.0% (IQR: 89.3–96.3), P= 0.020. Conclusion Accuracy rates are improving. Current levels of reported accuracy suggest that routinely collected data are sufficiently robust to support their use for research and managerial decision-making. PMID:21795302

  8. Geometric accuracy in airborne SAR images

    NASA Technical Reports Server (NTRS)

    Blacknell, D.; Quegan, S.; Ward, I. A.; Freeman, A.; Finley, I. P.

    1989-01-01

    Uncorrected across-track motions of a synthetic aperture radar (SAR) platform can cause both a severe loss of azimuthal positioning accuracy in, and defocusing of, the resultant SAR image. It is shown how the results of an autofocus procedure can be incorporated in the azimuth processing to produce a fully focused image that is geometrically accurate in azimuth. Range positioning accuracy is also discussed, leading to a comprehensive treatment of all aspects of geometric accuracy. The system considered is an X-band SAR.

  9. High accuracy calibration of the fiber spectroradiometer

    NASA Astrophysics Data System (ADS)

    Wu, Zhifeng; Dai, Caihong; Wang, Yanfei; Chen, Binhua

    2014-11-01

    Compared to large scanning spectroradiometers, the compact and convenient fiber spectroradiometer is widely used in various fields, such as remote sensing, aerospace monitoring, and solar irradiance measurement. High accuracy calibration should be performed before use, covering the wavelength accuracy, the background environment noise, the nonlinear effect, the bandwidth, the stray light, and other factors. The wavelength lamp and tungsten lamp are frequently used to calibrate the fiber spectroradiometer. The wavelength difference can be easily reduced through the software or calculation. However, the nonlinear effect and the bandwidth can significantly affect the measurement accuracy.

  10. Accuracy and consistency of modern elastomeric pumps.

    PubMed

    Weisman, Robyn S; Missair, Andres; Pham, Phung; Gutierrez, Juan F; Gebhard, Ralf E

    2014-01-01

    Continuous peripheral nerve blockade has become a popular method of achieving postoperative analgesia for many surgical procedures. The safety and reliability of infusion pumps are dependent on their flow rate accuracy and consistency. Knowledge of pump rate profiles can help physicians determine which infusion pump is best suited for their clinical applications and specific patient population. Several studies have investigated the accuracy of portable infusion pumps. Using methodology similar to that used by Ilfeld et al, we investigated the accuracy and consistency of several current elastomeric pumps. PMID:25140510

  11. Discrimination in measures of knowledge monitoring accuracy

    PubMed Central

    Was, Christopher A.

    2014-01-01

    Knowledge monitoring predicts academic outcomes in many contexts. However, measures of knowledge monitoring accuracy are often incomplete. In the current study, a measure of students’ ability to discriminate known from unknown information as a component of knowledge monitoring was considered. Undergraduate students’ knowledge monitoring accuracy was assessed and used to predict final exam scores in a specific course. It was found that gamma, a measure commonly used as the measure of knowledge monitoring accuracy, accounted for a small, but significant amount of variance in academic performance whereas the discrimination and bias indexes combined to account for a greater amount of variance in academic performance. PMID:25339979
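
    For readers unfamiliar with these indexes, the sketch below computes Goodman-Kruskal gamma and a simple discrimination index from a 2x2 knowledge-monitoring table (judgment of knowing x actual test outcome). The discrimination index here is taken as hit rate minus false-alarm rate, which is one common formulation and may differ from the exact index used in the study; the counts are assumed, illustrative values:

        def gamma_2x2(a, b, c, d):
            """Goodman-Kruskal gamma for a 2x2 table:
            a = judged known & correct,   b = judged known & incorrect,
            c = judged unknown & correct, d = judged unknown & incorrect."""
            concordant, discordant = a * d, b * c
            return (concordant - discordant) / (concordant + discordant)

        def discrimination_index(a, b, c, d):
            """Hit rate minus false-alarm rate (one common discrimination measure)."""
            hit_rate = a / (a + c)          # P(judged known | correct)
            false_alarm = b / (b + d)       # P(judged known | incorrect)
            return hit_rate - false_alarm

        # Example with assumed counts.
        print(gamma_2x2(40, 10, 15, 35))             # ~0.81
        print(discrimination_index(40, 10, 15, 35))  # ~0.51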

  12. Accuracy Evaluation of Tracking Equipment Based on Star- Station- Difference Technique

    NASA Astrophysics Data System (ADS)

    Zhao, LiJian; Huang, XiaoJuan; Pan, Liang; Xu, RuXiang

    2016-02-01

    An approach based on the star-station-difference technique is proposed in this paper for the accuracy evaluation of shipboard tracking and control equipment. The proposed method has the advantages of simple equipment, convenient measurement, and low requirements on environmental conditions.

  13. Key technologies for high-accuracy large mesh antenna reflectors

    NASA Astrophysics Data System (ADS)

    Meguro, Akira; Harada, Satoshi; Watanabe, Mitsunobu

    2003-12-01

    Nippon Telegraph and Telephone Corporation (NTT) continues to develop the modular mesh-type deployable antenna. Antenna diameter can be changed from 5 m to about 20 m by changing the number of modules used, with surface accuracy better than 2.4 mm RMS (including all error factors) and sufficient deployment reliability. Key technologies are the antenna's structural design, the deployment mechanism, the design tool, the analysis tool, and modularized testing/evaluation methods. This paper describes our beam steering mechanism. Tests show that it yields a beam pointing accuracy of better than 0.1°. Based on the S-band modular mesh antenna reflector, the surface accuracy degradation factors that must be considered in designing the new antenna are partially identified. The influence of modular connection errors on surface accuracy is quantitatively estimated. Our analysis tool SPADE is extended to include the addition of joint gaps. The addition of gaps allows non-linear vibration characteristics due to gapping in deployment hinges to be calculated. We intend to design a new type of mesh antenna reflector. Our new goal is an antenna for Ku or Ka band satellite communication. For this mission, the surface shape must be 5 times more accurate than is required for an S-band antenna.

  14. Spatial accuracy of a rapid defense behavior in caterpillars.

    PubMed

    van Griethuijsen, Linnea I; Banks, Kelly M; Trimmer, Barry A

    2013-02-01

    Aimed movements require that an animal accurately locates the target and correctly reaches that location. One such behavior is the defensive strike seen in Manduca sexta larva. These caterpillars respond to noxious mechanical stimuli applied to their abdomen with a strike of the mandibles towards the location of the stimulus. The accuracy with which the first strike movement reaches the stimulus site depends on the location of the stimulus. Responses to dorsal stimuli are less accurate than those to ventral stimuli and the mandibles generally land ventral to the stimulus site. Responses to stimuli applied to anterior abdominal segments are less accurate than responses to stimuli applied to more posterior segments and the mandibles generally land posterior to the stimulus site. A trade-off between duration of the strike and radial accuracy is only seen in the anterior stimulus location (body segment A4). The lower accuracy of the responses to anterior and dorsal stimuli can be explained by the morphology of the animal; to reach these areas the caterpillar needs to move its body into a tight curve. Nevertheless, the accuracy is not exact in locations that the animal has shown it can reach, which suggests that consistently aiming more ventral and posterior of the stimulation site might be a defense strategy. PMID:23325858

  15. [Systematic review of diagnostic tests accuracy: a narrative review].

    PubMed

    de Oliveira, Glória Maria; Camargo, Fábio Trinca; Gonçalves, Eduardo Costa; Duarte, Carlos Vinicius Nascimento; Guimarães, Carlos Alberto

    2010-04-01

    The aim of this study is to perform a narrative review of systematic reviews of diagnostic test accuracy. We undertook a search using The Cochrane Methodology Reviews (Cochrane Reviews of Diagnostic Test Accuracy), Medline and LILACS up to October 2009. Reference lists of included studies were also hand searched. The following search strategy was constructed by using a combination of subject headings and text words: 1. Cochrane Methodology Reviews: accuracy study "Methodology" 2. In Pubmed "Meta-Analysis" [Publication Type] AND "Evidence-Based Medicine" [Mesh]) AND "Sensitivity and Specificity" [Mesh] 3. LILACS (revisao sistematica) or "literatura de REVISAO como assunto" [Descritor de assunto] and (sistematica) or "SISTEMATICA" [Descritor de assunto] and (acuracia) or "SENSIBILIDADE e especificidade" [Descritor de assunto]. In summary, the methodological planning and preparation of systematic reviews of therapeutic interventions are prior to those used in systematic reviews of diagnostic test accuracy. There are more sources of heterogeneity in the design of diagnostic test studies, which impairs the synthesis (meta-analysis) of the results. To work around this problem, there are currently uniform requirements for diagnostic test manuscripts submitted to leading biomedical journals. PMID:20549106

  16. Objective sampling with EAGLE to improve acoustic prediction accuracy

    NASA Astrophysics Data System (ADS)

    Rike, Erik R.; Delbalzo, Donald R.

    2003-10-01

    Some Navy operations require extensive acoustic calculations. The standard computational approach is to calculate on a regular grid of points and radials. In complex environmental areas, this implies a dense grid and many radials (i.e., long run times) to achieve acceptable accuracy and detail. However, Navy tactical decision aid calculations must be timely and exhibit adequate accuracy or the results may be too old or too imprecise to be valuable. This dilemma led to a new concept, OGRES (Objective Grid/Radials using Environmentally-sensitive Selection), which produces irregular acoustic grids [Rike and DelBalzo, Proc. IEEE Oceans (2002)]. Its premise is that physical environmental complexity controls the need for dense sampling in space and azimuth, and that transmission loss already computed for nearby coordinates on previous iterations can be used to predict that complexity. Recent work in this area to further increase accuracy and efficiency by using better metrics and interpolation routines has led to the Efficient Acoustic Gridder for Littoral Environments (EAGLE). On each iteration, EAGLE produces an acoustic field for the entire area of interest with ever-increasing resolution and accuracy. An example is presented where approximately an order of magnitude efficiency improvement (over regular grids) is demonstrated. [Work sponsored by ONR.]
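
    The OGRES/EAGLE algorithms are not described above in enough detail to reproduce; the sketch below only illustrates the general idea of environmentally driven refinement, subdividing grid intervals where already-computed transmission-loss values at neighboring points differ by more than a tolerance. The function names, tolerance, and toy loss model are assumptions, not the actual EAGLE metrics:

        import math

        def refine(points, tl_values, compute_tl, tol_db=3.0):
            """One refinement pass over a 1-D range grid: insert a midpoint wherever
            the transmission loss (TL) changes by more than tol_db between neighbors."""
            new_points, new_tl = [points[0]], [tl_values[0]]
            for (r0, tl0), (r1, tl1) in zip(zip(points, tl_values),
                                            zip(points[1:], tl_values[1:])):
                if abs(tl1 - tl0) > tol_db:
                    mid = 0.5 * (r0 + r1)
                    new_points.append(mid)
                    new_tl.append(compute_tl(mid))   # extra model run only where needed
                new_points.append(r1)
                new_tl.append(tl1)
            return new_points, new_tl

        # Toy TL model: spherical spreading loss, 20*log10(r), r in metres.
        toy_tl = lambda r: 20.0 * math.log10(r)
        ranges = [100.0, 1000.0, 2000.0, 3000.0]
        print(refine(ranges, [toy_tl(r) for r in ranges], toy_tl))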

  17. Performance and accuracy benchmarks for a next generation geodynamo simulation

    NASA Astrophysics Data System (ADS)

    Matsui, H.

    2015-12-01

    A number of numerical dynamo models have successfully represented basic characteristics of the geomagnetic field in the last twenty years. However, parameters in the current dynamo models are far from realistic for the Earth's core. To approach realistic parameters for the Earth's core in geodynamo simulations, extremely large spatial resolutions are required to resolve convective turbulence and small-scale magnetic fields. To assess next generation dynamo models on a massively parallel computer, we performed performance and accuracy benchmarks on 15 dynamo codes which employ a diverse range of discretization (spectral, finite difference, finite element, and hybrid methods) and parallelization methods. In the performance benchmark, we compare elapsed time and parallelization capability on the TACC Stampede platform, using up to 16384 processor cores. In the accuracy benchmark, we compare the resolutions required to obtain less than 1% error from the suggested solutions. The results of the performance benchmark show that codes using 2-D or 3-D parallelization models have the capability to run on 16384 processor cores. The elapsed time for Calypso and Rayleigh, two parallelized codes that use the spectral method, scales with a smaller exponent than the ideal scaling. The elapsed time of SFEMaNS, which uses finite element and Fourier transform, has the smallest growth of the elapsed time with the resolution and parallelization. However, the accuracy benchmark results show that SFEMaNS requires three times more degrees of freedom in each direction compared with a spherical harmonic expansion. Consequently, SFEMaNS needs more than 200 times the elapsed time of Calypso and Rayleigh with 10000 cores to obtain the same accuracy. These benchmark results indicate that the spectral method with 2-D or 3-D domain decomposition is the most promising methodology for advancing numerical dynamo simulations in the immediate future.

  18. Accuracy analysis of automatic distortion correction

    NASA Astrophysics Data System (ADS)

    Kolecki, Jakub; Rzonca, Antoni

    2015-06-01

    The paper addresses the problem of automatic distortion removal from images acquired with a non-metric SLR camera equipped with prime lenses. From the photogrammetric point of view the following question arises: is the accuracy of the distortion control data provided by the manufacturer for a certain lens model (not an individual item) sufficient to achieve the demanded accuracy? To obtain a reliable answer to this question, two kinds of tests were carried out for three lens models. First, multi-variant camera calibration was conducted using software that provides a full accuracy analysis. Second, an accuracy analysis using check points was performed. The check points were measured in images resampled based on the estimated distortion model, or in distortion-free images acquired directly in the automatic distortion removal mode. Extensive conclusions regarding the practical application of each calibration approach are given. Finally, rules for applying automatic distortion removal in photogrammetric measurements are suggested.

  19. Empathic Embarrassment Accuracy in Autism Spectrum Disorder.

    PubMed

    Adler, Noga; Dvash, Jonathan; Shamay-Tsoory, Simone G

    2015-06-01

    Empathic accuracy refers to the ability of perceivers to accurately share the emotions of protagonists. Using a novel task assessing embarrassment, the current study sought to compare levels of empathic embarrassment accuracy among individuals with autism spectrum disorders (ASD) with those of matched controls. To assess empathic embarrassment accuracy, we compared the level of embarrassment experienced by protagonists to the embarrassment felt by participants while watching the protagonists. The results show that while the embarrassment ratings of participants and protagonists were highly matched among controls, individuals with ASD failed to exhibit this matching effect. Furthermore, individuals with ASD rated their embarrassment higher than controls when viewing themselves and protagonists on film, but not while performing the task itself. These findings suggest that individuals with ASD tend to have higher ratings of empathic embarrassment, perhaps due to difficulties in emotion regulation that may account for their impaired empathic accuracy and aberrant social behavior. PMID:25732043

  20. Coding accuracy on the psychophysical scale

    PubMed Central

    Kostal, Lubomir; Lansky, Petr

    2016-01-01

    Sensory neurons are often reported to adjust their coding accuracy to the stimulus statistics. The observed match is not always perfect and the maximal accuracy does not align with the most frequent stimuli. As an alternative to a physiological explanation we show that the match critically depends on the chosen stimulus measurement scale. More generally, we argue that if we measure the stimulus intensity on the scale which is proportional to the perception intensity, an improved adjustment in the coding accuracy is revealed. The unique feature of stimulus units based on the psychophysical scale is that the coding accuracy can be meaningfully compared for different stimuli intensities, unlike in the standard case of a metric scale. PMID:27021783
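
    One standard way to make "coding accuracy" concrete is Fisher information, and its behavior under a change of stimulus scale illustrates why the measurement scale matters. The identity below is a general change-of-variables fact stated from standard theory, not a formula quoted from the paper: if the stimulus s is re-expressed on a perceptual scale psi = g(s) (for example a Weber-Fechner logarithmic scale), the Fisher information about the transformed variable is

        J_\psi(\psi) = J_s(s) \left( \frac{ds}{d\psi} \right)^{2} = \frac{J_s(s)}{g'(s)^{2}}

    so an accuracy profile that looks mismatched to the stimulus statistics on a physical (metric) scale can appear well matched once intensities are expressed on a scale proportional to perceived intensity.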

  1. Measuring the Accuracy of Diagnostic Systems.

    ERIC Educational Resources Information Center

    Swets, John A.

    1988-01-01

    Discusses the relative operating characteristic analysis of signal detection theory as a measure of diagnostic accuracy. Reports representative values of this measure in several fields. Compares how problems in these fields are handled. (CW)

  2. Sun-pointing programs and their accuracy

    SciTech Connect

    Zimmerman, J.C.

    1981-05-01

    Several sun-pointing programs and their accuracy are described. FORTRAN program listings are given. Program descriptions are given for both Hewlett-Packard (HP-67) and Texas Instruments (TI-59) hand-held calculators.

  3. Nonverbal self-accuracy in interpersonal interaction.

    PubMed

    Hall, Judith A; Murphy, Nora A; Mast, Marianne Schmid

    2007-12-01

    Four studies measure participants' accuracy in remembering, without forewarning, their own nonverbal behavior after an interpersonal interaction. Self-accuracy for smiling, nodding, gazing, hand gesturing, and self-touching is scored by comparing the participants' recollections with coding based on videotape. Self-accuracy is above chance and of modest magnitude on average. Self-accuracy is greatest for smiling; intermediate for nodding, gazing, and gesturing; and lowest for self-touching. It is higher when participants focus attention away from the self (learning as much as possible about the partner, rearranging the furniture in the room, evaluating the partner, smiling and gazing at the partner) than when participants are more self-focused (getting acquainted, trying to make a good impression on the partner, being evaluated by the partner, engaging in more self-touching). The contributions of cognitive demand and affective state are discussed. PMID:18000102

  4. Accuracy potentials for large space antenna structures

    NASA Technical Reports Server (NTRS)

    Hedgepeth, J. M.

    1980-01-01

    The relationships among materials selection, truss design, and manufacturing techniques in the interest of surface accuracies for large space antennas are discussed. Among the antenna configurations considered are: tetrahedral truss, pretensioned truss, and geodesic dome and radial rib structures. Comparisons are made of the accuracy achievable by truss and dome structure types for a wide variety of diameters, focal lengths, and wavelength of radiated signal, taking into account such deforming influences as solar heating-caused thermal transients and thermal gradients.

  5. Accuracy, resolution, and cost comparisons between small format and mapping cameras for environmental mapping

    NASA Technical Reports Server (NTRS)

    Clegg, R. H.; Scherz, J. P.

    1975-01-01

    Successful aerial photography depends on aerial cameras providing acceptable photographs within cost restrictions of the job. For topographic mapping where ultimate accuracy is required only large format mapping cameras will suffice. For mapping environmental patterns of vegetation, soils, or water pollution, 9-inch cameras often exceed accuracy and cost requirements, and small formats may be better. In choosing the best camera for environmental mapping, relative capabilities and costs must be understood. This study compares resolution, photo interpretation potential, metric accuracy, and cost of 9-inch, 70mm, and 35mm cameras for obtaining simultaneous color and color infrared photography for environmental mapping purposes.

  6. Analysis of deformable image registration accuracy using computational modeling.

    PubMed

    Zhong, Hualiang; Kim, Jinkoo; Chetty, Indrin J

    2010-03-01

    Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: One, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations have been quantitatively evaluated using numerical phantoms. The results show that parameter
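
    In the benchmarking approach described above, a finite-element-derived displacement field serves as ground truth and registration error is the magnitude of the vector difference between the recovered and reference displacement fields. Below is a minimal numpy sketch of that error metric (mean displacement error in mm, optionally restricted to a mask such as the lung); it illustrates the metric only and is not the authors' analysis code, and the component ordering and spacing values are assumptions.

        import numpy as np

        def mean_displacement_error(dvf_est, dvf_ref, spacing_mm=(1.0, 1.0, 1.0), mask=None):
            """Mean Euclidean error (mm) between an estimated and a reference
            displacement vector field, both shaped (nz, ny, nx, 3) in voxel units."""
            diff = (dvf_est - dvf_ref) * np.asarray(spacing_mm)   # voxel displacements -> mm
            err = np.linalg.norm(diff, axis=-1)                   # per-voxel error magnitude
            if mask is not None:
                err = err[mask]                                   # e.g. lung-only average
            return float(err.mean())

        # Toy usage with random fields standing in for Demons/B-spline output and FEM ground truth.
        rng = np.random.default_rng(0)
        ref = rng.normal(size=(8, 8, 8, 3))
        est = ref + rng.normal(scale=0.5, size=ref.shape)
        print(f"mean error = {mean_displacement_error(est, ref):.2f} mm")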

  8. Increasing Accuracy in Computed Inviscid Boundary Conditions

    NASA Technical Reports Server (NTRS)

    Dyson, Roger

    2004-01-01

    A technique has been devised to increase the accuracy of computational simulations of flows of inviscid fluids by increasing the accuracy with which surface boundary conditions are represented. This technique is expected to be especially beneficial for computational aeroacoustics, wherein it enables proper accounting, not only for acoustic waves, but also for vorticity and entropy waves, at surfaces. Heretofore, inviscid nonlinear surface boundary conditions have been limited to third-order accuracy in time for stationary surfaces and to first-order accuracy in time for moving surfaces. For steady-state calculations, it may be possible to achieve higher accuracy in space, but high accuracy in time is needed for efficient simulation of multiscale unsteady flow phenomena. The present technique is the first surface treatment that provides the needed high accuracy through proper accounting of higher-order time derivatives. The present technique is founded on a method known in the art as the Hermitian modified solution approximation (MESA) scheme. This is because high time accuracy at a surface depends upon, among other things, correction of the spatial cross-derivatives of flow variables, and many of these cross-derivatives are included explicitly on the computational grid in the MESA scheme. (Alternatively, a related method other than the MESA scheme could be used, as long as the method involves consistent application of the effects of the cross-derivatives.) While the mathematical derivation of the present technique is too lengthy and complex to fit within the space available for this article, the technique itself can be characterized in relatively simple terms: The technique involves correction of surface-normal spatial pressure derivatives at a boundary surface to satisfy the governing equations and the boundary conditions and thereby achieve arbitrarily high orders of time accuracy in special cases. The boundary conditions can now include a potentially infinite number

  9. USDA registration and rectification requirements

    NASA Technical Reports Server (NTRS)

    Allen, R.

    1982-01-01

    Some of the requirements of the United States Department of Agriculture for accuracy of aerospace acquired data, and specifically, requirements for registration and rectification of remotely sensed data are discussed. Particular attention is given to foreign and domestic crop estimation and forecasting, forestry information applications, and rangeland condition evaluations.

  10. Calculation and accuracy of ERBE scanner measurement locations

    NASA Technical Reports Server (NTRS)

    Hoffman, Lawrence H.; Weaver, William L.; Kibler, James F.

    1987-01-01

    The Earth Radiation Budget Experiment (ERBE) uses scanning radiometers to measure shortwave and longwave components of the Earth's radiation field at about 40 km resolution. It is essential that these measurements be accurately located at the top of the Earth's atmosphere so they can be properly interpreted by users of the data. Before the launch of the ERBE instrument sets, a substantial emphasis was placed on understanding all factors which influence the determination of measurement locations and properly modeling those factors in the data processing system. After the launch of ERBE instruments on the Earth Radiation Budget Satellite and NOAA 9 spacecraft in 1984, a coastline projection method was developed to assess the accuracy of the algorithms and data used in the location calculations. Using inflight scanner data and the coastline detection technique, the measurement location errors are found to be smaller than the resolution of the scanner instruments. This accuracy is well within the required location knowledge for useful science analysis.

  11. Accuracy of polyp localization at colonoscopy

    PubMed Central

    O’Connor, Sam A.; Hewett, David G.; Watson, Marcus O.; Kendall, Bradley J.; Hourigan, Luke F.; Holtmann, Gerald

    2016-01-01

    Background and study aims: Accurate documentation of lesion localization at the time of colonoscopic polypectomy is important for future surveillance, management of complications such as delayed bleeding, and for guiding surgical resection. We aimed to assess the accuracy of endoscopic localization of polyps during colonoscopy and examine variables that may influence this accuracy. Patients and methods: We conducted a prospective observational study in consecutive patients presenting for elective, outpatient colonoscopy. All procedures were performed by Australian certified colonoscopists. The endoscopic location of each polyp was reported by the colonoscopist at the time of resection and prospectively recorded. Magnetic endoscope imaging was used to determine polyp location, and colonoscopists were blinded to this image. Three experienced colonoscopists, blinded to the endoscopist’s assessment of polyp location, independently scored the magnetic endoscope images to obtain a reference standard for polyp location (Cronbach alpha 0.98). The accuracy of colonoscopist polyp localization using this reference standard was assessed, and colonoscopist, procedural and patient variables affecting accuracy were evaluated. Results: A total of 155 patients were enrolled and 282 polyps were resected in 95 patients by 14 colonoscopists. The overall accuracy of polyp localization was 85 % (95 % confidence interval, CI; 60 – 96 %). Accuracy varied significantly (P < 0.001) by colonic segment: caecum 100 %, ascending 77 % (CI;65 – 90), transverse 84 % (CI;75 – 92), descending 56 % (CI;32 – 81), sigmoid 88 % (CI;79 – 97), rectum 96 % (CI;90 – 101). There were significant differences in accuracy between colonoscopists (P < 0.001), and colonoscopist experience was a significant independent predictor of accuracy (OR 3.5, P = 0.028) after adjustment for patient and procedural variables. Conclusions: Accuracy of

  12. Towards Experimental Accuracy from the First Principles

    NASA Astrophysics Data System (ADS)

    Polyansky, O. L.; Lodi, L.; Tennyson, J.; Zobov, N. F.

    2013-06-01

    Producing ab initio ro-vibrational energy levels of small, gas-phase molecules with an accuracy of 0.10 cm^{-1} would constitute a significant step forward in theoretical spectroscopy and would place calculated line positions considerably closer to typical experimental accuracy. Such accuracy has recently been achieved for the H_3^+ molecular ion for line positions up to 17 000 cm^{-1}. However, since H_3^+ is a two-electron system, the electronic structure methods used in this study are not applicable to larger molecules. A major breakthrough was reported in ref., where an accuracy of 0.10 cm^{-1} was achieved ab initio for seven water isotopologues. Calculated vibrational and rotational energy levels up to 15 000 cm^{-1} and J=25 resulted in a standard deviation of 0.08 cm^{-1} with respect to accurate reference data. As far as line intensities are concerned, we have already achieved for water a typical accuracy of 1% which supersedes average experimental accuracy. Our results are being actively extended along two major directions. First, there are clear indications that our results for water can be improved to an accuracy of the order of 0.01 cm^{-1} by further, detailed ab initio studies. Such a level of accuracy would already be competitive with experimental results in some situations. A second, major, direction of study is the extension of such a 0.1 cm^{-1} accuracy to molecules containing more electrons or more than one non-hydrogen atom, or both. As examples of such developments we will present new results for CO, HCN and H_2S, as well as preliminary results for NH_3 and CH_4. O.L. Polyansky, A. Alijah, N.F. Zobov, I.I. Mizus, R. Ovsyannikov, J. Tennyson, L. Lodi, T. Szidarovszky and A.G. Csaszar, Phil. Trans. Royal Soc. London A, {370}, 5014-5027 (2012). O.L. Polyansky, R.I. Ovsyannikov, A.A. Kyuberis, L. Lodi, J. Tennyson and N.F. Zobov, J. Phys. Chem. A, (in press). L. Lodi, J. Tennyson and O.L. Polyansky, J. Chem. Phys. {135}, 034113 (2011).

  13. Accuracy metrics for judging time scale algorithms

    NASA Technical Reports Server (NTRS)

    Douglas, R. J.; Boulanger, J.-S.; Jacques, C.

    1994-01-01

    Time scales have been constructed in different ways to meet the many demands placed upon them for time accuracy, frequency accuracy, long-term stability, and robustness. Usually, no single time scale is optimum for all purposes. In the context of the impending availability of high-accuracy intermittently-operated cesium fountains, we reconsider the question of evaluating the accuracy of time scales which use an algorithm to span interruptions of the primary standard. We consider a broad class of calibration algorithms that can be evaluated and compared quantitatively for their accuracy in the presence of frequency drift and a full noise model (a mixture of white PM, flicker PM, white FM, flicker FM, and random walk FM noise). We present the analytic techniques for computing the standard uncertainty for the full noise model and this class of calibration algorithms. The simplest algorithm is evaluated to find the average-frequency uncertainty arising from the noise of the cesium fountain's local oscillator and from the noise of a hydrogen maser transfer-standard. This algorithm and known noise sources are shown to permit interlaboratory frequency transfer with a standard uncertainty of less than 10(exp -15) for periods of 30-100 days.

  14. Activity monitor accuracy in persons using canes.

    PubMed

    Wendland, Deborah Michael; Sprigle, Stephen H

    2012-01-01

    The StepWatch activity monitor has not been validated on multiple indoor and outdoor surfaces in a population using ambulation aids. The aims of this technical report are to report on strategies to configure the StepWatch activity monitor on subjects using a cane and to report the accuracy of both leg-mounted and cane-mounted StepWatch devices on people ambulating over different surfaces while using a cane. Sixteen subjects aged 67 to 85 yr (mean 75.6) who regularly use a cane for ambulation participated. StepWatch calibration was performed by adjusting sensitivity and cadence. Following calibration optimization, accuracy was tested on both the leg-mounted and cane-mounted devices on different surfaces, including linoleum, sidewalk, grass, ramp, and stairs. The leg-mounted device had an accuracy of 93.4% across all surfaces, while the cane-mounted device had an aggregate accuracy of 84.7% across all surfaces. Accuracy of the StepWatch on the stairs was significantly less accurate (p < 0.001) when comparing surfaces using repeated measures analysis of variance. When monitoring community mobility, placement of a StepWatch on a person and his/her ambulation aid can accurately document both activity and device use. PMID:23341318

  15. Asymptotic accuracy of two-class discrimination

    SciTech Connect

    Ho, T.K.; Baird, H.S.

    1994-12-31

    Poor quality (e.g., sparse or unrepresentative) training data is widely suspected to be one cause of disappointing accuracy of isolated-character classification in modern OCR machines. We conjecture that, for many trainable classification techniques, it is in fact the dominant factor affecting accuracy. To test this, we have carried out a study of the asymptotic accuracy of three dissimilar classifiers on a difficult two-character recognition problem. We state this problem precisely in terms of high-quality prototype images and an explicit model of the distribution of image defects. So stated, the problem can be represented as a stochastic source of an indefinitely long sequence of simulated images labeled with ground truth. Using this sequence, we were able to train all three classifiers to high and statistically indistinguishable asymptotic accuracies (99.9%). This result suggests that the quality of training data was the dominant factor affecting accuracy. The speed of convergence during training, as well as time/space trade-offs during recognition, differed among the classifiers.

  16. Decreased interoceptive accuracy following social exclusion.

    PubMed

    Durlik, Caroline; Tsakiris, Manos

    2015-04-01

    The need for social affiliation is one of the most important and fundamental human needs. Unsurprisingly, humans display strong negative reactions to social exclusion. In the present study, we investigated the effect of social exclusion on interoceptive accuracy - accuracy in detecting signals arising inside the body - measured with a heartbeat perception task. We manipulated exclusion using Cyberball, a widely used paradigm of a virtual ball-tossing game, with half of the participants being included during the game and the other half of participants being ostracized during the game. Our results indicated that heartbeat perception accuracy decreased in the excluded, but not in the included, participants. We discuss these results in the context of social and physical pain overlap, as well as in relation to internally versus externally oriented attention. PMID:25701592

  17. Affecting speed and accuracy in perception.

    PubMed

    Bocanegra, Bruno R

    2014-12-01

    An account of affective modulations in perceptual speed and accuracy (ASAP: Affecting Speed and Accuracy in Perception) is proposed and tested. This account assumes an emotion-induced inhibitory interaction between parallel channels in the visual system that modulates the onset latencies and response durations of visual signals. By trading off speed and accuracy between channels, this mechanism achieves (a) fast visuo-motor responding to course-grained information, and (b) accurate visuo-attentional selection of fine-grained information. ASAP gives a functional account of previously counterintuitive findings, and may be useful for explaining affective influences in both featural-level single-stimulus tasks and object-level multistimulus tasks. PMID:24853268

  18. Training in timing improves accuracy in golf.

    PubMed

    Libkuman, Terry M; Otani, Hajime; Steger, Neil

    2002-01-01

    In this experiment, the authors investigated the influence of training in timing on performance accuracy in golf. During pre- and posttesting, 40 participants hit golf balls with 4 different clubs in a golf course simulator. The dependent measure was the distance in feet that the ball ended from the target. Between the pre- and posttest, participants in the experimental condition received 10 hr of timing training with an instrument that was designed to train participants to tap their hands and feet in synchrony with target sounds. The participants in the control condition read literature about how to improve their golf swing. The results indicated that the participants in the experimental condition significantly improved their accuracy relative to the participants in the control condition, who did not show any improvement. We concluded that training in timing leads to improvement in accuracy, and that our results have implications for training in golf as well as other complex motor activities. PMID:12038497

  19. Final Technical Report: Increasing Prediction Accuracy.

    SciTech Connect

    King, Bruce Hardison; Hansen, Clifford; Stein, Joshua

    2015-12-01

    PV performance models are used to quantify the value of PV plants in a given location. They combine the performance characteristics of the system, the measured or predicted irradiance and weather at a site, and the system configuration and design into a prediction of the amount of energy that will be produced by a PV system. These predictions must be as accurate as possible in order for finance charges to be minimized. Higher accuracy equals lower project risk. The Increasing Prediction Accuracy project at Sandia focuses on quantifying and reducing uncertainties in PV system performance models.

  20. The accuracy of Halley's cometary orbits

    NASA Astrophysics Data System (ADS)

    Hughes, D. W.

    The accuracy of a scientific computation depends mainly on the data fed in and the analysis method used. This statement is certainly true of Edmond Halley's cometary orbit work. Of the 420 comets that had been seen before Halley's era of orbital calculation (1695 - 1702), only 24, according to him, had been observed well enough for their orbits to be calculated. Two questions are considered in this paper: do all the orbits listed by Halley have the same accuracy, and how accurate was Halley's method of calculation?

  1. Development and evaluation of a Kalman-filter algorithm for terminal area navigation using sensors of moderate accuracy

    NASA Technical Reports Server (NTRS)

    Kanning, G.; Cicolani, L. S.; Schmidt, S. F.

    1983-01-01

    Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system.
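
    As a reminder of the basic machinery being tuned in such a design, the sketch below implements one predict/update cycle of a generic linear Kalman filter in numpy. It is the textbook form, not the paper's simplified real-time filter (which relies on additional approximations to cut computation time); the toy constant-velocity example and all numbers are illustrative assumptions.

        import numpy as np

        def kalman_step(x, P, F, Q, H, R, z):
            """One predict/update cycle of a linear Kalman filter.
            x: state estimate, P: covariance, F: state transition, Q: process noise,
            H: measurement model, R: measurement noise, z: measurement vector."""
            # Predict
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # Update
            y = z - H @ x_pred                         # innovation
            S = H @ P_pred @ H.T + R                   # innovation covariance
            K = P_pred @ H.T @ np.linalg.inv(S)        # Kalman gain
            x_new = x_pred + K @ y
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new

        # Toy usage: 1-D constant-velocity model observed through noisy position fixes.
        dt = 1.0
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = 0.01 * np.eye(2)
        H = np.array([[1.0, 0.0]])
        R = np.array([[4.0]])
        x, P = np.zeros(2), np.eye(2)
        for z in [1.2, 2.1, 2.9, 4.2]:
            x, P = kalman_step(x, P, F, Q, H, R, np.array([z]))
        print("state estimate:", x)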

  2. Standardized accuracy assessment of the calypso wireless transponder tracking system

    NASA Astrophysics Data System (ADS)

    Franz, A. M.; Schmitt, D.; Seitel, A.; Chatrasingh, M.; Echner, G.; Oelfke, U.; Nill, S.; Birkfellner, W.; Maier-Hein, L.

    2014-11-01

    Electromagnetic (EM) tracking allows localization of small EM sensors in a magnetic field of known geometry without line-of-sight. However, this technique requires a cable connection to the tracked object. A wireless alternative based on magnetic fields, referred to as transponder tracking, has been proposed by several authors. Although most of the transponder tracking systems are still in an early stage of development and not ready for clinical use yet, Varian Medical Systems Inc. (Palo Alto, California, USA) presented the Calypso system for tumor tracking in radiation therapy which includes transponder technology. But it has not been used for computer-assisted interventions (CAI) in general or been assessed for accuracy in a standardized manner, so far. In this study, we apply a standardized assessment protocol presented by Hummel et al (2005 Med. Phys. 32 2371-9) to the Calypso system for the first time. The results show that transponder tracking with the Calypso system provides a precision and accuracy below 1 mm in ideal clinical environments, which is comparable with other EM tracking systems. Similar to other systems the tracking accuracy was affected by metallic distortion, which led to errors of up to 3.2 mm. The potential of the wireless transponder tracking technology for use in many future CAI applications can be regarded as extremely high.
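
    Standardized tracking assessments of the kind cited (Hummel et al 2005) typically separate precision (scatter of repeated measurements of a stationary sensor) from accuracy (error of measured displacements between known positions on a grid phantom). A hedged sketch of those two summary statistics is given below; the 50 mm spacing and all numbers are illustrative assumptions, not values from the study.

        import numpy as np

        def precision_rms(samples):
            """RMS deviation of repeated position samples (N x 3, mm) about their mean."""
            d = samples - samples.mean(axis=0)
            return float(np.sqrt((np.linalg.norm(d, axis=1) ** 2).mean()))

        def distance_accuracy(p_a, p_b, nominal_mm=50.0):
            """Error of a measured inter-position distance against the known phantom spacing."""
            return float(abs(np.linalg.norm(p_a - p_b) - nominal_mm))

        rng = np.random.default_rng(1)
        stationary = rng.normal(loc=[10.0, 20.0, 30.0], scale=0.1, size=(100, 3))
        print(f"precision (RMS jitter): {precision_rms(stationary):.2f} mm")
        print(f"distance error: {distance_accuracy(np.array([0., 0., 0.]), np.array([50.3, 0., 0.])):.2f} mm")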

  3. Prefrontal consolidation supports the attainment of fear memory accuracy

    PubMed Central

    Vieira, Philip A.; Lovelace, Jonathan W.; Corches, Alex; Rashid, Asim J.; Josselyn, Sheena A.

    2014-01-01

    The neural mechanisms underlying the attainment of fear memory accuracy for appropriate discriminative responses to aversive and nonaversive stimuli are unclear. Considerable evidence indicates that coactivator of transcription and histone acetyltransferase cAMP response element binding protein (CREB) binding protein (CBP) is critically required for normal neural function. CBP hypofunction leads to severe psychopathological symptoms in human and cognitive abnormalities in genetic mutant mice with severity dependent on the neural locus and developmental time of the gene inactivation. Here, we showed that an acute hypofunction of CBP in the medial prefrontal cortex (mPFC) results in a disruption of fear memory accuracy in mice. In addition, interruption of CREB function in the mPFC also leads to a deficit in auditory discrimination of fearful stimuli. While mice with deficient CBP/CREB signaling in the mPFC maintain normal responses to aversive stimuli, they exhibit abnormal responses to similar but nonrelevant stimuli when compared to control animals. These data indicate that improvement of fear memory accuracy involves mPFC-dependent suppression of fear responses to nonrelevant stimuli. Evidence from a context discriminatory task and a newly developed task that depends on the ability to distinguish discrete auditory cues indicated that CBP-dependent neural signaling within the mPFC circuitry is an important component of the mechanism for disambiguating the meaning of fear signals with two opposing values: aversive and nonaversive. PMID:25031365

  4. Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches

    PubMed Central

    Çelik, Ufuk; Yurtay, Nilüfer; Koç, Emine Rabia; Tepe, Nermin; Güllüoğlu, Halil; Ertaş, Mustafa

    2015-01-01

    The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. They are divided into four types: migraine, tension, cluster, and other primary headaches. After we took this main objective into consideration, three different neurologists were required to fill in the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. The AIS are classification algorithms that are inspired by the biological immune system mechanism that involves significant and distinct capabilities. These algorithms simulate the specialties of the immune system such as discrimination, learning, and the memorizing process in order to be used for classification, optimization, or pattern recognition. According to the results, the accuracy level of the classifier used in this study reached a success continuum ranging from 95% to 99%, except for the inconvenient one that yielded 71% accuracy. PMID:26075014

  5. Survey methods for assessing land cover map accuracy

    USGS Publications Warehouse

    Nusser, S.M.; Klaas, E.E.

    2003-01-01

    The increasing availability of digital photographic materials has fueled efforts by agencies and organizations to generate land cover maps for states, regions, and the United States as a whole. Regardless of the information sources and classification methods used, land cover maps are subject to numerous sources of error. In order to understand the quality of the information contained in these maps, it is desirable to generate statistically valid estimates of accuracy rates describing misclassification errors. We explored a full sample survey framework for creating accuracy assessment study designs that balance statistical and operational considerations in relation to study objectives for a regional assessment of GAP land cover maps. We focused not only on appropriate sample designs and estimation approaches, but on aspects of the data collection process, such as gaining cooperation of land owners and using pixel clusters as an observation unit. The approach was tested in a pilot study to assess the accuracy of Iowa GAP land cover maps. A stratified two-stage cluster sampling design addressed sample size requirements for land covers and the need for geographic spread while minimizing operational effort. Recruitment methods used for private land owners yielded high response rates, minimizing a source of nonresponse error. Collecting data for a 9-pixel cluster centered on the sampled pixel was simple to implement, and provided better information on rarer vegetation classes as well as substantial gains in precision relative to observing data at a single-pixel.
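
    As a simplified illustration of how such a design yields accuracy estimates with confidence limits, the sketch below computes a stratified estimate of overall map accuracy from per-stratum agreement rates, with stratum weights proportional to mapped area. It collapses the two-stage cluster structure into a single stage, so it understates the design effect; it is a hedged toy with made-up numbers, not the estimator used in the pilot study.

        import numpy as np

        # Made-up strata: mapped-area weights, sample sizes, and observed agreement rates.
        weights = np.array([0.5, 0.3, 0.2])     # proportion of map area per stratum
        n       = np.array([120, 80, 60])       # reference samples drawn per stratum
        p_hat   = np.array([0.92, 0.85, 0.70])  # per-stratum accuracy (agreement) rates

        p_strat = np.sum(weights * p_hat)                          # stratified accuracy estimate
        var     = np.sum(weights**2 * p_hat * (1 - p_hat) / n)     # ignores the cluster stage
        ci = 1.96 * np.sqrt(var)
        print(f"overall accuracy {p_strat:.3f} +/- {ci:.3f} (approx. 95% CI)")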

  6. Accuracy assessment of fluoroscopy-transesophageal echocardiography registration

    NASA Astrophysics Data System (ADS)

    Lang, Pencilla; Seslija, Petar; Bainbridge, Daniel; Guiraudon, Gerard M.; Jones, Doug L.; Chu, Michael W.; Holdsworth, David W.; Peters, Terry M.

    2011-03-01

    This study assesses the accuracy of a new transesophageal (TEE) ultrasound (US) fluoroscopy registration technique designed to guide percutaneous aortic valve replacement. In this minimally invasive procedure, a valve is inserted into the aortic annulus via a catheter. Navigation and positioning of the valve is guided primarily by intra-operative fluoroscopy. Poor anatomical visualization of the aortic root region can result in incorrect positioning, leading to heart valve embolization, obstruction of the coronary ostia and acute kidney injury. The use of TEE US images to augment intra-operative fluoroscopy provides significant improvements to image-guidance. Registration is achieved using an image-based TEE probe tracking technique and US calibration. TEE probe tracking is accomplished using a single-perspective pose estimation algorithm. Pose estimation from a single image allows registration to be achieved using only images collected in standard OR workflow. Accuracy of this registration technique is assessed using three models: a point target phantom, a cadaveric porcine heart with implanted fiducials, and in-vivo porcine images. Results demonstrate that registration can be achieved with an RMS error of less than 1.5mm, which is within the clinical accuracy requirements of 5mm. US-fluoroscopy registration based on single-perspective pose estimation demonstrates promise as a method for providing guidance to percutaneous aortic valve replacement procedures. Future work will focus on real-time implementation and a visualization system that can be used in the operating room.
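
    Single-perspective pose estimation of the kind used to track the TEE probe in fluoroscopy images is, in its general form, the classic perspective-n-point problem. The hedged sketch below uses OpenCV's solvePnP on hypothetical fiducial coordinates simply to show the ingredients (3-D model points, their detected 2-D image positions, and assumed pinhole intrinsics for the imaging chain); it is not the registration pipeline described in the paper, and every value is made up.

        import numpy as np
        import cv2

        # Hypothetical 3-D fiducial positions on the probe, in probe coordinates (mm).
        object_points = np.array([[0, 0, 0], [20, 0, 0], [0, 20, 0],
                                  [0, 0, 20], [20, 20, 0], [20, 0, 20]], dtype=np.float64)
        # Their approximate detected 2-D positions in the image (pixels) -- made-up values.
        image_points = np.array([[320, 240], [380, 240], [320, 300],
                                 [320, 240], [380, 300], [378, 240]], dtype=np.float64)
        # Assumed pinhole intrinsics (focal length and principal point in pixels).
        K = np.array([[1500.0, 0.0, 320.0],
                      [0.0, 1500.0, 240.0],
                      [0.0, 0.0, 1.0]])
        dist_coeffs = np.zeros(5)  # assume lens/image distortion already corrected

        # Recover the probe pose (rotation as a Rodrigues vector, translation in mm).
        ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, dist_coeffs)
        print("pose found:", ok, "rvec:", rvec.ravel(), "tvec:", tvec.ravel())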

  7. Speed-Accuracy Response Models: Scoring Rules Based on Response Time and Accuracy

    ERIC Educational Resources Information Center

    Maris, Gunter; van der Maas, Han

    2012-01-01

    Starting from an explicit scoring rule for time limit tasks incorporating both response time and accuracy, and a definite trade-off between speed and accuracy, a response model is derived. Since the scoring rule is interpreted as a sufficient statistic, the model belongs to the exponential family. The various marginal and conditional distributions…
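
    For orientation, scoring rules of this kind combine correctness and speed into a single number. A well-known example from this literature is the signed residual time rule, stated here from general knowledge rather than quoted from the article: a response scored X in {0, 1} (incorrect/correct), given after response time T on an item with time limit d, receives

        S = (2X - 1)(d - T)

    so a fast correct answer earns a large positive score, a fast error a large negative one, and answers near the deadline contribute little either way.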

  8. Direct Behavior Rating: Considerations for Rater Accuracy

    ERIC Educational Resources Information Center

    Harrison, Sayward E.; Riley-Tillman, T. Chris; Chafouleas, Sandra M.

    2014-01-01

    Direct behavior rating (DBR) offers users a flexible, feasible method for the collection of behavioral data. Previous research has supported the validity of using DBR to rate three target behaviors: academic engagement, disruptive behavior, and compliance. However, the effect of the base rate of behavior on rater accuracy has not been established.…

  9. Vowel Space Characteristics and Vowel Identification Accuracy

    ERIC Educational Resources Information Center

    Neel, Amy T.

    2008-01-01

    Purpose: To examine the relation between vowel production characteristics and intelligibility. Method: Acoustic characteristics of 10 vowels produced by 45 men and 48 women from the J. M. Hillenbrand, L. A. Getty, M. J. Clark, and K. Wheeler (1995) study were examined and compared with identification accuracy. Global (mean f0, F1, and F2;…

  10. Seasonal Effects on GPS PPP Accuracy

    NASA Astrophysics Data System (ADS)

    Saracoglu, Aziz; Ugur Sanli, D.

    2016-04-01

    GPS Precise Point Positioning (PPP) is now routinely used in many geophysical applications. Static positioning and 24 h of data are required for high-precision results; however, real-life situations do not always allow 24 h of data to be collected. Thus repeated GPS surveys with 8-10 h observation sessions are still used by some research groups. Positioning solutions from shorter data spans are subject to various systematic influences, and the positioning quality as well as the estimated velocity is degraded. Researchers therefore pay attention to the accuracy of GPS positions and of the estimated velocities derived from short observation sessions. Recently some research groups have turned their attention to the study of seasonal effects (i.e. meteorological seasons) on GPS solutions. Up to now, mostly regional studies have been reported. In this study, we adopt a global approach and study the various seasonal effects (including the effect of the annual signal) on GPS solutions produced from short observation sessions. We use the PPP module of NASA/JPL's GIPSY/OASIS II software and data from globally distributed GPS stations of the International GNSS Service. Accuracy studies were previously performed with 10-30 consecutive days of continuous data. Here, data from each month of the year, over two consecutive years, are used in the analysis. Our major conclusion is that a reformulation of GPS positioning accuracy is necessary when seasonal effects are taken into account, and the typical one-term accuracy formulation is expanded to a two-term one.

  11. Accuracy Assessment for AG500, Electromagnetic Articulograph

    ERIC Educational Resources Information Center

    Yunusova, Yana; Green, Jordan R.; Mefferd, Antje

    2009-01-01

    Purpose: The goal of this article was to evaluate the accuracy and reliability of the AG500 (Carstens Medizinelectronik, Lenglern, Germany), an electromagnetic device developed recently to register articulatory movements in three dimensions. This technology seems to have unprecedented capabilities to provide rich information about time-varying…

  12. 47 CFR 65.306 - Calculation accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... 47 Telecommunication 3 2011-10-01 2011-10-01 false Calculation accuracy. 65.306 Section 65.306 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Exchange Carriers § 65.306 Calculation...

  13. Observed Consultation: Confidence and Accuracy of Assessors

    ERIC Educational Resources Information Center

    Tweed, Mike; Ingham, Christopher

    2010-01-01

    Judgments made by the assessors observing consultations are widely used in the assessment of medical students. The aim of this research was to study judgment accuracy and confidence and the relationship between these. Assessors watched recordings of consultations, scoring the students on: a checklist of items; attributes of consultation; a…

  14. Accuracy of References in Five Entomology Journals.

    ERIC Educational Resources Information Center

    Kristof, Cynthia

    In this paper, the bibliographical references in five core entomology journals are examined for citation accuracy in order to determine if the error rates are similar. Every reference printed in each journal's first issue of 1992 was examined, and these were compared to the original (cited) publications, if possible, in order to determine the…

  15. 47 CFR 65.306 - Calculation accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 47 Telecommunication 3 2010-10-01 2010-10-01 false Calculation accuracy. 65.306 Section 65.306 Telecommunication FEDERAL COMMUNICATIONS COMMISSION (CONTINUED) COMMON CARRIER SERVICES (CONTINUED) INTERSTATE RATE OF RETURN PRESCRIPTION PROCEDURES AND METHODOLOGIES Exchange Carriers § 65.306 Calculation...

  16. Bullet trajectory reconstruction - Methods, accuracy and precision.

    PubMed

    Mattijssen, Erwin J A T; Kerkhoff, Wim

    2016-05-01

    Based on the spatial relation between a primary and secondary bullet defect or on the shape and dimensions of the primary bullet defect, a bullet's trajectory prior to impact can be estimated for a shooting scene reconstruction. The accuracy and precision of the estimated trajectories will vary depending on variables such as the applied method of reconstruction, the (true) angle of incidence, the properties of the target material, and the properties of the bullet upon impact. This study focused on the accuracy and precision of estimated bullet trajectories when different variants of the probing method, ellipse method, and lead-in method are applied on bullet defects resulting from shots at various angles of incidence on drywall, MDF and sheet metal. The results show that in most situations the best performance (accuracy and precision) is seen when the probing method is applied. Only for the lowest angles of incidence was the performance better when either the ellipse or lead-in method was applied. The data provided in this paper can be used to select the appropriate method(s) for reconstruction, to correct for systematic errors (accuracy), and to provide a value for the precision by means of a confidence interval for the specific measurement. PMID:27044032
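
    Of the methods compared above, the ellipse method is the easiest to state compactly: the angle of incidence is commonly estimated from the ratio of the elliptical defect's width to its length. The sketch below shows that general rule of thumb, not the specific variants evaluated in the paper; the function name and example measurements are illustrative assumptions.

        import math

        def ellipse_angle_of_incidence(width_mm, length_mm):
            """Estimate the angle of incidence (degrees, measured from the target surface)
            from the minor/major axis lengths of an elliptical bullet defect."""
            ratio = min(width_mm / length_mm, 1.0)   # guard against measurement noise
            return math.degrees(math.asin(ratio))

        print(f"{ellipse_angle_of_incidence(6.0, 12.0):.1f} deg")  # width half the length -> 30 deg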

  17. Measuring Tracking Accuracy of CCD Imagers

    NASA Technical Reports Server (NTRS)

    Stanton, R. H.; Dennison, E. W.

    1985-01-01

    Tracking accuracy and resolution of charge-coupled device (CCD) imaging arrays are measured by an instrument originally developed for measuring the performance of a star-tracking telescope. The instrument operates by projecting one or more artificial star images on the surface of the CCD array, moving the stars in controlled patterns, and comparing star locations computed from the CCD outputs with those calculated from the step coordinates of a micropositioner.

  18. Accuracy of Digital vs. Conventional Implant Impressions

    PubMed Central

    Lee, Sang J.; Betensky, Rebecca A.; Gianneschi, Grace E.; Gallucci, German O.

    2015-01-01

    The accuracy of digital impressions greatly influences the clinical viability in implant restorations. The aim of this study is to compare the accuracy of gypsum models acquired from the conventional implant impression to digitally milled models created from direct digitalization by three-dimensional analysis. Thirty gypsum and 30 digitally milled models impressed directly from a reference model were prepared. The models were scanned by a laboratory scanner and 30 STL datasets from each group were imported to an inspection software. The datasets were aligned to the reference dataset by a repeated best fit algorithm and 10 specified contact locations of interest were measured in mean volumetric deviations. The areas were pooled by cusps, fossae, interproximal contacts, horizontal and vertical axes of implant position and angulation. The pooled areas were statistically analysed by comparing each group to the reference model to investigate the mean volumetric deviations accounting for accuracy and standard deviations for precision. Milled models from digital impressions had comparable accuracy to gypsum models from conventional impressions. However, differences in fossae and vertical displacement of the implant position from the gypsum and digitally milled models compared to the reference model, exhibited statistical significance (p<0.001, p=0.020 respectively). PMID:24720423

  19. Maximum expected accuracy structural neighbors of an RNA secondary structure

    PubMed Central

    2012-01-01

    Background Since RNA molecules regulate genes and control alternative splicing by allostery, it is important to develop algorithms to predict RNA conformational switches. Some tools, such as paRNAss, RNAshapes and RNAbor, can be used to predict potential conformational switches; nevertheless, no existent tool can detect general (i.e., not family specific) entire riboswitches (both aptamer and expression platform) with accuracy. Thus, the development of additional algorithms to detect conformational switches seems important, especially since the difference in free energy between the two metastable secondary structures may be as large as 15-20 kcal/mol. It has recently emerged that RNA secondary structure can be more accurately predicted by computing the maximum expected accuracy (MEA) structure, rather than the minimum free energy (MFE) structure. Results Given an arbitrary RNA secondary structure S_0 for an RNA nucleotide sequence a = a_1, ..., a_n, we say that another secondary structure S of a is a k-neighbor of S_0, if the base pair distance between S_0 and S is k. In this paper, we prove that the Boltzmann probability of all k-neighbors of the minimum free energy structure S_0 can be approximated with accuracy ε and confidence 1 - p, simultaneously for all 0 ≤ k < K, by a relative frequency count over N sampled structures, provided that N > N(ε, p, K) = [Φ^{-1}(p/(2K))]^2 / (4ε^2), where Φ(z) is the cumulative distribution function (CDF) for the standard normal distribution. We go on to describe the algorithm RNAborMEA, which for an arbitrary initial structure S_0 and for all values 0 ≤ k < K, computes the secondary structure MEA(k), having maximum expected accuracy over all k-neighbors of S_0. Computation time is O(n^3 · K^2), and memory requirements are O(n^2 · K). We analyze a sample TPP riboswitch, and apply our algorithm to the class of purine riboswitches. Conclusions The approximation of RNAbor by sampling, with rigorous bound on accuracy, together with the computation of
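
    The sampling bound above is the usual normal-approximation sample size for estimating K proportions simultaneously to within half-width ε at overall confidence 1 - p (Bonferroni-corrected). A hedged illustration of evaluating N(ε, p, K) with arbitrary example values is given below; the function name and the chosen ε, p, K are assumptions for the example only.

        from scipy.stats import norm

        def required_samples(eps, p, K):
            """N(eps, p, K) = (Phi^{-1}(p / (2K)))**2 / (4 * eps**2): number of sampled
            structures needed to estimate all K neighbor-class probabilities to within
            eps with simultaneous confidence 1 - p (normal approximation)."""
            z = norm.ppf(p / (2 * K))   # lower-tail quantile; it is squared, so the sign is irrelevant
            return (z ** 2) / (4 * eps ** 2)

        # Example values (arbitrary): eps = 0.01, p = 0.05, K = 20 neighbor classes.
        print(f"N > {required_samples(0.01, 0.05, 20):.0f} sampled structures")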

  20. 47 CFR 400.8 - Non-compliance.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... Telecommunication NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM § 400.8 Non... its certification related to the diversion of E-911 charges, the State shall be required to return...

  1. 47 CFR 400.8 - Non-compliance.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... Telecommunication NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM § 400.8 Non... its certification related to the diversion of E-911 charges, the State shall be required to return...

  2. 47 CFR 400.8 - Non-compliance.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... Telecommunication NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM § 400.8 Non... its certification related to the diversion of E-911 charges, the State shall be required to return...

  3. 47 CFR 400.8 - Non-compliance.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... Telecommunication NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM § 400.8 Non... its certification related to the diversion of E-911 charges, the State shall be required to return...

  4. 47 CFR 400.8 - Non-compliance.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... Telecommunication NATIONAL TELECOMMUNICATIONS AND INFORMATION ADMINISTRATION, DEPARTMENT OF COMMERCE, AND NATIONAL HIGHWAY TRAFFIC SAFETY ADMINISTRATION, DEPARTMENT OF TRANSPORTATION E-911 GRANT PROGRAM § 400.8 Non... its certification related to the diversion of E-911 charges, the State shall be required to return...

  5. Eligibility Requirements

    MedlinePlus

    Eligibility requirements for blood donation: donors must weigh at least 110 lbs.; additional weight requirements apply for donors 18 years old and younger.

  6. Analyzing thematic maps and mapping for accuracy

    USGS Publications Warehouse

    Rosenfield, G.H.

    1982-01-01

    Two problems which exist while attempting to test the accuracy of thematic maps and mapping are: (1) evaluating the accuracy of thematic content, and (2) evaluating the effects of the variables on thematic mapping. Statistical analysis techniques are applicable to both these problems and include techniques for sampling the data and determining their accuracy. In addition, techniques for hypothesis testing, or inferential statistics, are used when comparing the effects of variables. A comprehensive and valid accuracy test of a classification project, such as thematic mapping from remotely sensed data, includes the following components of statistical analysis: (1) sample design, including the sample distribution, sample size, size of the sample unit, and sampling procedure; and (2) accuracy estimation, including estimation of the variance and confidence limits. Careful consideration must be given to the minimum sample size necessary to validate the accuracy of a given classification category. The results of an accuracy test are presented in a contingency table sometimes called a classification error matrix. Usually the rows represent the interpretation, and the columns represent the verification. The diagonal elements represent the correct classifications. The remaining elements of the rows represent errors by commission, and the remaining elements of the columns represent the errors of omission. For tests of hypothesis that compare variables, the general practice has been to use only the diagonal elements from several related classification error matrices. These data are arranged in the form of another contingency table. The columns of the table represent the different variables being compared, such as different scales of mapping. The rows represent the blocking characteristics, such as the various categories of classification. The values in the cells of the tables might be the counts of correct classification or the binomial proportions of these counts divided by
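
    Following the convention described above (rows = interpretation, columns = verification, diagonal = correct classifications), overall accuracy and the commission/omission error rates can be read directly off the contingency table. The small numpy sketch below uses a made-up 3-class error matrix purely for illustration.

        import numpy as np

        # Made-up 3-class classification error matrix: rows = interpreted class,
        # columns = verified (reference) class, diagonal = correct classifications.
        cm = np.array([[50,  3,  2],
                       [ 4, 40,  6],
                       [ 1,  5, 39]], dtype=float)

        overall_accuracy = np.trace(cm) / cm.sum()
        # Commission error: off-diagonal share of each row (mapped as class i, but wrong).
        commission = 1.0 - np.diag(cm) / cm.sum(axis=1)
        # Omission error: off-diagonal share of each column (truly class j, but missed).
        omission = 1.0 - np.diag(cm) / cm.sum(axis=0)

        print(f"overall accuracy: {overall_accuracy:.2%}")
        print("commission error by class:", np.round(commission, 3))
        print("omission error by class:", np.round(omission, 3))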

  7. The impact of accuracy motivation on interpretation, comparison, and correction processes: accuracy x knowledge accessibility effects.

    PubMed

    Stapel, D A; Koomen, W; Zeelenberg, M

    1998-04-01

    Four studies provide evidence for the notion that there may be boundaries to the extent to which accuracy motivation may help perceivers to escape the influence of fortuitously activated information. Specifically, although accuracy motivations may eliminate assimilative accessibility effects, they are less likely to eliminate contrastive accessibility effects. It was found that the occurrence of different types of contrast effects (comparison and correction) was not significantly affected by participants' accuracy motivations. Furthermore, it was found that the mechanisms instigated by accuracy motivations differ from those ignited by correction instructions: Accuracy motivations attenuate assimilation effects because perceivers add target interpretations to the one suggested by primed information. Conversely, it was found that correction instructions yield contrast and prompt respondents to remove the priming event's influence from their reaction to the target. PMID:9569650

  8. Holter triage ambulatory ECG analysis. Accuracy and time efficiency.

    PubMed

    Cooper, D H; Kennedy, H L; Lyyski, D S; Sprague, M K

    1996-01-01

    Triage ambulatory electrocardiographic (ECG) analysis permits relatively unskilled office workers to submit 24-hour ambulatory ECG Holter tapes to an automatic instrument (model 563, Del Mar Avionics, Irvine, CA) for interpretation. The instrument system "triages" what it is capable of automatically interpreting and rejects those tapes (with high ventricular arrhythmia density) requiring thorough analysis. Nevertheless, a trained cardiovascular technician ultimately edits what is accepted for analysis. This study examined the clinical validity of one manufacturer's triage instrumentation with regard to accuracy and time efficiency for interpreting ventricular arrhythmia. A database of 50 Holter tapes stratified for frequency of ventricular ectopic beats (VEBs) was examined by triage, conventional, and full-disclosure hand-count Holter analysis. Half of the tapes were found to be automatically analyzable by the triage method. Comparison of the VEB accuracy of triage versus conventional analysis using the full-disclosure hand count as the standard showed that triage analysis overall appeared as accurate as conventional Holter analysis but had limitations in detecting ventricular tachycardia (VT) runs. Overall sensitivity, positive predictive accuracy, and false positive rate for the triage ambulatory ECG analysis were 96, 99, and 0.9%, respectively, for isolated VEBs, 92, 93, and 7%, respectively, for ventricular couplets, and 48, 93, and 7%, respectively, for VT. Error in VT detection by triage analysis occurred on a single tape. Of the remaining 11 tapes containing VT runs, accuracy was significantly increased, with a sensitivity of 86%, positive predictive accuracy of 90%, and false positive rate of 10%. Stopwatch-recorded time efficiency was carefully logged during both triage and conventional ambulatory ECG analysis and divided into five time phases: secretarial, machine, analysis, editing, and total time. Triage analysis was significantly (P < .05) more time

  9. Accuracy and Efficiency in Fixed-Point Neural ODE Solvers.

    PubMed

    Hopkins, Michael; Furber, Steve

    2015-10-01

    Simulation of neural behavior on digital architectures often requires the solution of ordinary differential equations (ODEs) at each step of the simulation. For some neural models, this is a significant computational burden, so efficiency is important. Accuracy is also relevant because solutions can be sensitive to model parameterization and time step. These issues are emphasized on fixed-point processors like the ARM unit used in the SpiNNaker architecture. Using the Izhikevich neural model as an example, we explore some solution methods, showing how specific techniques can be used to find balanced solutions. We have investigated a number of important and related issues, such as introducing explicit solver reduction (ESR) for merging an explicit ODE solver and autonomous ODE into one algebraic formula, with benefits for both accuracy and speed; a simple, efficient mechanism for cancelling the cumulative lag in state variables caused by threshold crossing between time steps; an exact result for the membrane potential of the Izhikevich model with the other state variable held fixed. Parametric variations of the Izhikevich neuron show both similarities and differences in terms of algorithms and arithmetic types that perform well, making an overall best solution challenging to identify, but we show that particular cases can be improved significantly using the techniques described. Using a 1 ms simulation time step and 32-bit fixed-point arithmetic to promote real-time performance, one of the second-order Runge-Kutta methods looks to be the best compromise; Midpoint for speed or Trapezoid for accuracy. SpiNNaker offers an unusual combination of low energy use and real-time performance, so some compromises on accuracy might be expected. However, with a careful choice of approach, results comparable to those of general-purpose systems should be possible in many realistic cases. PMID:26313605
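
    For context, the Izhikevich model referred to above is the two-variable system dv/dt = 0.04v^2 + 5v + 140 - u + I, du/dt = a(bv - u), with the reset v <- c, u <- u + d whenever v reaches 30 mV. The hedged sketch below advances it with a second-order (midpoint) Runge-Kutta step at a 1 ms time step in ordinary floating point; it does not reproduce the paper's fixed-point arithmetic, ESR merging, or lag-cancellation mechanism, and the parameter values are the standard regular-spiking defaults rather than values from the paper.

        def izhikevich_rhs(v, u, I, a=0.02, b=0.2):
            """Right-hand side of the Izhikevich neuron model."""
            dv = 0.04 * v * v + 5.0 * v + 140.0 - u + I
            du = a * (b * v - u)
            return dv, du

        def midpoint_step(v, u, I, h=1.0, c=-65.0, d=8.0):
            """One RK2 (midpoint) step of length h ms, with threshold/reset handling."""
            k1v, k1u = izhikevich_rhs(v, u, I)
            k2v, k2u = izhikevich_rhs(v + 0.5 * h * k1v, u + 0.5 * h * k1u, I)
            v, u = v + h * k2v, u + h * k2u
            fired = v >= 30.0
            if fired:                 # spike: reset membrane potential and recovery variable
                v, u = c, u + d
            return v, u, fired

        v, u, spikes = -65.0, -13.0, 0
        for step in range(1000):      # 1 s of simulated time at h = 1 ms
            v, u, fired = midpoint_step(v, u, I=10.0)
            spikes += fired
        print("spikes in 1 s:", spikes)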

  10. Factors Affecting Accuracy of Data Abstracted from Medical Records

    PubMed Central

    Zozus, Meredith N.; Pieper, Carl; Johnson, Constance M.; Johnson, Todd R.; Franklin, Amy; Smith, Jack; Zhang, Jiajie

    2015-01-01

    Objective Medical record abstraction (MRA) is often cited as a significant source of error in research data, yet MRA methodology has rarely been the subject of investigation. Lack of a common framework has hindered application of the extant literature in practice, and, until now, there were no evidence-based guidelines for ensuring data quality in MRA. We aimed to identify the factors affecting the accuracy of data abstracted from medical records and to generate a framework for data quality assurance and control in MRA. Methods Candidate factors were identified from published reports of MRA. Content validity of the top candidate factors was assessed via a four-round two-group Delphi process with expert abstractors with experience in clinical research, registries, and quality improvement. The resulting coded factors were categorized into a control theory-based framework of MRA. Coverage of the framework was evaluated using the recent published literature. Results Analysis of the identified articles yielded 292 unique factors that affect the accuracy of abstracted data. Delphi processes overall refuted three of the top factors identified from the literature based on importance and five based on reliability (six total factors refuted). Four new factors were identified by the Delphi. The generated framework demonstrated comprehensive coverage. Significant underreporting of MRA methodology in recent studies was discovered. Conclusion The framework generated from this research provides a guide for planning data quality assurance and control for studies using MRA. The large number and variability of factors indicate that while prospective quality assurance likely increases the accuracy of abstracted data, monitoring the accuracy during the abstraction process is also required. Recent studies reporting research results based on MRA rarely reported data quality assurance or control measures, and even less frequently reported data quality metrics with research results. Given

  11. A Quest for Measuring Ion Bunch Longitudinal Profiles with One Picosecond Accuracy in the SNS Linac.

    SciTech Connect

    Aleksandrov, Alexander V; Dickson, Richard W

    2012-01-01

    The SNS linac utilizes several accelerating structures operating at different frequencies and with different transverse focusing structures. Low-loss beam transport requires a careful matching at the transition points in both the transverse and longitudinal axes. Longitudinal beam parameters are measured using four Bunch Shape Monitors (used at many ion accelerator facilities, aka Feschenko devices). These devices, as initially delivered to the SNS, provided an estimated accuracy of about 5 picoseconds, which was sufficient for the initial beam commissioning. New challenges of improving beam transport for higher power operation will require measuring bunch profiles with 1-2 picoseconds accuracy. We have successfully implemented a number of improvements to maximize the performance characteristics of the delivered devices. We will discuss the current status of this instrument, its ultimate theoretical limit of accuracy, and how we measure its accuracy and resolution with real beam conditions.

  12. System accuracy evaluation of the GlucoRx nexus voice TD-4280 blood glucose monitoring system.

    PubMed

    Khan, Muhammad; Broadbent, Keith; Morris, Mike; Ewins, David; Joseph, Franklin

    2014-01-01

    Use of blood glucose (BG) meters in the self-monitoring of blood glucose (SMBG) significantly lowers the risk of diabetic complications. With several BG meters now commercially available, the International Organization for Standardization (ISO) ensures that each BG meter conforms to a set degree of accuracy. Although adherence to ISO guidelines is a prerequisite for commercialization in Europe, several BG meters claim to meet the ISO guidelines yet fail to do so on internal validation. We conducted a study to determine whether the accuracy of the GlucoRx Nexus TD-4280 meter, utilized by our department for its cost-effectiveness, complied with ISO guidelines. 105 patients requiring laboratory blood glucose analysis were randomly selected, and reference measurements were determined by the UniCel DxC 800 clinical system. Overall the BG meter failed to adhere to the ≥95% accuracy criterion required by both the ISO 15197:2003 (overall accuracy 92.4%) and ISO 15197:2013 protocols (overall accuracy 86.7%). Inaccurate meters have an inherent risk of over- and/or underestimating the true BG concentration, thereby exposing patients to incorrect therapeutic interventions. Our study demonstrates the importance of internally validating the accuracy of BG meters to ensure that their accuracy meets standardized guidelines. PMID:25374434
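
    For context, a minimal sketch of the kind of system-accuracy check reported above: the fraction of meter readings falling within the ISO 15197:2013 tolerance (within 15 mg/dL of the reference below 100 mg/dL, within 15% at or above it), compared against the >=95% pass criterion. The reference and meter arrays below are simulated placeholders, not the study's measurements.

```python
import numpy as np

def iso15197_2013_pass_rate(reference_mgdl, meter_mgdl):
    """Fraction of meter readings within the ISO 15197:2013 tolerance:
    +/-15 mg/dL when the reference is < 100 mg/dL, +/-15% otherwise."""
    ref = np.asarray(reference_mgdl, dtype=float)
    meas = np.asarray(meter_mgdl, dtype=float)
    tol = np.where(ref < 100.0, 15.0, 0.15 * ref)
    return np.mean(np.abs(meas - ref) <= tol)

# Illustrative data only (not the study's measurements)
rng = np.random.default_rng(0)
ref = rng.uniform(50, 300, size=105)                  # laboratory reference values, mg/dL
meter = ref * (1 + rng.normal(0, 0.08, size=105))     # simulated meter with ~8% noise
rate = iso15197_2013_pass_rate(ref, meter)
print(f"within tolerance: {rate:.1%} -> {'meets' if rate >= 0.95 else 'fails'} the >=95% criterion")
```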

  13. The Attribute Accuracy Assessment of Land Cover Data in the National Geographic Conditions Survey

    NASA Astrophysics Data System (ADS)

    Ji, X.; Niu, X.

    2014-04-01

    With the nationwide survey of geographic conditions under way, object-based data have become the most common data organization pattern in land cover research. The accuracy of object-based land cover data affects many stages of data production, such as the efficiency of in-house production and the quality of the final land cover products, so there is considerable demand for accuracy assessment of object-based classification maps. Traditional accuracy assessment approaches in surveying and mapping were not designed for land cover data, so methods from imagery classification must be employed; however, traditional pixel-based accuracy assessment methods are inadequate for this purpose. The measures we improved are based on the error matrix but use objects as sample units, because pixel sample units are not suitable for assessing the accuracy of object-based classification results. Compared with pixel samples, object samples are no longer uniform in size. To keep the indices derived from the error matrix reliable, we use the areas of the object samples as weights when constructing the error matrix of the object-based classification map. We compare two error matrices, one built from the number of object samples and one from the sum of their areas. The error matrix based on the summed area of the object samples proves to be an intuitive, useful technique for reflecting the actual accuracy of object-based imagery classification results.
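
    A minimal sketch of the area-weighted error matrix described above: each object sample contributes its polygon area rather than a unit count, and overall accuracy is the area-weighted trace, compared here against the count-based matrix. The class labels and areas are illustrative placeholders, not survey data.

```python
import numpy as np

def error_matrix(mapped, reference, weights, n_classes):
    """Error matrix where each object sample contributes its weight (area or unit count)."""
    m = np.zeros((n_classes, n_classes))
    for c_map, c_ref, w in zip(mapped, reference, weights):
        m[c_map, c_ref] += w
    return m

def overall_accuracy(matrix):
    return np.trace(matrix) / matrix.sum()

# Illustrative object samples: mapped class, reference class, polygon area (ha)
mapped    = [0, 0, 1, 1, 2, 2, 1, 0]
reference = [0, 1, 1, 1, 2, 0, 1, 0]
areas     = [12.0, 3.5, 8.2, 1.1, 20.0, 0.7, 5.0, 9.3]

m_area  = error_matrix(mapped, reference, areas, n_classes=3)
m_count = error_matrix(mapped, reference, np.ones(len(areas)), n_classes=3)
print("area-weighted overall accuracy:", round(overall_accuracy(m_area), 3))
print("count-based overall accuracy  :", round(overall_accuracy(m_count), 3))
```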

  14. Assessing the Accuracy of the Precise Point Positioning Technique

    NASA Astrophysics Data System (ADS)

    Bisnath, S. B.; Collins, P.; Seepersad, G.

    2012-12-01

    The Precise Point Positioning (PPP) GPS data processing technique has developed over the past 15 years to become a standard method for growing categories of positioning and navigation applications. The technique relies on single-receiver point positioning combined with the use of precise satellite orbit and clock information and high-fidelity error modelling. The research presented here uniquely addresses the current accuracy of the technique, explains the limits of performance, and defines paths to improvements. For geodetic purposes, performance refers to daily static position accuracy. PPP processing of over 80 IGS stations over one week results in rms positioning errors of a few millimetres in the north and east components and a few centimetres in the vertical (all one-sigma values). Larger error statistics for real-time and kinematic processing are also given. GPS PPP with ambiguity resolution processing is also carried out, producing slight improvements over the float solution results. These results are categorised into quality classes in order to analyse the root error causes of the resultant accuracies: "best", "worst", multipath, site displacement effects, satellite availability and geometry, etc. Also of interest in PPP performance is the solution convergence period. Static, conventional solutions are slow to converge, with approximately 35 minutes required for 95% of solutions to reach 20 cm or better horizontal accuracy. Ambiguity resolution can significantly reduce this period without biasing solutions. The definition of a PPP error budget is a complex task even with the resulting numerical assessment because, unlike the epoch-by-epoch processing in the Standard Positioning Service, PPP processing involves filtering. An attempt is made here to 1) define the magnitude of each error source in terms of range, 2) transform ranging error to position error via Dilution Of Precision (DOP), and 3) scale the DOP through the filtering process. The result is a deeper
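
    A minimal sketch of step 2 of the error-budget outline above, under assumed values: a ranging error is scaled to a position error through dilution-of-precision factors computed from an illustrative satellite geometry (the filtering of step 3 is not modelled here).

```python
import numpy as np

def dops(unit_los):
    """PDOP/HDOP/VDOP from receiver-to-satellite unit vectors in an ENU frame.
    Design matrix rows are [-e, -n, -u, 1] (three position states plus receiver clock)."""
    A = np.hstack([-np.asarray(unit_los), np.ones((len(unit_los), 1))])
    Q = np.linalg.inv(A.T @ A)
    hdop = np.sqrt(Q[0, 0] + Q[1, 1])
    vdop = np.sqrt(Q[2, 2])
    pdop = np.sqrt(Q[0, 0] + Q[1, 1] + Q[2, 2])
    return pdop, hdop, vdop

# Illustrative geometry: five satellites at assorted azimuths/elevations (degrees)
az_el = np.radians([(0, 60), (90, 40), (180, 30), (270, 50), (45, 20)])
los = np.array([(np.cos(el) * np.sin(az), np.cos(el) * np.cos(az), np.sin(el))
                for az, el in az_el])

pdop, hdop, vdop = dops(los)
sigma_range = 0.10            # assumed post-fit ranging error in metres (illustrative)
print(f"PDOP={pdop:.2f}, HDOP={hdop:.2f}, VDOP={vdop:.2f}")
print(f"approximate 3D position error: {pdop * sigma_range:.2f} m")
```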

  15. Accuracy, security, and processing time comparisons of biometric fingerprint recognition system using digital and optical enhancements

    NASA Astrophysics Data System (ADS)

    Alsharif, Salim; El-Saba, Aed; Jagapathi, Rajendarreddy

    2011-06-01

    Fingerprint recognition is one of the most commonly used forms of biometrics and has been widely used in daily life due to its feasibility, distinctiveness, permanence, accuracy, reliability, and acceptability. Besides cost, issues related to accuracy, security, and processing time in practical biometric recognition systems represent the most critical factors that make these systems widely acceptable. Accurate and secure biometric systems often require sophisticated enhancement and encoding techniques that burden the overall processing time of the system. In this paper we present a comparison between common digital and optical enhancement and encoding techniques with respect to their accuracy, security, and processing time when applied to biometric fingerprint systems.

  16. High-accuracy global time and frequency transfer with a space-borne hydrogen maser clock

    NASA Technical Reports Server (NTRS)

    Decher, R.; Allan, D. W.; Alley, C. O.; Baugher, C.; Duncan, B. J.; Vessot, R. F. C.; Winkler, G. M. R.

    1983-01-01

    A proposed system for high-accuracy global time and frequency transfer using a hydrogen maser clock in a space vehicle is discussed. Direct frequency transfer with an accuracy of 1 part in 10^14 and time transfer with an estimated accuracy of 1 nsec are provided by a 3-link microwave system. A short-pulse laser system is included for subnanosecond time transfer and system calibration. The results of studies including operational aspects, error sources, data flow, system configuration, and implementation requirements for an initial demonstration experiment using the Space Shuttle are discussed.

  17. A bootstrap method for assessing classification accuracy and confidence for agricultural land use mapping in Canada

    NASA Astrophysics Data System (ADS)

    Champagne, Catherine; McNairn, Heather; Daneshfar, Bahram; Shang, Jiali

    2014-06-01

    Land cover and land use classifications from remote sensing are increasingly becoming institutionalized framework data sets for monitoring environmental change. As such, the need for robust statements of classification accuracy is critical. This paper describes a method to estimate confidence in classification model accuracy using a bootstrap approach. Using this method, it was found that classification accuracy and confidence, while closely related, can be used in complementary ways to provide additional information on map accuracy, define groups of classes, and inform future reference sampling strategies. Overall classification accuracy increases with an increase in the number of fields surveyed, while the width of the classification confidence bounds decreases. Individual class accuracies and confidence were non-linearly related to the number of fields surveyed. Results indicate that some classes can be estimated accurately and confidently with fewer samples, whereas others require larger reference data sets to achieve satisfactory results. This approach is an improvement over other approaches for estimating class accuracy and confidence because it uses repeated sampling to produce a more realistic estimate of the range in classification accuracy and confidence that can be obtained with different reference data inputs.
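
    A minimal sketch of the bootstrap idea, assuming a simple per-field agreement record: the reference fields are resampled with replacement and overall accuracy is recomputed to obtain confidence bounds. The mapped and reference labels below are simulated placeholders, not the agricultural survey data.

```python
import numpy as np

def bootstrap_accuracy(mapped, reference, n_boot=2000, seed=1):
    """Bootstrap distribution of overall accuracy from per-sample agreement."""
    rng = np.random.default_rng(seed)
    agree = (np.asarray(mapped) == np.asarray(reference)).astype(float)
    n = len(agree)
    stats = [agree[rng.integers(0, n, n)].mean() for _ in range(n_boot)]
    return np.mean(stats), np.percentile(stats, [2.5, 97.5])

# Illustrative reference sample: 300 fields, roughly 85% correctly classified
rng = np.random.default_rng(0)
reference = rng.integers(0, 5, 300)
mapped = np.where(rng.random(300) < 0.85, reference, rng.integers(0, 5, 300))

acc, (lo, hi) = bootstrap_accuracy(mapped, reference)
print(f"overall accuracy {acc:.3f}, 95% confidence bounds [{lo:.3f}, {hi:.3f}]")
```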

  18. An accuracy measurement method for star trackers based on direct astronomic observation

    PubMed Central

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are carried out, taking account of the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to the in-orbit conditions and can satisfy the stringent requirement for high-accuracy star trackers. PMID:26948412

  19. An accuracy measurement method for star trackers based on direct astronomic observation

    NASA Astrophysics Data System (ADS)

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-03-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are carried out, taking account of the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to the in-orbit conditions and can satisfy the stringent requirement for high-accuracy star trackers.

  20. An accuracy measurement method for star trackers based on direct astronomic observation.

    PubMed

    Sun, Ting; Xing, Fei; Wang, Xiaochu; You, Zheng; Chu, Daping

    2016-01-01

    The star tracker is one of the most promising optical attitude measurement devices, and it is widely used in spacecraft for its high accuracy. However, how to realize and verify such accuracy has remained a crucial but unsolved issue. The authenticity of the accuracy measurement method of a star tracker will ultimately determine the satellite performance. A new and robust accuracy measurement method for a star tracker based on direct astronomical observation is proposed here. In comparison with the conventional method using simulated stars, this method utilizes real navigation stars as observation targets, which makes the measurement results more authoritative and authentic. Transformations between different coordinate systems are carried out, taking account of the precise movements of the Earth, and the error curves of the directional vectors are obtained along the three axes. Based on error analysis and accuracy definitions, a three-axis accuracy evaluation criterion is proposed in this paper, which can determine the pointing and rolling accuracy of a star tracker directly. Experimental measurements confirm that this method is effective and convenient to implement. Such a measurement environment is close to the in-orbit conditions and can satisfy the stringent requirement for high-accuracy star trackers. PMID:26948412

  1. Speed-accuracy strategy regulations in prefrontal tumor patients

    PubMed Central

    Campanella, Fabio; Skrap, Miran; Vallesi, Antonino

    2016-01-01

    The ability to flexibly switch between fast and accurate decisions is crucial in everyday life. Recent neuroimaging evidence suggested that left lateral prefrontal cortex plays a role in switching from a quick response strategy to an accurate one. However, the causal role of the left prefrontal cortex in this particular, non-verbal, strategy switch has never been demonstrated. To fill this gap, we administered a perceptual decision-making task to neuro-oncological prefrontal patients, in which the requirement to be quick or accurate changed randomly on a trial-by-trial basis. To directly assess hemispheric asymmetries in speed-accuracy regulation, patients were tested a few days before and a few days after surgical excision of a brain tumor involving either the left (N=13) or the right (N=12) lateral frontal brain region. A group of age- and education-matched healthy controls was also recruited. To gain more insight on the component processes implied in the task, performance data (accuracy and speed) were not only analyzed separately but also submitted to a diffusion model analysis. The main findings indicated that the left prefrontal patients were impaired in appropriately adopting stricter response criteria in speed-to-accuracy switching trials with respect to healthy controls and right prefrontal patients, who were not impaired in this condition. This study demonstrates that the prefrontal cortex in the left hemisphere is necessary for flexible behavioral regulations, in particular when setting stricter response criteria is required in order to successfully switch from a speedy strategy to an accurate one. PMID:26772144

  2. Speed-accuracy strategy regulations in prefrontal tumor patients.

    PubMed

    Campanella, Fabio; Skrap, Miran; Vallesi, Antonino

    2016-02-01

    The ability to flexibly switch between fast and accurate decisions is crucial in everyday life. Recent neuroimaging evidence suggested that left lateral prefrontal cortex plays a role in switching from a quick response strategy to an accurate one. However, the causal role of the left prefrontal cortex in this particular, non-verbal, strategy switch has never been demonstrated. To fill this gap, we administered a perceptual decision-making task to neuro-oncological prefrontal patients, in which the requirement to be quick or accurate changed randomly on a trial-by-trial basis. To directly assess hemispheric asymmetries in speed-accuracy regulation, patients were tested a few days before and a few days after surgical excision of a brain tumor involving either the left (N=13) or the right (N=12) lateral frontal brain region. A group of age- and education-matched healthy controls was also recruited. To gain more insight on the component processes implied in the task, performance data (accuracy and speed) were not only analyzed separately but also submitted to a diffusion model analysis. The main findings indicated that the left prefrontal patients were impaired in appropriately adopting stricter response criteria in speed-to-accuracy switching trials with respect to healthy controls and right prefrontal patients, who were not impaired in this condition. This study demonstrates that the prefrontal cortex in the left hemisphere is necessary for flexible behavioral regulations, in particular when setting stricter response criteria is required in order to successfully switch from a speedy strategy to an accurate one. PMID:26772144

  3. Accuracy of wind measurements using an airborne Doppler lidar

    NASA Technical Reports Server (NTRS)

    Carroll, J. J.

    1986-01-01

    Simulated wind fields and lidar data are used to evaluate two sources of airborne wind measurement error. The system is sensitive to ground speed and track angle errors: the track angle must be accurate to within 0.2 degrees and the ground speed to within 1 knot if the recovered wind field is to be within five percent of the correct direction and 10 percent of the correct speed. It is found that errors in recovered wind speed and direction depend on the wind direction relative to the flight path. Recovering accurate wind fields in the presence of nonsimultaneous sampling requires that the lidar data be displaced to account for advection, so that the intersections are defined by air parcels rather than by fixed points in space.

  4. Towards high accuracy calibration of electron backscatter diffraction systems.

    PubMed

    Mingard, Ken; Day, Austin; Maurice, Claire; Quested, Peter

    2011-04-01

    For precise orientation and strain measurements, advanced Electron Backscatter Diffraction (EBSD) techniques require both accurate calibration and reproducible measurement of the system geometry. In many cases the pattern centre (PC) needs to be determined to sub-pixel accuracy. The mechanical insertion/retraction, through the Scanning Electron Microscope (SEM) chamber wall, of the electron sensitive part of modern EBSD detectors also causes alignment and positioning problems and requires frequent monitoring of the PC. Optical alignment and lens distortion issues within the scintillator, lens and charge-coupled device (CCD) camera combination of an EBSD detector need accurate measurement for each individual EBSD system. This paper highlights and quantifies these issues and demonstrates the determination of the pattern centre using a novel shadow-casting technique with a precision of ∼10μm or ∼1/3 CCD pixel. PMID:21396526

  5. Accuracy of real time radiography burning rate measurement

    NASA Astrophysics Data System (ADS)

    Olaniyi, Bisola

    The design of a solid propellant rocket motor requires the determination of a propellant's burning-rate and its dependency upon environmental parameters. The requirement that the burning-rate be physically measured, establishes the need for methods and equipment to obtain such data. A literature review reveals that no measurement has provided the desired burning rate accuracy. In the current study, flash x-ray modeling and digitized film-density data were employed to predict motor-port area to length ratio. The pre-fired port-areas and base burning rate were within 2.5% and 1.2% of their known values, respectively. To verify the accuracy of the method, a continuous x-ray and a solid propellant rocket motor model (Plexiglas cylinder) were used. The solid propellant motor model was translated laterally through a real-time radiography system at different speeds simulating different burning rates. X-ray images were captured and the burning-rate was then determined. The measured burning rate was within 1.65% of the known values.

  6. Positional Accuracy Assessment of Googleearth in Riyadh

    NASA Astrophysics Data System (ADS)

    Farah, Ashraf; Algarni, Dafer

    2014-06-01

    Google Earth is a virtual globe, map, and geographic information program operated by Google. It maps the Earth by superimposing images obtained from satellite imagery and aerial photography onto a 3D globe. With millions of users around the world, GoogleEarth® has become a primary source of spatial data and information for private and public decision-support systems, as well as for many forms of social interaction. Many users, mostly in developing countries, also use it for surveying applications, which raises questions about the positional accuracy of the Google Earth program. This research presents a small-scale assessment of the positional accuracy of GoogleEarth® imagery in Riyadh, the capital of the Kingdom of Saudi Arabia (KSA). The results show that the RMSE of the GoogleEarth imagery is 2.18 m for the horizontal coordinates and 1.51 m for the height coordinates.

  7. Accuracy control in Monte Carlo radiative calculations

    NASA Technical Reports Server (NTRS)

    Almazan, P. Planas

    1993-01-01

    The general accuracy law that governs the Monte Carlo ray-tracing algorithms commonly used to calculate radiative entities in the thermal analysis of spacecraft is presented. These entities involve the transfer of radiative energy either from a single source to a target (e.g., configuration factors) or from several sources to a target (e.g., absorbed heat fluxes); in fact, the former is just a particular case of the latter. The accuracy model is then applied to the calculation of some specific radiative entities. Furthermore, some issues related to the implementation of such a model in a software tool are discussed. Although only the relative error is considered throughout the discussion, similar results can be derived for the absolute error.

  8. Do saccharide doped PAGAT dosimeters increase accuracy?

    NASA Astrophysics Data System (ADS)

    Berndt, B.; Skyt, P. S.; Holloway, L.; Hill, R.; Sankar, A.; De Deene, Y.

    2015-01-01

    To improve the dosimetric accuracy of normoxic polyacrylamide gelatin (PAGAT) gel dosimeters, the addition of saccharides (glucose and sucrose) has been suggested. An increase in R2-response sensitivity upon irradiation will result in smaller uncertainties in the derived dose, provided all other uncertainties remain unchanged. However, temperature variations during the magnetic resonance scanning of polymer gels are one of the largest contributions to dosimetric uncertainty. The purpose of this project was to weigh the gain in dose sensitivity against the temperature sensitivity. The overall dose uncertainty of PAGAT gel dosimeters with different concentrations of saccharides (0, 10 and 20%) was investigated. For high concentrations of glucose or sucrose, a clear improvement of the dose sensitivity was observed. For doses up to 6 Gy, the overall dose uncertainty was reduced by up to 0.3 Gy for all saccharide-loaded gels compared with plain PAGAT gel. Higher concentrations of glucose and sucrose degrade the accuracy of PAGAT dosimeters for doses above 9 Gy.

  9. Accuracy of forecasts in strategic intelligence

    PubMed Central

    Mandel, David R.; Barnes, Alan

    2014-01-01

    The accuracy of 1,514 strategic intelligence forecasts abstracted from intelligence reports was assessed. The results show that both discrimination and calibration of forecasts was very good. Discrimination was better for senior (versus junior) analysts and for easier (versus harder) forecasts. Miscalibration was mainly due to underconfidence such that analysts assigned more uncertainty than needed given their high level of discrimination. Underconfidence was more pronounced for harder (versus easier) forecasts and for forecasts deemed more (versus less) important for policy decision making. Despite the observed underconfidence, there was a paucity of forecasts in the least informative 0.4–0.6 probability range. Recalibrating the forecasts substantially reduced underconfidence. The findings offer cause for tempered optimism about the accuracy of strategic intelligence forecasts and indicate that intelligence producers aim to promote informativeness while avoiding overstatement. PMID:25024176
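
    The study's exact scoring follows the forecast-verification literature; as a hedged illustration of how calibration and discrimination can be computed from probabilistic forecasts, the sketch below applies the Murphy decomposition of the Brier score, in which low reliability corresponds to good calibration and high resolution to good discrimination. The forecasts are simulated stand-ins, not the 1,514 intelligence forecasts.

```python
import numpy as np

def murphy_decomposition(p, outcome, bins=np.linspace(0, 1, 11)):
    """Brier score = reliability - resolution + uncertainty (Murphy, 1973).
    Low reliability means good calibration; high resolution means good discrimination."""
    p, outcome = np.asarray(p, float), np.asarray(outcome, float)
    base = outcome.mean()
    rel = res = 0.0
    idx = np.digitize(p, bins[1:-1])            # assign each forecast to a probability bin
    for k in np.unique(idx):
        sel = idx == k
        rel += sel.mean() * (p[sel].mean() - outcome[sel].mean()) ** 2
        res += sel.mean() * (outcome[sel].mean() - base) ** 2
    unc = base * (1 - base)
    return rel, res, unc, rel - res + unc       # last value ~ mean Brier score (binned)

# Simulated stand-in forecasts for 1,500 events with a 60% base rate
rng = np.random.default_rng(3)
truth = rng.random(1500) < 0.6
p = np.clip(0.5 + 0.6 * (truth - 0.5) + rng.normal(0, 0.12, 1500), 0.01, 0.99)

rel, res, unc, brier = murphy_decomposition(p, truth)
print(f"reliability {rel:.4f}  resolution {res:.4f}  uncertainty {unc:.4f}  Brier ~ {brier:.4f}")
```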

  10. Accuracy of NHANES periodontal examination protocols.

    PubMed

    Eke, P I; Thornton-Evans, G O; Wei, L; Borgnakke, W S; Dye, B A

    2010-11-01

    This study evaluates the accuracy of periodontitis prevalence determined by the National Health and Nutrition Examination Survey (NHANES) partial-mouth periodontal examination protocols. True periodontitis prevalence was determined in a new convenience sample of 454 adults ≥ 35 years old, by a full-mouth "gold standard" periodontal examination. This actual prevalence was compared with prevalence resulting from analysis of the data according to the protocols of NHANES III and NHANES 2001-2004, respectively. Both NHANES protocols substantially underestimated the prevalence of periodontitis by 50% or more, depending on the periodontitis case definition used, and thus performed below threshold levels for moderate-to-high levels of validity for surveillance. Adding measurements from lingual or interproximal sites to the NHANES 2001-2004 protocol did not improve the accuracy sufficiently to reach acceptable sensitivity thresholds. These findings suggest that NHANES protocols produce high levels of misclassification of periodontitis cases and thus have low validity for surveillance and research. PMID:20858782

  11. Improvement in Rayleigh Scattering Measurement Accuracy

    NASA Technical Reports Server (NTRS)

    Fagan, Amy F.; Clem, Michelle M.; Elam, Kristie A.

    2012-01-01

    Spectroscopic Rayleigh scattering is an established flow diagnostic that has the ability to provide simultaneous velocity, density, and temperature measurements. The Fabry-Perot interferometer or etalon is a commonly employed instrument for resolving the spectrum of molecular Rayleigh scattered light for the purpose of evaluating these flow properties. This paper investigates the use of an acousto-optic frequency shifting device to improve measurement accuracy in Rayleigh scattering experiments at the NASA Glenn Research Center. The frequency shifting device is used as a means of shifting the incident or reference laser frequency by 1100 MHz to avoid overlap of the Rayleigh and reference signal peaks in the interference pattern used to obtain the velocity, density, and temperature measurements, and also to calibrate the free spectral range of the Fabry-Perot etalon. The measurement accuracy improvement is evaluated by comparison of Rayleigh scattering measurements acquired with and without shifting of the reference signal frequency in a 10 mm diameter subsonic nozzle flow.

  12. Marginal accuracy of temporary composite crowns.

    PubMed

    Tjan, A H; Tjan, A H; Grant, B E

    1987-10-01

    An in vitro study was conducted to quantitatively compare the marginal adaptation of temporary crowns made from Protemp material with those made from Scutan, Provisional, and Trim materials. A direct technique was used to make temporary restorations on prepared teeth with an impression as a matrix. Protemp, Trim, and Provisional materials produced temporary crowns of comparable accuracy. Crowns made from Scutan material had open margins. PMID:2959770

  13. Measurement Accuracy Limitation Analysis on Synchrophasors

    SciTech Connect

    Zhao, Jiecheng; Zhan, Lingwei; Liu, Yilu; Qi, Hairong; Gracia, Jose R; Ewing, Paul D

    2015-01-01

    This paper analyzes the theoretical accuracy limitations of synchrophasor measurements of the phase angle and frequency of the power grid. Factors that cause measurement error are analyzed, including error sources in the instruments and in the power grid signal. Different scenarios for these factors are evaluated according to normal power grid operating conditions. Based on the evaluation and simulation, the phase angle and frequency errors caused by each factor are calculated and discussed.
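
    As one concrete instance of an instrument-side error source of the kind evaluated here (an assumption for illustration, not the paper's error model), a time-synchronization offset Δt alone produces a phase-angle error of 360·f·Δt degrees at nominal frequency f:

```python
# Hedged illustration: how a timing offset alone maps into phase-angle error
# for a synchrophasor measurement at the nominal grid frequency.
F_NOMINAL_HZ = 60.0

def phase_error_deg(time_offset_s, f_hz=F_NOMINAL_HZ):
    """Phase-angle error caused solely by a timing offset: 360 * f * dt degrees."""
    return 360.0 * f_hz * time_offset_s

for dt in (1e-7, 1e-6, 1e-5):          # 0.1 us, 1 us, 10 us synchronization errors
    print(f"dt = {dt:.0e} s  ->  {phase_error_deg(dt):.4f} degrees at 60 Hz")
```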

  14. Gravitational model effects on ICBM accuracy

    NASA Astrophysics Data System (ADS)

    Ford, C. T.

    This paper describes methods used to assess the contribution of ICBM gravitational model errors to targeting accuracy. The evolution of gravitational model complexity, in both format and data base development, is summarized. Error analysis methods associated with six identified error sources are presented: geodetic coordinate errors; spherical harmonic potential function errors of commission and omission; and surface gravity anomaly errors of reduction, representation, and omission.

  15. Matter power spectrum and the challenge of percent accuracy

    NASA Astrophysics Data System (ADS)

    Schneider, Aurel; Teyssier, Romain; Potter, Doug; Stadel, Joachim; Onions, Julian; Reed, Darren S.; Smith, Robert E.; Springel, Volker; Pearce, Frazer R.; Scoccimarro, Roman

    2016-04-01

    Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3 which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k <= 1 h Mpc^-1 and to within three percent at k <= 10 h Mpc^-1. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k <= 2 h Mpc^-1. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h^-1 Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of Mp = 10^9 h^-1 Msolar is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.

  16. Accuracy of pointing a binaural listening array.

    PubMed

    Letowski, T R; Ricard, G L; Kalb, J T; Mermagen, T J; Amrein, K M

    1997-12-01

    We measured the accuracy with which sounds heard over a binaural, end-fire array could be located when the angular separation of the array's two arms was varied. Each individual arm contained nine cardioid electret microphones, the responses of which were combined to produce a unidirectional, band-limited pattern of sensitivity. We assessed the desirable angular separation of these arms by measuring the accuracy with which listeners could point to the source of a target sound presented against high-level background noise. We employed array separations of 30 degrees, 45 degrees, and 60 degrees, and signal-to-noise ratios of +5, -5, and -15 dB. Pointing accuracy was best for a separation of 60 degrees; this performance was indistinguishable from pointing during unaided listening conditions. In addition, the processing of the array was modeled to depict the information that was available for localization. The model indicates that highly directional binaural arrays can be expected to support accurate localization of sources of sound only near the axis of the array. Wider enhanced listening angles may be possible if the forward coverage of the sensor system is made less directional and more similar to that of human listeners. PMID:9473975

  17. Accuracy test procedure for image evaluation techniques.

    PubMed

    Jones, R A

    1968-01-01

    A procedure has been developed to determine the accuracy of image evaluation techniques. In the procedure, a target having orthogonal test arrays is photographed with a high quality optical system. During the exposure, the target is subjected to horizontal linear image motion. The modulation transfer functions of the images in the horizontal and vertical directions are obtained using the evaluation technique. Since all other degradations are symmetrical, the quotient of the two modulation transfer functions represents the modulation transfer function of the experimentally induced linear image motion. In an accurate experiment, any discrepancy between the experimental determination and the true value is due to inaccuracy in the image evaluation technique. The procedure was used to test the Perkin-Elmer automated edge gradient analysis technique over the spatial frequency range of 0-200 c/m. This experiment demonstrated that the edge gradient technique is accurate over this region and that the testing procedure can be controlled with the desired accuracy. Similarly, the test procedure can be used to determine the accuracy of other image evaluation techniques. PMID:20062421
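
    The procedure rests on a known analytic target: for uniform linear image motion of length d during the exposure, the motion MTF is |sin(πfd)/(πfd)|, so the quotient of the horizontal and vertical MTFs should reproduce this curve. The sketch below only checks that logic with stand-in MTFs; the blur length and the Gaussian optics term are assumptions for illustration, not the Perkin-Elmer measurements.

```python
import numpy as np

def motion_mtf(f, d):
    """MTF of uniform linear image motion of length d: |sin(pi*f*d)/(pi*f*d)|.
    f is spatial frequency and d is blur length, in mutually consistent units."""
    return np.abs(np.sinc(f * d))              # numpy's sinc(x) = sin(pi*x)/(pi*x)

f = np.linspace(0.0, 200.0, 9)                 # frequency range of the experiment
d = 2.0e-3                                     # assumed blur length (illustrative only)
optics = np.exp(-(f / 300.0) ** 2)             # stand-in for the symmetric system MTF
measured_h = motion_mtf(f, d) * optics         # horizontal direction: motion x optics
measured_v = optics                            # vertical direction: optics only
recovered = measured_h / measured_v            # quotient isolates the motion MTF
print(np.round(recovered - motion_mtf(f, d), 6))   # ~0 everywhere
```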

  18. Determination of GPS orbits to submeter accuracy

    NASA Technical Reports Server (NTRS)

    Bertiger, W. I.; Lichten, S. M.; Katsigris, E. C.

    1988-01-01

    Orbits for satellites of the Global Positioning System (GPS) were determined with submeter accuracy. Tests used to assess orbital accuracy include orbit comparisons from independent data sets, orbit prediction, ground baseline determination, and formal errors. One satellite, tracked 8 hours each day, shows rms error below 1 m even when predicted more than 3 days outside of a 1-week data arc. Differential tracking of the GPS satellites in high Earth orbit provides a powerful relative positioning capability, even when a relatively small continental U.S. fiducial tracking network is used with less than one-third of the full GPS constellation. To demonstrate this capability, baselines of up to 2000 km in North America were also determined with the GPS orbits. The 2000 km baselines show rms daily repeatability of 0.3 to 2 parts in 10^8 and agree with very long baseline interferometry (VLBI) solutions at the level of 1.5 parts in 10^8. This GPS demonstration provides an opportunity to test different techniques for high-accuracy orbit determination for high Earth orbiters. The best GPS orbit strategies included data arcs of at least 1 week, process noise models for tropospheric fluctuations, estimation of GPS solar pressure coefficients, and combined processing of GPS carrier phase and pseudorange data. For data arcs of 2 weeks, constrained process noise models for GPS dynamic parameters significantly improved the results.

  19. Speed/accuracy tradeoff in force perception.

    PubMed

    Rank, Markus; Di Luca, Massimiliano

    2015-06-01

    There is a well-known tradeoff between speed and accuracy in judgments made under uncertainty. Diffusion models have been proposed to capture the increase in response time for more uncertain decisions and the change in performance due to a prioritization of speed or accuracy in the responses. Experimental paradigms have been confined to the visual modality, and model analyses have mostly used quantile-probability (QP) plots--response probability as a function of quantized RTs. Here, we extend diffusion modeling to haptics and test a novel type of analysis for judging model fitting. Participants classified force stimuli applied to the hand as "high" or "low." Data in QP plots indicate that the diffusion model captures well the overall pattern of responses in conditions where either speed or accuracy has been prioritized. To further the analysis, we compute just noticeable difference (JND) values separately for responses delivered with different RTs--we define these as JND-quantile plots. The pattern of results shows that slower responses lead to better force discrimination up to a plateau that is unaffected by prioritization instructions. Instead, the diffusion model predicts two well-separated plateaus depending on the condition. We propose that analyzing the relation between JNDs and response time should be considered in the evaluation of the diffusion model beyond the haptic modality, thus including vision. PMID:25867512
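
    A minimal sketch of the diffusion-model behaviour that the QP and JND-quantile analyses probe: with drift rate held fixed, shrinking the boundary separation (speed emphasis) shortens response times and lowers accuracy. The drift, noise, and boundary values are arbitrary choices for illustration, not parameters fitted to the force-perception data.

```python
import numpy as np

def simulate_ddm(drift, boundary, n_trials=2000, dt=1e-3, noise=1.0, seed=0):
    """Simulate a symmetric drift-diffusion process; return response times and accuracy."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t)
        correct.append(x > 0)               # upper boundary = correct response
    return np.array(rts), np.mean(correct)

for label, a in (("speed emphasis", 0.6), ("accuracy emphasis", 1.4)):
    rts, acc = simulate_ddm(drift=1.0, boundary=a)
    print(f"{label:18s} boundary={a}: mean RT {rts.mean():.2f} s, accuracy {acc:.2f}")
```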

  20. Solving Nonlinear Euler Equations with Arbitrary Accuracy

    NASA Technical Reports Server (NTRS)

    Dyson, Rodger W.

    2005-01-01

    A computer program that efficiently solves the time-dependent, nonlinear Euler equations in two dimensions to an arbitrarily high order of accuracy has been developed. The program implements a modified form of a prior arbitrary-accuracy simulation algorithm that is a member of the class of algorithms known in the art as modified expansion solution approximation (MESA) schemes. Whereas millions of lines of code were needed to implement the prior MESA algorithm, it is possible to implement the present MESA algorithm by use of one or a few pages of Fortran code, the exact amount depending on the specific application. The ability to solve the Euler equations to arbitrarily high accuracy is especially beneficial in simulations of aeroacoustic effects in settings in which fully nonlinear behavior is expected - for example, at stagnation points of fan blades, where linearizing assumptions break down. At these locations, it is necessary to solve the full nonlinear Euler equations, and inasmuch as the acoustical energy is 4 to 5 orders of magnitude below that of the mean flow, it is necessary to achieve an overall fractional error of less than 10^-6 in order to faithfully simulate entropy, vortical, and acoustical waves.

  1. Ground Truth Accuracy Tests of GPS Seismology

    NASA Astrophysics Data System (ADS)

    Elosegui, P.; Oberlander, D. J.; Davis, J. L.; Baena, R.; Ekstrom, G.

    2005-12-01

    As the precision of GPS determinations of site position continues to improve, the detection of smaller and faster geophysical signals becomes possible. However, the lack of independent measurements of these signals often precludes an assessment of the accuracy of such GPS position determinations. This may be particularly true for high-rate GPS applications. We have built an apparatus to assess the accuracy of GPS position determinations for high-rate applications, in particular the application known as "GPS seismology." The apparatus consists of a bidirectional, single-axis positioning table coupled to a digitally controlled stepping motor. The motor, in turn, is connected to a Field Programmable Gate Array (FPGA) chip that synchronously sequences through real historical earthquake profiles stored in Erasable Programmable Read-Only Memories (EPROMs). A GPS antenna attached to this positioning table undergoes the simulated seismic motions of the Earth's surface while collecting high-rate GPS data. The time-dependent position estimates can then be compared to the "ground truth," and the resultant GPS error spectrum can be measured. We have made extensive measurements with this system while inducing simulated seismic motions in either the horizontal plane or the vertical axis. A second, stationary GPS antenna at a distance of several meters was simultaneously collecting high-rate (5 Hz) GPS data. We will present the calibration of this system, describe the GPS observations and data analysis, and assess the accuracy of GPS for high-rate geophysical applications and natural hazards mitigation.

  2. Piezoresistive position microsensors with ppm-accuracy

    NASA Astrophysics Data System (ADS)

    Stavrov, Vladimir; Shulev, Assen; Stavreva, Galina; Todorov, Vencislav

    2015-05-01

    In this article, the relation between position accuracy and the number of simultaneously measured values, such as coordinates, is analyzed. Based on this, a conceptual layout of MEMS devices (microsensors) for multidimensional position monitoring comprising a single anchored part and a single actuated part has been developed. The two parts are connected by a plurality of micromechanical flexures, and each flexure includes position-detecting cantilevers. Microsensors with detecting cantilevers oriented in the X and Y directions have been designed and prototyped. Experimental results from the characterization of 1D, 2D, and 3D position microsensors are reported as well. By exploiting different flexure layouts, a travel range between 50 μm and 1.8 mm and sensor sensitivities between 30 μV/μm and 5 mV/μm at a 1 V DC supply voltage have been demonstrated. A method for accurate calculation of all three Cartesian coordinates, based on the measurement of at least three microsensor signals, is also described. The analysis of the experimental results demonstrates the capability of position monitoring with ppm (parts per million) accuracy. The technology for fabricating MEMS devices with sidewall-embedded piezoresistors removes restrictions on improving their usability for high-accuracy position sensing. The present study is also part of a broader strategy for developing a novel MEMS-based platform for the simultaneous, accurate measurement of various physical quantities when they are transduced into a change of position.

  3. Speed versus accuracy in collective decision making.

    PubMed Central

    Franks, Nigel R; Dornhaus, Anna; Fitzsimmons, Jon P; Stevens, Martin

    2003-01-01

    We demonstrate a speed versus accuracy trade-off in collective decision making. House-hunting ant colonies choose a new nest more quickly in harsh conditions than in benign ones and are less discriminating. The errors that occur in a harsh environment are errors of judgement not errors of omission because the colonies have discovered all of the alternative nests before they initiate an emigration. Leptothorax albipennis ants use quorum sensing in their house hunting. They only accept a nest, and begin rapidly recruiting members of their colony, when they find within it a sufficient number of their nest-mates. Here we show that these ants can lower their quorum thresholds between benign and harsh conditions to adjust their speed-accuracy trade-off. Indeed, in harsh conditions these ants rely much more on individual decision making than collective decision making. Our findings show that these ants actively choose to take their time over judgements and employ collective decision making in benign conditions when accuracy is more important than speed. PMID:14667335

  4. On the Accuracy of Genomic Selection

    PubMed Central

    Rabier, Charles-Elie; Barre, Philippe; Asp, Torben; Charmet, Gilles; Mangin, Brigitte

    2016-01-01

    Genomic selection is focused on prediction of breeding values of selection candidates by means of high density of markers. It relies on the assumption that all quantitative trait loci (QTLs) tend to be in strong linkage disequilibrium (LD) with at least one marker. In this context, we present theoretical results regarding the accuracy of genomic selection, i.e., the correlation between predicted and true breeding values. Typically, for individuals (so-called test individuals), breeding values are predicted by means of markers, using marker effects estimated by fitting a ridge regression model to a set of training individuals. We present a theoretical expression for the accuracy; this expression is suitable for any configurations of LD between QTLs and markers. We also introduce a new accuracy proxy that is free of the QTL parameters and easily computable; it outperforms the proxies suggested in the literature, in particular, those based on an estimated effective number of independent loci (Me). The theoretical formula, the new proxy, and existing proxies were compared for simulated data, and the results point to the validity of our approach. The calculations were also illustrated on a new perennial ryegrass set (367 individuals) genotyped for 24,957 single nucleotide polymorphisms (SNPs). In this case, most of the proxies studied yielded similar results because of the lack of markers for coverage of the entire genome (2.7 Gb). PMID:27322178
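
    A minimal sketch of the quantity under study, using simulated data: marker effects are estimated by ridge regression on a training set, breeding values are predicted for test individuals, and accuracy is the correlation between predicted and true breeding values. The genotypes, QTL effects, and the ridge parameter lam are placeholders, not the ryegrass data or the paper's theoretical expression.

```python
import numpy as np

rng = np.random.default_rng(42)
n_train, n_test, n_markers, n_qtl, lam = 300, 100, 1000, 50, 100.0

# Simulated genotypes (0/1/2 marker codes) and true breeding values from a QTL subset
X = rng.integers(0, 3, size=(n_train + n_test, n_markers)).astype(float)
X -= X.mean(axis=0)                                          # centre marker codes
beta = np.zeros(n_markers)
beta[rng.choice(n_markers, n_qtl, replace=False)] = rng.normal(0.0, 1.0, n_qtl)
tbv = X @ beta                                               # true breeding values
y = tbv + rng.normal(0.0, tbv.std(), n_train + n_test)       # phenotypes, heritability ~0.5

Xtr, ytr = X[:n_train], y[:n_train]
Xte, tbv_te = X[n_train:], tbv[n_train:]

# Ridge-regression estimate of marker effects from the training set
beta_hat = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(n_markers), Xtr.T @ (ytr - ytr.mean()))
gebv = Xte @ beta_hat                                        # predicted breeding values

print("accuracy (corr of predicted vs true breeding values):",
      round(float(np.corrcoef(gebv, tbv_te)[0, 1]), 2))
```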

  5. Diagnostic Accuracy of Fractional Flow Reserve From Anatomic CT Angiography

    PubMed Central

    Min, James K.; Leipsic, Jonathon; Pencina, Michael J.; Berman, Daniel S.; Koo, Bon-Kwon; van Mieghem, Carlos; Erglis, Andrejs; Lin, Fay Y.; Dunning, Allison M.; Apruzzese, Patricia; Budoff, Matthew J.; Cole, Jason H.; Jaffer, Farouc A.; Leon, Martin B.; Malpeso, Jennifer; John Mancini, G. B.; Park, Seung-Jung; Schwartz, Robert S.; Shaw, Leslee J.; Mauri, Laura

    2014-01-01

    Context Coronary computed tomographic (CT) angiography is a noninvasive anatomic test for diagnosis of coronary stenosis that does not determine whether a stenosis causes ischemia. In contrast, fractional flow reserve (FFR) is a physiologic measure of coronary stenosis expressing the amount of coronary flow still attainable despite the presence of a stenosis, but it requires an invasive procedure. Noninvasive FFR computed from CT (FFRCT) is a novel method for determining the physiologic significance of coronary artery disease (CAD), but its ability to identify ischemia has not been adequately examined to date. Objective To assess the diagnostic performance of FFRCT plus CT for diagnosis of hemodynamically significant coronary stenosis. Design, Setting, and Patients Multicenter diagnostic performance study involving 252 stable patients with suspected or known CAD from 17 centers in 5 countries who underwent CT, invasive coronary angiography (ICA), FFR, and FFRCT between October 2010 and October 2011. Computed tomography, ICA, FFR, and FFRCT were interpreted in blinded fashion by independent core laboratories. Accuracy of FFRCT plus CT for diagnosis of ischemia was compared with an invasive FFR reference standard. Ischemia was defined by an FFR or FFRCT of 0.80 or less, while anatomically obstructive CAD was defined by a stenosis of 50% or larger on CT and ICA. Main Outcome Measures The primary study outcome assessed whether FFRCT plus CT could improve the per-patient diagnostic accuracy such that the lower boundary of the 1-sided 95% confidence interval of this estimate exceeded 70%. Results Among study participants, 137 (54.4%) had an abnormal FFR determined by ICA. On a per-patient basis, diagnostic accuracy, sensitivity, specificity, positive predictive value, and negative predictive value of FFRCT plus CT were 73% (95% CI, 67%–78%), 90% (95% CI, 84%–95%), 54% (95% CI, 46%–83%), 67% (95% CI, 60%–74%), and 84% (95% CI, 74%–90%), respectively. Compared
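
    The per-patient figures quoted are standard 2×2-table quantities. The sketch below computes them from assumed illustrative counts chosen only to be consistent with the reported prevalence of abnormal FFR (137 of 252); they are not the study's patient-level data.

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Standard per-patient 2x2-table metrics for a binary diagnostic test."""
    return {
        "accuracy":    (tp + tn) / (tp + fp + fn + tn),
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "ppv":         tp / (tp + fp),
        "npv":         tn / (tn + fn),
    }

# Assumed illustrative counts for 252 patients (not the study's actual table):
# ischemia by invasive FFR <= 0.80 is the reference standard, FFRCT plus CT the index test.
metrics = diagnostic_metrics(tp=123, fp=53, fn=14, tn=62)
for name, value in metrics.items():
    print(f"{name:12s} {value:.0%}")
```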

  6. Parametric Characterization of SGP4 Theory and TLE Positional Accuracy

    NASA Astrophysics Data System (ADS)

    Oltrogge, D.; Ramrath, J.

    2014-09-01

    Two-Line Elements, or TLEs, contain mean element state vectors compatible with General Perturbations (GP) singly-averaged semi-analytic orbit theory. This theory, embodied in the SGP4 orbit propagator, provides sufficient accuracy for some (but perhaps not all) orbit operations and SSA tasks. For more demanding tasks, higher accuracy orbit and force model approaches (i.e. Special Perturbations numerical integration or SP) may be required. In recent times, the suitability of TLEs or GP theory for any SSA analysis has been increasingly questioned. Meanwhile, SP is touted as being of high quality and well-suited for most, if not all, SSA applications. Yet the lack of truth or well-known reference orbits that haven't already been adopted for radar and optical sensor network calibration has typically prevented a truly unbiased assessment of such assertions. To gain better insight into the practical limits of applicability for TLEs, SGP4 and the underlying GP theory, the native SGP4 accuracy is parametrically examined for the statistically-significant range of RSO orbit inclinations experienced as a function of all orbit altitudes from LEO through GEO disposal altitude. For each orbit altitude, reference or truth orbits were generated using full force modeling, time-varying space weather, and AGIs HPOP numerical integration orbit propagator. Then, TLEs were optimally fit to these truth orbits. The resulting TLEs were then propagated and positionally differenced with the truth orbits to determine how well the GP theory was able to fit the truth orbits. Resultant statistics characterizing these empirically-derived accuracies are provided. This TLE fit process of truth orbits was intentionally designed to be similar to the JSpOC process operationally used to generate Enhanced GP TLEs for debris objects. This allows us to draw additional conclusions of the expected accuracies of EGP TLEs. In the real world, Orbit Determination (OD) programs aren't provided with dense optical

  7. 100% Classification Accuracy Considered Harmful: The Normalized Information Transfer Factor Explains the Accuracy Paradox

    PubMed Central

    Valverde-Albacete, Francisco J.; Peláez-Moreno, Carmen

    2014-01-01

    The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2, 3 and 4 classes classifiers are depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficient is the transmission of information from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to “cheat” using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based in accuracy, choosing more meaningful and interpretable classifiers. PMID:24427282

  8. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox.

    PubMed

    Valverde-Albacete, Francisco J; Peláez-Moreno, Carmen

    2014-01-01

    The most widely spread measure of performance, accuracy, suffers from a paradox: predictive models with a given level of accuracy may have greater predictive power than models with higher accuracy. Despite optimizing classification error rate, high accuracy models may fail to capture crucial information transfer in the classification task. We present evidence of this behavior by means of a combinatorial analysis where every possible contingency matrix of 2, 3 and 4 classes classifiers are depicted on the entropy triangle, a more reliable information-theoretic tool for classification assessment. Motivated by this, we develop from first principles a measure of classification performance that takes into consideration the information learned by classifiers. We are then able to obtain the entropy-modulated accuracy (EMA), a pessimistic estimate of the expected accuracy with the influence of the input distribution factored out, and the normalized information transfer factor (NIT), a measure of how efficient is the transmission of information from the input to the output set of classes. The EMA is a more natural measure of classification performance than accuracy when the heuristic to maximize is the transfer of information through the classifier instead of classification error count. The NIT factor measures the effectiveness of the learning process in classifiers and also makes it harder for them to "cheat" using techniques like specialization, while also promoting the interpretability of results. Their use is demonstrated in a mind reading task competition that aims at decoding the identity of a video stimulus based on magnetoencephalography recordings. We show how the EMA and the NIT factor reject rankings based in accuracy, choosing more meaningful and interpretable classifiers. PMID:24427282
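
    The EMA and NIT factor have specific definitions in the paper; as a simpler, hedged illustration of the accuracy paradox itself, the sketch below compares a majority-class classifier with a less accurate but genuinely informative one, using the mutual information between true and predicted classes as the measure of information transfer.

```python
import numpy as np

def mutual_information(confusion):
    """Mutual information (bits) between true and predicted classes from a confusion matrix."""
    p = np.asarray(confusion, float) / np.sum(confusion)
    px, py = p.sum(axis=1, keepdims=True), p.sum(axis=0, keepdims=True)
    with np.errstate(divide="ignore", invalid="ignore"):
        terms = p * np.log2(p / (px * py))
    return np.nansum(terms)

# 90/10 class imbalance. The "majority" classifier is more accurate yet transfers
# no information; the second classifier is less accurate but genuinely informative.
majority    = np.array([[90, 0], [10, 0]])   # rows: true class, cols: predicted class
informative = np.array([[80, 10], [2, 8]])

for name, cm in (("majority", majority), ("informative", informative)):
    acc = np.trace(cm) / cm.sum()
    print(f"{name:12s} accuracy {acc:.2f}, information transfer {mutual_information(cm):.3f} bits")
```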

  9. Intelligence: The Speed and Accuracy Tradeoff in High Aptitude Individuals.

    ERIC Educational Resources Information Center

    Lajoie, Suzanne P.; Shore, Bruce M.

    1986-01-01

    The relative contributions of mental speed and accuracy to Primary Mental Ability (PMA) IQ prediction were studied in 52 high ability grade 10 students. Both speed and accuracy independently predicted IQ, but not speed over and above accuracy. Accuracy was demonstrated to be universally advantageous in IQ performance, but speed varied according to…

  10. Age-related reduction of the confidence-accuracy relationship in episodic memory: effects of recollection quality and retrieval monitoring.

    PubMed

    Wong, Jessica T; Cramer, Stefanie J; Gallo, David A

    2012-12-01

    We investigated age-related reductions in episodic metamemory accuracy. Participants studied pictures and words in different colors and then took forced-choice recollection tests. These tests required recollection of the earlier presentation color, holding familiarity of the response options constant. Metamemory accuracy was assessed for each participant by comparing recollection test accuracy with corresponding confidence judgments. We found that recollection test accuracy was greater in younger than in older adults, and also greater for pictures than for font colors. Metamemory accuracy tracked each of these recollection differences, as well as individual differences in recollection test accuracy within each age group, suggesting that recollection ability affects metamemory accuracy. Critically, the age-related impairment in metamemory accuracy persisted even when the groups were matched on recollection test accuracy, suggesting that metamemory declines were not entirely due to differences in recollection frequency or quantity, but that differences in recollection quality and/or monitoring also played a role. We also found that age-related impairments in recollection and metamemory accuracy were equivalent for pictures and font colors. This result contrasted with previous false recognition findings, which predicted that older adults would be differentially impaired when monitoring memory for less distinctive memories. These and other results suggest that age-related reductions in metamemory accuracy are not entirely attributable to false recognition effects, but also depend heavily on deficient recollection and/or monitoring of specific details associated with studied stimuli. PMID:22449027
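
    One common way to quantify the kind of confidence-accuracy (metamemory resolution) relationship assessed here is a Goodman-Kruskal gamma computed over trial-level confidence ratings and correctness. The sketch below is a hedged illustration with invented data; it is not necessarily the exact statistic used in this study.

    ```python
    # Hedged sketch: Goodman-Kruskal gamma between per-trial confidence ratings
    # and correctness (1 = correct recollection, 0 = incorrect). A common
    # metamemory resolution measure; not necessarily this study's exact analysis.
    from itertools import combinations

    def goodman_kruskal_gamma(confidence, correct):
        concordant = discordant = 0
        for (c1, a1), (c2, a2) in combinations(zip(confidence, correct), 2):
            prod = (c1 - c2) * (a1 - a2)
            if prod > 0:
                concordant += 1
            elif prod < 0:
                discordant += 1
            # pairs tied on either variable are ignored
        if concordant + discordant == 0:
            return float("nan")
        return (concordant - discordant) / (concordant + discordant)

    # Hypothetical trials: confidence on a 1-4 scale, correctness 0/1.
    confidence = [4, 3, 4, 2, 1, 3, 2, 1]
    correct    = [1, 1, 1, 0, 0, 1, 1, 0]
    print(f"gamma = {goodman_kruskal_gamma(confidence, correct):.2f}")
    ```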

  11. On Accuracy of Knowledge Acquisition for Decision Making Processes Acquiring Subjective Information on the Internet

    NASA Astrophysics Data System (ADS)

    Fujimoto, Kazunori; Yamamoto, Yutaka

    This paper presents a mathematical model for decision making processes in which the knowledge for the decision is constructed automatically from subjective information on the Internet. This mathematical model enables us to know the required degree of accuracy of knowledge acquisition for constructing decision support systems using two technologies: automated knowledge acquisition from information on the Internet and automated reasoning about the acquired knowledge. The model consists of three elements: a knowledge source, which is a set of subjective information on the Internet; knowledge acquisition, which builds a knowledge base within a computer from the knowledge source; and a decision rule, which chooses a set of alternatives by using the knowledge base. One of the important features of this model is that it contains not only decision making processes but also knowledge acquisition processes. This feature enables analysis of the decision processes in terms of the sufficiency of knowledge sources and the accuracy of knowledge acquisition methods. Based on the model, decision processes by which the knowledge source and the knowledge base lead to the same choices are given, and the required degree of accuracy of knowledge acquisition is quantified as the required accuracy value. In order to show how to utilize this value for designing decision support systems, the value is calculated using some examples of knowledge sources and decision rules. This paper also describes the computational complexity of the required accuracy value calculation and shows a computation principle for reducing the complexity to polynomial order in the size of the knowledge sources.

  12. Accuracy of Consonant-Vowel Syllables in Young Cochlear Implant Recipients and Hearing Children in the Single-Word Period

    ERIC Educational Resources Information Center

    Warner-Czyz, Andrea D.; Davis, Barbara L.; MacNeilage, Peter F.

    2010-01-01

    Purpose: Attaining speech accuracy requires that children perceive and attach meanings to vocal output on the basis of production system capacities. Because auditory perception underlies speech accuracy, profiles for children with hearing loss (HL) differ from those of children with normal hearing (NH). Method: To understand the impact of auditory…

  13. Accuracy of velocities from repeated GPS measurements

    NASA Astrophysics Data System (ADS)

    Akarsu, V.; Sanli, D. U.; Arslan, E.

    2015-04-01

    Today repeated GPS measurements are still in use because we cannot always employ permanent GPS stations, owing to a variety of limitations. One area of study that uses velocities/deformation rates from repeated GPS measurements is the monitoring of crustal motion. This paper discusses the quality of velocities derived from repeated GPS measurements for the purpose of monitoring crustal motion. From a global network of International GNSS Service (IGS) stations, we processed GPS measurements repeated monthly and annually spanning nearly 15 years and estimated GPS velocities for the baseline components latitude, longitude and ellipsoidal height. We used web-based GIPSY for the processing. Assuming true deformation rates can only be determined from the solutions of 24 h observation sessions, we evaluated the accuracy of the deformation rates from 8 and 12 h sessions. We used statistical hypothesis testing to assess the velocities derived from short observation sessions. In addition, as an alternative control method we checked the accuracy of GPS solutions from short observation sessions against those of 24 h sessions, referring to statistical criteria that measure the accuracy of regression models. Results indicate that vertical-component velocities are severely degraded when repeated GPS measurements are used. The results also reveal that only about 30% of the 8 h solutions and about 40% of the 12 h solutions for the horizontal coordinates are acceptable for velocity estimation. The situation is much worse for the vertical component, for which none of the solutions from campaign measurements are acceptable for obtaining reliable deformation rates.
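
    A minimal sketch of the estimation step that underlies such velocity comparisons, assuming a simple linear-trend model for a single coordinate time series: the site velocity is the slope of an ordinary least-squares fit, with a formal standard error that can feed the kind of hypothesis testing described above. The time series, noise level and true velocity below are synthetic.

    ```python
    # Minimal sketch: estimate a site velocity (mm/yr) as the slope of a linear
    # least-squares fit to a coordinate time series from repeated campaigns.
    # Synthetic data; real analyses would also model offsets, seasonal terms, etc.
    import numpy as np

    rng = np.random.default_rng(0)
    t = np.arange(0.0, 15.0, 1.0)                    # epochs in years (annual campaigns)
    true_velocity = 12.0                             # mm/yr
    coord = true_velocity * t + rng.normal(0.0, 5.0, t.size)   # mm, with 5 mm scatter

    # Least-squares fit coord = slope * t + intercept, with slope uncertainty.
    A = np.vstack([t, np.ones_like(t)]).T
    (slope, intercept), residuals, *_ = np.linalg.lstsq(A, coord, rcond=None)
    dof = t.size - 2
    sigma2 = residuals[0] / dof                      # residual variance
    cov = sigma2 * np.linalg.inv(A.T @ A)
    slope_se = np.sqrt(cov[0, 0])

    print(f"velocity = {slope:.2f} +/- {slope_se:.2f} mm/yr")
    # Simple significance check: is the velocity distinguishable from zero?
    print(f"t-statistic = {slope / slope_se:.1f} with {dof} degrees of freedom")
    ```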

  14. Accuracy of abdominal auscultation for bowel obstruction

    PubMed Central

    Breum, Birger Michael; Rud, Bo; Kirkegaard, Thomas; Nordentoft, Tyge

    2015-01-01

    AIM: To investigate the accuracy and inter-observer variation of bowel sound assessment in patients with clinically suspected bowel obstruction. METHODS: Bowel sounds were recorded in patients with suspected bowel obstruction using a Littmann® Electronic Stethoscope. The recordings were processed to yield 25-s sound sequences in random order on PCs. Observers, recruited from doctors within the department, classified the sound sequences as either normal or pathological. The reference tests for bowel obstruction were intraoperative and endoscopic findings and clinical follow up. Sensitivity and specificity were calculated for each observer and compared between junior and senior doctors. Interobserver variation was measured using the Kappa statistic. RESULTS: Bowel sound sequences from 98 patients were assessed by 53 (33 junior and 20 senior) doctors. Laparotomy was performed in 47 patients, 35 of whom had bowel obstruction. Two patients underwent colorectal stenting due to large bowel obstruction. The median sensitivity and specificity were 0.42 (range: 0.19-0.64) and 0.78 (range: 0.35-0.98), respectively. There was no significant difference in accuracy between junior and senior doctors. The median frequency with which doctors classified bowel sounds as abnormal did not differ significantly between patients with and without bowel obstruction (26% vs 23%, P = 0.08). The 53 doctors made up 1378 unique pairs and the median Kappa value was 0.29 (range: -0.15 to 0.66). CONCLUSION: Accuracy and inter-observer agreement were generally low. Clinical decisions in patients with possible bowel obstruction should not be based on auscultatory assessment of bowel sounds. PMID:26379407
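
    For readers unfamiliar with the summary statistics reported here, the sketch below computes sensitivity, specificity and Cohen's kappa from per-case classifications using the standard formulas; the toy reference standard and observer ratings are hypothetical, not a reconstruction of the study's data.

    ```python
    # Standard sensitivity, specificity and Cohen's kappa, applied to
    # hypothetical observer classifications (1 = pathological, 0 = normal).
    def sensitivity_specificity(truth, rating):
        tp = sum(1 for t, r in zip(truth, rating) if t == 1 and r == 1)
        tn = sum(1 for t, r in zip(truth, rating) if t == 0 and r == 0)
        fp = sum(1 for t, r in zip(truth, rating) if t == 0 and r == 1)
        fn = sum(1 for t, r in zip(truth, rating) if t == 1 and r == 0)
        return tp / (tp + fn), tn / (tn + fp)

    def cohens_kappa(rater_a, rater_b):
        n = len(rater_a)
        observed = sum(1 for a, b in zip(rater_a, rater_b) if a == b) / n
        p_a1, p_b1 = sum(rater_a) / n, sum(rater_b) / n
        expected = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)
        return (observed - expected) / (1 - expected)

    truth   = [1, 1, 1, 0, 0, 0, 1, 0]   # reference standard (obstruction or not)
    rater_a = [1, 0, 1, 0, 1, 0, 0, 0]
    rater_b = [1, 1, 1, 0, 1, 0, 0, 0]

    sens, spec = sensitivity_specificity(truth, rater_a)
    print(f"rater A: sensitivity={sens:.2f}, specificity={spec:.2f}")
    print(f"kappa(A, B) = {cohens_kappa(rater_a, rater_b):.2f}")
    ```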

  15. Improvement of focus accuracy on processed wafer

    NASA Astrophysics Data System (ADS)

    Higashibata, Satomi; Komine, Nobuhiro; Fukuhara, Kazuya; Koike, Takashi; Kato, Yoshimitsu; Hashimoto, Kohji

    2013-04-01

    As feature size shrinkage in semiconductor devices progresses, process fluctuation, especially focus, strongly affects device performance. Because focus control is an ongoing challenge in optical lithography, various studies have sought to improve focus monitoring and control. Focus errors arise from wafers, exposure tools, reticles, QCs, and so on. Few studies have attempted to minimize the measurement errors of the auto focus (AF) sensors of exposure tools, especially when processed wafers are exposed. Among current focus measurement techniques, the phase shift grating (PSG) focus monitor 1) has already been proposed; its basic principle is that the intensity of the diffraction light of the mask pattern is made asymmetric by arranging a π/2 phase shift area on a reticle. The resist pattern exposed at a defocus position is shifted on the wafer, and the shifted pattern can be easily measured using an overlay inspection tool. However, it is difficult to measure the shifted pattern on a processed wafer because of interruptions caused by other patterns in the underlayer. In this paper, we therefore propose the "SEM-PSG" technique, in which the shift of the PSG resist mark is measured with a critical dimension scanning electron microscope (CD-SEM) to determine the focus error on the processed wafer. First, we evaluate the accuracy of the SEM-PSG technique. Second, by applying the SEM-PSG technique and feeding the results back to the exposure, we evaluate the focus accuracy on processed wafers. By applying SEM-PSG feedback, the focus accuracy on the processed wafer was improved from 40 to 29 nm in 3σ.

  16. Accuracy and Precision of an IGRT Solution

    SciTech Connect

    Webster, Gareth J. Rowbottom, Carl G.; Mackay, Ranald I.

    2009-07-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within ± 3% in dose over the range of sample points. For some points in high-dose gradients

  17. Accuracy and precision of an IGRT solution.

    PubMed

    Webster, Gareth J; Rowbottom, Carl G; Mackay, Ranald I

    2009-01-01

    Image-guided radiotherapy (IGRT) can potentially improve the accuracy of delivery of radiotherapy treatments by providing high-quality images of patient anatomy in the treatment position that can be incorporated into the treatment setup. The achievable accuracy and precision of delivery of highly complex head-and-neck intensity modulated radiotherapy (IMRT) plans with an IGRT technique using an Elekta Synergy linear accelerator and the Pinnacle Treatment Planning System (TPS) was investigated. Four head-and-neck IMRT plans were delivered to a semi-anthropomorphic head-and-neck phantom and the dose distribution was measured simultaneously by up to 20 microMOSFET (metal oxide semiconductor field-effect transistor) detectors. A volumetric kilovoltage (kV) x-ray image was then acquired in the treatment position, fused with the phantom scan within the TPS using Syntegra software, and used to recalculate the dose with the precise delivery isocenter at the actual position of each detector within the phantom. Three repeat measurements were made over a period of 2 months to reduce the effect of random errors in measurement or delivery. To ensure that the noise remained below 1.5% (1 SD), minimum doses of 85 cGy were delivered to each detector. The average measured dose was systematically 1.4% lower than predicted and was consistent between repeats. Over the 4 delivered plans, 10/76 measurements showed a systematic error > 3% (3/76 > 5%), for which several potential sources of error were investigated. The error was ultimately attributable to measurements made in beam penumbrae, where submillimeter positional errors result in large discrepancies in dose. The implementation of an image-guided technique improves the accuracy of dose verification, particularly within high-dose gradients. The achievable accuracy of complex IMRT dose delivery incorporating image-guidance is within +/- 3% in dose over the range of sample points. For some points in high-dose gradients

  18. Accuracy of the river discharge measurement

    NASA Astrophysics Data System (ADS)

    Chung Yang, Han

    2013-04-01

    Recording discharge values for water conservancy and hydrological analysis is very important work. Flood control projects, watershed remediation and river environmental planning projects all require discharge measurement data. Taiwan has 129 rivers which, in accordance with watershed conditions, economic development and other factors, are divided into 24 major rivers, 29 minor rivers and 79 ordinary rivers. Measuring and recording discharge for every river would be an enormous amount of work. In addition, Taiwan's rivers are characterized by steep slopes, rapid flow and high sediment concentrations, so high-flow measurement encounters real difficulties. When flood hazards occur, finding a way to reduce the time, manpower and material resources spent on river discharge measurement is very important. In this study, the river discharge measurement accuracy is used to determine the tolerance percentage for reducing the number of vertical velocity measurements, thereby reducing the time, manpower and material resources required for river discharge measurement. The velocity data used in this study come from Yang (1998), who used Fiber-optic Laser Doppler Velocimetry (FLDV) to obtain velocity data under different experimental conditions. In this study, we use these data to calculate the mean velocity of each vertical line with three different velocity-profile formulas (the law of the wall, Chiu's theory and Hu's theory), multiply by each sub-area to obtain the discharge measurement values, and compare these with the true values (obtained by direct integration) to obtain the discharge accuracy. The results show that the discharge values obtained with Chiu's theory are closest to the true value, while the largest error comes from the law of the wall. The main reason is that the law of the wall cannot describe the maximum velocity occurring below the water surface. In addition, the results also show
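
    As a hedged illustration of the computation described above (mean velocity per vertical multiplied by its sub-area and summed over the cross-section), here is a minimal mid-section sketch. The station spacings, depths and mean velocities are invented, and the velocity-profile formulas mentioned in the abstract (law of the wall, Chiu's theory, Hu's theory) are not reproduced.

    ```python
    # Minimal mid-section sketch of a discharge computation: each vertical's mean
    # velocity is multiplied by its sub-area and the products are summed.
    # Station positions, depths and mean velocities are hypothetical.
    def midsection_discharge(stations, depths, mean_velocities):
        """stations: distance from the bank (m); depths (m); mean velocities (m/s)."""
        n = len(stations)
        q = 0.0
        for i in range(n):
            left = stations[0] if i == 0 else 0.5 * (stations[i - 1] + stations[i])
            right = stations[-1] if i == n - 1 else 0.5 * (stations[i] + stations[i + 1])
            q += mean_velocities[i] * depths[i] * (right - left)
        return q

    stations = [1.0, 3.0, 5.0, 7.0, 9.0]   # m from the bank
    depths   = [0.4, 1.1, 1.6, 1.2, 0.5]   # m
    mean_v   = [0.3, 0.8, 1.1, 0.9, 0.4]   # m/s (e.g., from a velocity-profile formula)
    print(f"Q = {midsection_discharge(stations, depths, mean_v):.2f} m^3/s")
    ```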

  19. Evaluating the accuracy of transcribed clinical data.

    PubMed Central

    Wilton, R.; Pennisi, A. J.

    1993-01-01

    This study evaluated the accuracy of data transcribed into a computer-stored record from a handwritten listing of pediatric immunizations. The immunization records of 459 children seen in the UCLA Children's Health Center in March, 1993 were transcribed into a clinical computer system on an ongoing basis. Of these records, 27 (5.9%) were subsequently found to be inaccurate. Reasons for inaccuracy in the transcribed records included incomplete written records, incomplete transcription of written records, and unavailability of immunization records from multiple health-care providers. The utility of a computer-stored clinical record may be adversely affected by unavoidable inaccuracies in transcribed clinical data. PMID:8130478

  20. A Visual mining based framework for classification accuracy estimation

    NASA Astrophysics Data System (ADS)

    Arun, Pattathal Vijayakumar

    2013-12-01

    Classification techniques have been widely used in different remote sensing applications, and correct classification of mixed pixels is a tedious task. Traditional approaches adopt various statistical parameters but do not facilitate effective visualisation. Data mining tools are proving very helpful in the classification process. We propose a visual mining based framework for accuracy assessment of classification techniques using open source tools such as WEKA and PREFUSE. These tools, used in integration, can provide an efficient approach for obtaining information about improvements in classification accuracy and help in refining the training data set. We have illustrated the framework by investigating the effects of various resampling methods on classification accuracy and found that bilinear (BL) resampling is best suited for preserving radiometric characteristics. We have also investigated the optimal number of folds required for effective analysis of LISS-IV images. Classification techniques are widely used in various remote sensing applications, in which correct classification of pixels poses a serious challenge. The traditional approach, based on various statistical parameters, does not ensure effective visualisation. The application of data mining tools to classification appears very promising. The article proposes an approach based on visual exploratory analysis using open source tools such as WEKA and PREFUSE. These tools facilitate the correction of training fields and effectively support improvement of classification accuracy. The method was tested by examining the influence of different resampling methods on the preservation of radiometric accuracy, with the best results obtained for the bilinear (BL) method.

  1. Feasibility, Accuracy, and Repeatability of Suprathreshold Saccadic Vector Optokinetic Perimetry

    PubMed Central

    Murray, Ian C.; Cameron, Lorraine A.; McTrusty, Alice D.; Perperidis, Antonios; Brash, Harry M.; Fleck, Brian W.; Minns, Robert A.

    2016-01-01

    Purpose To evaluate feasibility, accuracy, and repeatability of suprathreshold Saccadic Vector Optokinetic Perimetry (SVOP) by comparison with Humphrey Field Analyzer (HFA) perimetry. Methods The subjects included children with suspected field defects (n = 10, age 5–15 years), adults with field defects (n = 33, age 39–78 years), healthy children (n = 12, age 6–14 years), and healthy adults (n = 30, age 16–61 years). The test protocol comprised repeat suprathreshold SVOP and HFA testing with the C-40 test pattern. Feasibility was assessed by protocol completeness. Sensitivity, specificity, and accuracy of SVOP was established by comparison with reliable HFA tests in two ways: (1) visual field pattern results (normal/abnormal), and (2) individual test point outcomes (seen/unseen). Repeatability of each test type was assessed using Cohen's kappa coefficient. Results Of subjects, 82% completed a full protocol. Poor reliability of HFA testing in child patients limited the robustness of comparisons in this group. Sensitivity, specificity, and accuracy across all groups when analyzing the visual field pattern results was 90.9%, 88.5%, and 89.0%, respectively, and was 69.1%, 96.9%, and 95.0%, respectively, when analyzing the individual test points. Cohen's kappa coefficient for repeatability of SVOP and HFA was excellent (0.87 and 0.88, respectively) when assessing visual field pattern results, and substantial (0.62 and 0.74, respectively) when assessing test point outcomes. Conclusions SVOP was accurate in this group of adults. Further studies are required to assess SVOP in child patient groups. Translational Relevance SVOP technology is still in its infancy but is used in a number of centers. It will undergo iterative improvements and this study provides a benchmark for future iterations. PMID:27617181

  2. Realtime and High Accuracy VLBI in Chinese Lunar Exploration Project

    NASA Astrophysics Data System (ADS)

    Weimin, Zheng

    The Chinese VLBI (Very Long Baseline Interferometry) Network (CVN) consists of five radio telescopes and one data processing center. CVN is a powerful tracking and navigation tool in the Chinese lunar exploration projects. To meet the quick-response navigation requirements of the CE lunar probes, station observation data must be sent to the VLBI center and processed in real time. CVN demonstrated this ability in the CE-1 and CE-2 missions. In December 2013, the CE-3 lander was successfully sent to the lunar surface and the Yutu rover was released. The new VLBI center and the Tianma antenna came into use. During the mission, the lander carried a special Differential One-way Range (DOR) beacon instead of the normal continuous-spectrum VLBI signals. To obtain high-precision results, CVN used the delta-DOR technique to track the lander with extreme accuracy. VLBI delay residuals after orbit determination were nearly 0.5 ns. The accuracy of the landing position is better than 100 meters. The e-VLBI technique made the observable turnaround time as short as 20-40 seconds. Same-beam VLBI was used to determine the relative position between the lander and the rover with meter-level accuracy. In subsequent lunar missions, new deep-space stations will join CVN and extend the baseline length. After soft landing and sampling, the lander will be launched from the lunar surface and complete rendezvous and docking with the orbiter. The VLBI synthesis mapping method and same-beam VLBI can provide an accurate lander location and support the rendezvous and docking procedure.

  3. An Assessment of Citizen Contributed Ground Reference Data for Land Cover Map Accuracy Assessment

    NASA Astrophysics Data System (ADS)

    Foody, G. M.

    2015-08-01

    It is now widely accepted that an accuracy assessment should be part of a thematic mapping programme. Authoritative good or best practices for accuracy assessment have been defined but are often impractical to implement. Key reasons for this situation are linked to the ground reference data used in the accuracy assessment. Typically, it is a challenge to acquire a large sample of high quality reference cases in accordance with the desired sampling designs specified as conforming to good practice, and the data collected are normally to some degree imperfect, limiting their value to an accuracy assessment which implicitly assumes the use of a gold standard reference. Citizen sensors have great potential to aid aspects of accuracy assessment. In particular, they may be able to act as a source of ground reference data that may, for example, reduce sample size problems, but concerns with data quality remain. The relative strengths and limitations of citizen contributed data for accuracy assessment are reviewed in the context of the authoritative good practices defined for studies of land cover by remote sensing. The article will highlight some of the ways that citizen contributed data have been used in accuracy assessment as well as some of the problems that require further attention, and indicate some of the potential ways forward in the future.

  4. A review on the processing accuracy of two-photon polymerization

    SciTech Connect

    Zhou, Xiaoqin; Hou, Yihong; Lin, Jieqiong

    2015-03-15

    Two-photon polymerization (TPP) is a powerful and promising technology for fabricating true three-dimensional (3D) micro/nanostructures of various materials with subdiffraction-limit resolution. It has been applied to microoptics, electronics, communications, biomedicine, microfluidic devices, MEMS and metamaterials. These applications, such as microoptics and photonic crystals, place rigorous requirements on the processing accuracy of TPP, including the dimensional accuracy, shape accuracy and surface roughness; the processing accuracy influences their performance and can even invalidate them. In order to fabricate precise 3D micro/nanostructures, the factors influencing the processing accuracy need to be considered comprehensively and systematically. In this paper, we review the basis of TPP micro/nanofabrication, including the mechanism of TPP, experimental set-ups for TPP and the scaling laws of TPP resolution. Then, we discuss the factors influencing the processing accuracy. Finally, we summarize recently reported methods for improving the processing accuracy by improving the resolution and changing the spatial arrangement of voxels.

  5. A review on the processing accuracy of two-photon polymerization

    NASA Astrophysics Data System (ADS)

    Zhou, Xiaoqin; Hou, Yihong; Lin, Jieqiong

    2015-03-01

    Two-photon polymerization (TPP) is a powerful and promising technology for fabricating true three-dimensional (3D) micro/nanostructures of various materials with subdiffraction-limit resolution. It has been applied to microoptics, electronics, communications, biomedicine, microfluidic devices, MEMS and metamaterials. These applications, such as microoptics and photonic crystals, place rigorous requirements on the processing accuracy of TPP, including the dimensional accuracy, shape accuracy and surface roughness; the processing accuracy influences their performance and can even invalidate them. In order to fabricate precise 3D micro/nanostructures, the factors influencing the processing accuracy need to be considered comprehensively and systematically. In this paper, we review the basis of TPP micro/nanofabrication, including the mechanism of TPP, experimental set-ups for TPP and the scaling laws of TPP resolution. Then, we discuss the factors influencing the processing accuracy. Finally, we summarize recently reported methods for improving the processing accuracy by improving the resolution and changing the spatial arrangement of voxels.

  6. Diagnostic accuracy of the clinical feeding evaluation in detecting aspiration in children: a systematic review.

    PubMed

    Calvo, Irene; Conway, Aifric; Henriques, Filipa; Walshe, Margaret

    2016-06-01

    The aim of this systematic review is to determine the diagnostic accuracy of clinical feeding evaluation (CFE) compared to instrumental assessments in detecting oropharyngeal aspiration (OPA) in children. This is important to support clinical decision-making and to provide safe, cost-effective, higher quality care. All published and unpublished studies in all languages assessing the diagnostic accuracy of CFE compared to videofluoroscopic swallowing study (VFSS) and/or fibre-optic endoscopic examination of swallowing (FEES) in detecting OPA in paediatric populations were sought. Databases were searched from inception to April 2015. Grey literature, citations, and references were also searched. Two independent reviewers extracted and analysed data. Accuracy estimates were calculated. Research reports were translated into English as required. Six studies examining the diagnostic accuracy of CFE using VFSS and/or FEES were eligible for inclusion. Sample sizes, populations studied, and CFE characteristics varied widely. The overall methodological quality of the studies, assessed with QUADAS-2, was considered 'low'. Results suggested that CFEs trialling liquid consistencies might provide better accuracy estimates than CFEs trialling solids exclusively. This systematic review highlights the critical lack of evidence on the accuracy of CFE in detecting OPA in children. Larger well-designed primary diagnostic test accuracy studies in this area are needed to inform dysphagia assessment in paediatrics. PMID:26862075

  7. Millimeter accuracy satellites for two color ranging

    NASA Technical Reports Server (NTRS)

    Degnan, John J.

    1993-01-01

    The principal technical challenge in designing a millimeter accuracy satellite to support two color observations at high altitudes is to provide high optical cross-section simultaneously with minimal pulse spreading. In order to address this issue, we provide a brief review of some fundamental properties of optical retroreflectors when used in spacecraft target arrays, develop a simple model for a spherical geodetic satellite, and use the model to determine some basic design criteria for a new generation of geodetic satellites capable of supporting millimeter accuracy two color laser ranging. We find that increasing the satellite diameter provides a larger surface area for additional cube mounting, thereby leading to higher cross-sections, and makes the satellite surface a better match for the incoming planar phasefront of the laser beam. Restricting the retroreflector field of view (e.g. by recessing it in its holder) limits the target response to the fraction of the satellite surface which best matches the optical phasefront, thereby controlling the amount of pulse spreading. In surveying the arrays carried by existing satellites, we find that the European STARLETTE and ERS-1 satellites appear to be the best candidates for supporting near-term two color experiments in space.

  8. Curation accuracy of model organism databases.

    PubMed

    Keseler, Ingrid M; Skrzypek, Marek; Weerasinghe, Deepika; Chen, Albert Y; Fulcher, Carol; Li, Gene-Wei; Lemmer, Kimberly C; Mladinich, Katherine M; Chow, Edmond D; Sherlock, Gavin; Karp, Peter D

    2014-01-01

    Manual extraction of information from the biomedical literature-or biocuration-is the central methodology used to construct many biological databases. For example, the UniProt protein database, the EcoCyc Escherichia coli database and the Candida Genome Database (CGD) are all based on biocuration. Biological databases are used extensively by life science researchers, as online encyclopedias, as aids in the interpretation of new experimental data and as gold standards for the development of new bioinformatics algorithms. Although manual curation has been assumed to be highly accurate, we are aware of only one previous study of biocuration accuracy. We assessed the accuracy of EcoCyc and CGD by manually selecting curated assertions within randomly chosen EcoCyc and CGD gene pages and by then validating that the data found in the referenced publications supported those assertions. A database assertion is considered to be in error if that assertion could not be found in the publication cited for that assertion. We identified 10 errors in the 633 facts that we validated across the two databases, for an overall error rate of 1.58%, and individual error rates of 1.82% for CGD and 1.40% for EcoCyc. These data suggest that manual curation of the experimental literature by Ph.D-level scientists is highly accurate. Database URL: http://ecocyc.org/, http://www.candidagenome.org// PMID:24923819

  9. High accuracy electronic material level sensor

    DOEpatents

    McEwan, Thomas E.

    1997-01-01

    The High Accuracy Electronic Material Level Sensor (electronic dipstick) is a sensor based on time domain reflectometry (TDR) of very short electrical pulses. Pulses are propagated along a transmission line or guide wire that is partially immersed in the material being measured; a launcher plate is positioned at the beginning of the guide wire. Reflected pulses are produced at the material interface due to the change in dielectric constant. The time difference of the reflections at the launcher plate and at the material interface are used to determine the material level. Improved performance is obtained by the incorporation of: 1) a high accuracy time base that is referenced to a quartz crystal, 2) an ultrawideband directional sampler to allow operation without an interconnect cable between the electronics module and the guide wire, 3) constant fraction discriminators (CFDs) that allow accurate measurements regardless of material dielectric constants, and reduce or eliminate errors induced by triple-transit or "ghost" reflections on the interconnect cable. These improvements make the dipstick accurate to better than 0.1%.

  10. High accuracy electronic material level sensor

    DOEpatents

    McEwan, T.E.

    1997-03-11

    The High Accuracy Electronic Material Level Sensor (electronic dipstick) is a sensor based on time domain reflectometry (TDR) of very short electrical pulses. Pulses are propagated along a transmission line or guide wire that is partially immersed in the material being measured; a launcher plate is positioned at the beginning of the guide wire. Reflected pulses are produced at the material interface due to the change in dielectric constant. The time difference of the reflections at the launcher plate and at the material interface are used to determine the material level. Improved performance is obtained by the incorporation of: (1) a high accuracy time base that is referenced to a quartz crystal, (2) an ultrawideband directional sampler to allow operation without an interconnect cable between the electronics module and the guide wire, (3) constant fraction discriminators (CFDs) that allow accurate measurements regardless of material dielectric constants, and reduce or eliminate errors induced by triple-transit or "ghost" reflections on the interconnect cable. These improvements make the dipstick accurate to better than 0.1%. 4 figs.
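
    A hedged sketch of the time-of-flight arithmetic behind such a TDR level gauge: the round-trip delay between the launcher-plate reflection and the material-interface reflection gives the length of the air gap above the material, and the level is the guide length minus that gap. The guide length, delay values and the assumption of near-light-speed propagation along the air section are illustrative, not taken from the patent.

    ```python
    # Hedged TDR sketch: air gap above the material from the round-trip delay
    # between the launcher-plate and material-interface reflections; the level is
    # the guide length minus that gap. Numbers are illustrative, not from the patent.
    C = 299_792_458.0      # assumed pulse speed along the air section of the guide (m/s)

    def material_level(guide_length_m, t_launcher_s, t_interface_s, velocity_factor=1.0):
        """Material level in meters above the bottom end of the guide wire."""
        round_trip = t_interface_s - t_launcher_s
        air_gap = 0.5 * round_trip * C * velocity_factor
        return guide_length_m - air_gap

    # Example: 3 m guide, reflections separated by 13.3 ns -> roughly a 2 m air gap.
    level = material_level(guide_length_m=3.0, t_launcher_s=0.0, t_interface_s=13.3e-9)
    print(f"material level ~ {level:.2f} m")
    ```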

  11. Does reader visual fatigue impact interpretation accuracy?

    NASA Astrophysics Data System (ADS)

    Krupinski, Elizabeth A.; Berbaum, Kevin S.

    2010-02-01

    To measure the impact of reader visual fatigue by assessing symptoms, the ability to keep the eye focused on the display, and diagnostic accuracy. Twenty radiology residents and 20 radiologists were given a diagnostic performance test containing 60 skeletal radiographic studies, half with fractures, before and after a day of clinical reading. Diagnostic accuracy was measured using the area under the proper binormal curve (AUC). Error in visual accommodation was measured before and after each test session, and subjects completed the Swedish Occupational Fatigue Inventory (SOFI) and the oculomotor strain subscale of the Simulator Sickness Questionnaire (SSQ) before each session. Average AUC was 0.89 for the before-work test and 0.85 for the after-work test (F(1,36) = 4.15, p = 0.049 < 0.05). There was significantly greater error in accommodation after the clinical workday (F(1,14829) = 7.81, p = 0.005 < 0.01), and after the reading test (F(1,14829) = 839.33, p < 0.0001). SOFI measures of lack of energy, physical discomfort and sleepiness were higher after a day of clinical reading (p < 0.05). The SSQ measure of oculomotor symptoms (i.e., difficulty focusing, blurred vision) was significantly higher after a day of clinical reading (F(1,75) = 20.38, p < 0.0001). Radiologists are visually fatigued by their clinical reading workday. This reduces their ability to focus on diagnostic images and to accurately interpret them.

  12. Accuracy assessment of landslide prediction models

    NASA Astrophysics Data System (ADS)

    Othman, A. N.; Mohd, W. M. N. W.; Noraini, S.

    2014-02-01

    The increasing population and expansion of settlements over hilly areas have greatly increased the impact of natural disasters such as landslides. Therefore, it is important to develop models which can accurately predict landslide hazard zones. Over the years, various techniques and models have been developed to predict landslide hazard zones. The aim of this paper is to assess the accuracy of landslide prediction models developed by the authors. The methodology involved the selection of the study area, data acquisition, data processing, model development and data analysis. The development of these models is based on nine different landslide-inducing parameters, i.e. slope, land use, lithology, soil properties, geomorphology, flow accumulation, aspect, proximity to river and proximity to road. Rank sum, rating, pairwise comparison and AHP techniques are used to determine the weights for each of the parameters used. Four different models which consider different parameter combinations are developed by the authors. Results obtained are compared to landslide history, and the accuracies for Model 1, Model 2, Model 3 and Model 4 are 66.7%, 66.7%, 60% and 22.9%, respectively. From the results, rank sum, rating and pairwise comparison can be useful techniques to predict landslide hazard zones.
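
    For the pairwise comparison and AHP weighting step mentioned above, a minimal sketch: parameter weights are taken from the normalized principal eigenvector of a Saaty-style pairwise comparison matrix, together with a consistency check. The 3x3 matrix below is hypothetical and does not reproduce the authors' nine-parameter comparisons.

    ```python
    # Minimal AHP sketch: weights from the principal eigenvector of a pairwise
    # comparison matrix, plus Saaty's consistency ratio. The 3x3 matrix is
    # hypothetical; the paper weighs nine landslide-inducing parameters.
    import numpy as np

    # Reciprocal comparison matrix, e.g. slope vs land use vs lithology (made up).
    A = np.array([
        [1.0, 3.0, 5.0],
        [1/3, 1.0, 2.0],
        [1/5, 1/2, 1.0],
    ])

    eigvals, eigvecs = np.linalg.eig(A)
    principal = np.argmax(eigvals.real)
    weights = np.abs(eigvecs[:, principal].real)
    weights /= weights.sum()

    # Consistency ratio: CI / RI, with random index RI = 0.58 for a 3x3 matrix.
    lambda_max = eigvals.real[principal]
    ci = (lambda_max - A.shape[0]) / (A.shape[0] - 1)
    print("weights:", np.round(weights, 3), " consistency ratio:", round(ci / 0.58, 3))
    ```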

  13. Dimensional accuracy of 3D printed vertebra

    NASA Astrophysics Data System (ADS)

    Ogden, Kent; Ordway, Nathaniel; Diallo, Dalanda; Tillapaugh-Fay, Gwen; Aslan, Can

    2014-03-01

    3D printer applications in the biomedical sciences and medical imaging are expanding and will have an increasing impact on the practice of medicine. Orthopedic and reconstructive surgery has been an obvious area for development of 3D printer applications as the segmentation of bony anatomy to generate printable models is relatively straightforward. There are important issues that should be addressed when using 3D printed models for applications that may affect patient care; in particular the dimensional accuracy of the printed parts needs to be high to avoid poor decisions being made prior to surgery or therapeutic procedures. In this work, the dimensional accuracy of 3D printed vertebral bodies derived from CT data for a cadaver spine is compared with direct measurements on the ex-vivo vertebra and with measurements made on the 3D rendered vertebra using commercial 3D image processing software. The vertebra was printed on a consumer grade 3D printer using an additive print process using PLA (polylactic acid) filament. Measurements were made for 15 different anatomic features of the vertebral body, including vertebral body height, endplate width and depth, pedicle height and width, and spinal canal width and depth, among others. It is shown that for the segmentation and printing process used, the results of measurements made on the 3D printed vertebral body are substantially the same as those produced by direct measurement on the vertebra and measurements made on the 3D rendered vertebra.

  14. Dust trajectory sensor: accuracy and data analysis.

    PubMed

    Xie, J; Sternovsky, Z; Grün, E; Auer, S; Duncan, N; Drake, K; Le, H; Horanyi, M; Srama, R

    2011-10-01

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Grün, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008)] and a method of triggering was developed [S. Auer, G. Lawrence, E. Grün, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010)]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1° in direction. PMID:22047326

  15. Accuracy of the blood pressure measurement.

    PubMed

    Rabbia, F; Del Colle, S; Testa, E; Naso, D; Veglio, F

    2006-08-01

    Blood pressure measurement is the cornerstone for the diagnosis, the treatment and the research on arterial hypertension, and all of the decisions about one of these single aspects may be dramatically influenced by the accuracy of the measurement. Over the past 20 years or so, the accuracy of the conventional Riva-Rocci/Korotkoff technique of blood pressure measurement has been questioned and efforts have been made to improve the technique with automated devices. In the same period, recognition of the phenomenon of white coat hypertension, whereby some individuals with an apparent increase in blood pressure have normal, or reduced, blood pressures when measurement is repeated away from the medical environment, has focused attention on methods of measurement that provide profiles of blood pressure behavior rather than relying on isolated measurements under circumstances that may in themselves influence the level of blood pressure recorded. These methodologies have included repeated measurements of blood pressure using the traditional technique, self-measurement of blood pressure in the home or work place, and ambulatory blood pressure measurement using innovative automated devices. The purpose of this review is to serve as a source of practical information about the commonly used methods for blood pressure measurement: the traditional Riva-Rocci method and the automated methods. PMID:17016412

  16. Dust trajectory sensor: Accuracy and data analysis

    NASA Astrophysics Data System (ADS)

    Xie, J.; Sternovsky, Z.; Grün, E.; Auer, S.; Duncan, N.; Drake, K.; Le, H.; Horanyi, M.; Srama, R.

    2011-10-01

    The Dust Trajectory Sensor (DTS) instrument is developed for the measurement of the velocity vector of cosmic dust particles. The trajectory information is imperative in determining the particles' origin and distinguishing dust particles from different sources. The velocity vector also reveals information on the history of interaction between the charged dust particle and the magnetospheric or interplanetary space environment. The DTS operational principle is based on measuring the induced charge from the dust on an array of wire electrodes. In recent work, the DTS geometry has been optimized [S. Auer, E. Grün, S. Kempf, R. Srama, A. Srowig, Z. Sternovsky, and V Tschernjawski, Rev. Sci. Instrum. 79, 084501 (2008), 10.1063/1.2960566] and a method of triggering was developed [S. Auer, G. Lawrence, E. Grün, H. Henkel, S. Kempf, R. Srama, and Z. Sternovsky, Nucl. Instrum. Methods Phys. Res. A 622, 74 (2010), 10.1016/j.nima.2010.06.091]. This article presents the method of analyzing the DTS data and results from a parametric study on the accuracy of the measurements. A laboratory version of the DTS has been constructed and tested with particles in the velocity range of 2-5 km/s using the Heidelberg dust accelerator facility. Both the numerical study and the analyzed experimental data show that the accuracy of the DTS instrument is better than about 1% in velocity and 1° in direction.

  17. Approaching chemical accuracy with quantum Monte Carlo.

    PubMed

    Petruzielo, F R; Toulouse, Julien; Umrigar, C J

    2012-03-28

    A quantum Monte Carlo study of the atomization energies for the G2 set of molecules is presented. Basis size dependence of diffusion Monte Carlo atomization energies is studied with a single determinant Slater-Jastrow trial wavefunction formed from Hartree-Fock orbitals. With the largest basis set, the mean absolute deviation from experimental atomization energies for the G2 set is 3.0 kcal/mol. Optimizing the orbitals within variational Monte Carlo improves the agreement between diffusion Monte Carlo and experiment, reducing the mean absolute deviation to 2.1 kcal/mol. Moving beyond a single determinant Slater-Jastrow trial wavefunction, diffusion Monte Carlo with a small complete active space Slater-Jastrow trial wavefunction results in near chemical accuracy. In this case, the mean absolute deviation from experimental atomization energies is 1.2 kcal/mol. It is shown from calculations on systems containing phosphorus that the accuracy can be further improved by employing a larger active space. PMID:22462844

  18. Impairment in flexible regulation of speed and accuracy in children with ADHD.

    PubMed

    Vallesi, Antonino; D'Agati, Elisa; Pasini, Augusto; Pitzianti, Mariabernarda; Curatolo, Paolo

    2013-10-01

    Attention deficit hyperactivity disorder (ADHD) is characterized by poor adaptation of behavior to environmental demands, including difficulties in flexibly regulating behavior. To understand whether ADHD is associated with a reduction of strategic flexibility in modulating speed and accuracy, we used a perceptual decision-making task that required participants to randomly stress either fast or accurate responding. Thirty-one drug-free boys with ADHD combined-type (mean age: 10.2 years) and 33 healthy control boys (mean age: 10.7 years), matched for age and IQ, participated. Both reaction time and accuracy data were analyzed. Our findings demonstrated significantly lower accuracy in ADHD children than in controls when switching from speed to accuracy instructions. This deficit was directly associated with hyperactivity symptoms but not with inattention. Our results showed that ADHD is associated with a deficit in dynamically switching response strategy according to task demands on a trial-to-trial basis. PMID:24007981

  19. Accuracy Comparison of Vhr Systematic-Ortho Satellite Imageries against Vhr Orthorectified Imageries Using Gcp

    NASA Astrophysics Data System (ADS)

    Widyaningrum, E.; Fajari, M.; Octariady, J.

    2016-06-01

    Very High Resolution (VHR) satellite imageries such as Pleiades, WorldView-2 and GeoEye-1 used for precise mapping purposes must be corrected for any distortion to achieve the expected accuracy. Orthorectification is performed to eliminate geometric errors in the VHR satellite imageries. Orthorectification requires main input data such as a Digital Elevation Model (DEM) and Ground Control Points (GCPs). The VHR systematic-ortho imageries were generated using the SRTM 30 m DEM without using any GCP data. The accuracy difference between VHR systematic-ortho imageries and VHR imageries orthorectified using GCPs is currently not exactly defined. This study aimed to identify the accuracy of VHR systematic-ortho imageries compared against imageries orthorectified using GCPs. The orthorectified imageries using GCPs were created using a rigorous model. Accuracy is evaluated using several independent check points.
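
    A hedged sketch of the check-point evaluation step: horizontal residuals at independent check points are summarized as per-axis RMSE values and a circular error estimate. The residuals are invented, and the CE90 conversion factor assumes roughly equal, normally distributed errors in easting and northing (an NSSDA-style convention), which may not match the evaluation actually used in the study.

    ```python
    # Hedged sketch: horizontal accuracy of an orthoimage from independent check
    # points. Residuals (image-derived minus surveyed coordinates) are made up.
    # CE90 = 1.5175 * RMSE_r assumes roughly equal, normally distributed errors
    # in easting and northing (an NSSDA-style convention).
    import numpy as np

    dx = np.array([0.8, -1.2, 0.5, 1.6, -0.7, 0.3])   # easting residuals (m)
    dy = np.array([-0.4, 0.9, -1.1, 0.6, 1.3, -0.2])  # northing residuals (m)

    rmse_x = np.sqrt(np.mean(dx ** 2))
    rmse_y = np.sqrt(np.mean(dy ** 2))
    rmse_r = np.sqrt(rmse_x ** 2 + rmse_y ** 2)
    ce90 = 1.5175 * rmse_r

    print(f"RMSE_x={rmse_x:.2f} m  RMSE_y={rmse_y:.2f} m  RMSE_r={rmse_r:.2f} m  CE90~{ce90:.2f} m")
    ```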

  20. Evaluating the Accuracy of Health News Publications in a Drug Literature Evaluation Course

    PubMed Central

    Timpe, Erin M.; Eichner, Samantha F.

    2006-01-01

    Objectives To design an assignment for second-professional year pharmacy students to assess the accuracy and quality of health information published in the news. Design Students in a literature evaluation course were assigned a health-related news publication to review and find the original published research article. They then critically evaluated the quality and accuracy of the news publication based on the original research. All students wrote a critique focusing on the quality and accuracy of the news article and potential responses the lay public might have. Assessment Eighty-four percent of students agreed the writing assignment reinforced critical literature evaluation skills, while 90% agreed the assignment contributed to completion of course objectives. Conclusions A writing assignment requiring comparison of a news publication to the original research reinforces critical literature evaluation and communication skills, as well as stimulates thought about the accuracy, quality, and public responses to health information published in the news. PMID:17136202

  1. Consideration for high accuracy radiation efficiency measurements for the Solar Power Satellite (SPS) subarrays

    NASA Technical Reports Server (NTRS)

    Kozakoff, D. J.; Schuchardt, J. M.; Ryan, C. E.

    1980-01-01

    The transmit beam and radiation efficiency for 10-meter-square subarray panels were quantified. The measurement performance potential of far-field elevated and ground-reflection ranges and near-field techniques was evaluated. The state of the art of critical components and/or unique facilities required was identified. Relative cost, complexity and performance tradeoffs were performed for techniques capable of achieving the accuracy objectives. Because of the large electrical size of the SPS subarray panels and the requirement for high accuracy measurements, specialized measurement facilities are considered necessary. The most critical measurement error sources have been identified for both conventional far-field and near-field techniques. Although the adopted error budget requires advances in the state of the art of microwave instrumentation, the requirements appear feasible based on extrapolation from today's technology. Additional performance and cost tradeoffs need to be completed before the choice of the preferred measurement technique is finalized.

  2. MSTAR: an absolute metrology system with submicrometer accuracy

    NASA Astrophysics Data System (ADS)

    Lay, Oliver P.; Dubovitsky, Serge; Peters, Robert D.; Burger, Johan; Steier, William H.; Ahn, Seh-Won; Fetterman, Harold R.

    2004-10-01

    Laser metrology systems are a key component of stellar interferometers, used to monitor path lengths and dimensions internal to the instrument. Most interferometers use 'relative' metrology, in which the integer number of wavelengths along the path is unknown, and the measurement of length is ambiguous. Changes in the path length can be measured relative to an initial calibration point, but interruption of the metrology beam at any time requires a re-calibration of the system. The MSTAR sensor (Modulation Sideband Technology for Absolute Ranging) is a new system for measuring absolute distance, capable of resolving the integer cycle ambiguity of standard interferometers, and making it possible to measure distance with sub-nanometer accuracy. We describe the design of the system, show results for target distances up to 1 meter, and demonstrate how the system can be scaled to kilometer-scale distances. In recent experiments, we have used white light interferometry to augment the 'truth' measurements and validate the zero-point of the system. MSTAR is a general-purpose tool for conveniently measuring length with much greater accuracy than was previously possible, and has a wide range of possible applications.
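
    To make the integer-cycle ambiguity concrete, here is a hedged numerical sketch of classic two-wavelength (synthetic-wavelength) ranging: two nearby wavelengths yield a much longer synthetic wavelength whose phase gives a coarse but unambiguous distance, which then fixes the integer cycle count of the fine single-wavelength measurement. The wavelengths and target distance are illustrative and noise-free; this is not a description of MSTAR's modulation-sideband implementation itself.

    ```python
    # Hedged sketch of two-wavelength (synthetic-wavelength) ranging, showing how
    # a coarse phase measurement resolves the integer-cycle ambiguity of a fine
    # one. Idealized and noise-free; NOT the MSTAR modulation-sideband scheme.
    lam1 = 1.550e-6                              # m
    lam2 = 1.552e-6                              # m
    lam_syn = lam1 * lam2 / abs(lam1 - lam2)     # synthetic wavelength, ~1.2 mm

    # The coarse measurement is unambiguous over lam_syn / 2 (~0.6 mm here),
    # so pick a target distance inside that range for this illustration.
    true_distance = 4.56789e-4                   # m

    def fractional_phase(distance, wavelength):
        """Fractional part of the round-trip phase, in cycles."""
        return (2.0 * distance / wavelength) % 1.0

    phi1 = fractional_phase(true_distance, lam1)
    phi2 = fractional_phase(true_distance, lam2)

    coarse = ((phi1 - phi2) % 1.0) * lam_syn / 2.0   # unambiguous, coarse distance
    n1 = round(2.0 * coarse / lam1 - phi1)           # integer cycle count on lam1
    fine = (n1 + phi1) * lam1 / 2.0                  # fine, now absolute

    print(f"true   = {true_distance:.9f} m")
    print(f"coarse = {coarse:.9f} m")
    print(f"fine   = {fine:.9f} m")
    ```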

  3. Assessing expected accuracy of probe vehicle travel time reports

    SciTech Connect

    Hellinga, B.; Fu, L.

    1999-12-01

    The use of probe vehicles to provide estimates of link travel times has been suggested as a means of obtaining travel times within signalized networks for use in advanced traveler information systems. Past research in the literature has produced contradictory conclusions regarding the expected accuracy of these probe-based estimates, and consequently has estimated different levels of market penetration of probe vehicles required to sustain accurate data within an advanced traveler information system. This paper examines the effect of sampling bias on the accuracy of the probe estimates. An analytical expression is derived on the basis of queuing theory to prove that bias in arrival time distributions and/or in the proportion of probes associated with each link departure turning movement will lead to a systematic bias in the sample estimate of the mean delay. Subsequently, the potential for and impact of sampling bias on a signalized link is examined by simulating an arterial corridor. The analytical derivation and the simulation analysis show that the reliability of probe-based average link travel times is highly affected by sampling bias. Furthermore, this analysis shows that the contradictory conclusions of previous research are directly related to the presence or absence of sample bias.
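
    A hedged Monte Carlo sketch of the sampling-bias argument: if probe vehicles are over-represented in one turning movement whose delay distribution differs from the others, the sample mean delay is systematically biased relative to the population mean. The delay distributions and probe proportions are invented; this is an illustration, not the paper's analytical queuing-theory derivation.

    ```python
    # Hedged Monte Carlo sketch: probes over-represented in one turning movement
    # (with a different delay distribution) bias the sample mean link delay.
    # All numbers are invented; not the paper's analytical derivation.
    import random

    random.seed(1)

    # Movement shares in the general traffic stream and mean delays (seconds).
    movements = {"through": (0.70, 20.0), "left": (0.20, 45.0), "right": (0.10, 15.0)}
    population_mean = sum(p * d for p, d in movements.values())

    # Probe fleet with biased movement proportions (turns left more often).
    probe_shares = {"through": 0.40, "left": 0.50, "right": 0.10}

    n_probes = 5000
    sample = []
    for _ in range(n_probes):
        movement = random.choices(list(probe_shares), weights=list(probe_shares.values()))[0]
        mean_delay = movements[movement][1]
        sample.append(random.expovariate(1.0 / mean_delay))   # exponential delay model

    print(f"population mean delay ~ {population_mean:.1f} s")
    print(f"biased probe estimate ~ {sum(sample) / len(sample):.1f} s")
    ```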

  4. Generalized and Heuristic-Free Feature Construction for Improved Accuracy

    PubMed Central

    Fan, Wei; Zhong, Erheng; Peng, Jing; Verscheure, Olivier; Zhang, Kun; Ren, Jiangtao; Yan, Rong; Yang, Qiang

    2010-01-01

    State-of-the-art learning algorithms accept data in feature vector format as input. Examples belonging to different classes may not always be easy to separate in the original feature space. One may ask: can transformation of existing features into a new space reveal significant discriminative information not obvious in the original space? Since there can be an infinite number of ways to extend features, it is impractical to first enumerate and then perform feature selection. Second, evaluation of discriminative power on the complete dataset is not always optimal. This is because features highly discriminative on a subset of examples may not necessarily be significant when evaluated on the entire dataset. Third, feature construction ought to be automated and general, such that it does not require domain knowledge and its accuracy improvements hold across a large number of classification algorithms. In this paper, we propose a framework to address these problems through the following steps: (1) divide-conquer to avoid exhaustive enumeration; (2) local feature construction and evaluation within subspaces of examples where local error is still high and constructed features thus far still do not predict well; (3) weighting rules based search that is domain knowledge free and has provable performance guarantee. Empirical studies indicate that significant improvement (as much as 9% in accuracy and 28% in AUC) is achieved using the newly constructed features over a variety of inductive learners evaluated against a number of balanced, skewed and high-dimensional datasets. Software and datasets are available from the authors. PMID:21544257

  5. High-accuracy mass spectrometry for fundamental studies.

    PubMed

    Kluge, H-Jürgen

    2010-01-01

    Mass spectrometry for fundamental studies in metrology and atomic, nuclear and particle physics requires extreme sensitivity and efficiency as well as ultimate resolving power and accuracy. An overview will be given of the global status of high-accuracy mass spectrometry for fundamental physics and metrology. Three quite different examples of modern mass spectrometric experiments in physics are presented: (i) the retardation spectrometer KATRIN at the Forschungszentrum Karlsruhe, employing electrostatic filtering in combination with magnetic-adiabatic collimation, the biggest mass spectrometer for determining the smallest mass, i.e. the mass of the electron anti-neutrino; (ii) the Experimental Cooler-Storage Ring at GSI, a mass spectrometer of medium size, relative to other accelerators, for determining medium-heavy masses; and (iii) the Penning trap facility SHIPTRAP at GSI, the smallest mass spectrometer for determining the heaviest masses, those of super-heavy elements. Finally, a short view into the future will address the HITRAP project at GSI for fundamental studies with highly charged ions. PMID:20530821

  6. Infrared ear thermometers--parameters influencing their reading and accuracy.

    PubMed

    Pusnik, Igor; Drnovsek, Janko

    2005-12-01

    Infrared ear thermometers (IRETs) are extensively used for measuring the temperature of a human body. For accurate measurements with IRETs they have to be calibrated regularly with an appropriate and traceable calibration system. Such systems are neither widely available nor are there many competent (accredited) laboratories which can provide traceability for IRETs. This paper describes some important influential parameters in the calibration and use of IRETs, which were observed during the extensive research on IRETs and have not been reported in the literature yet. IRET readings, and consequently also their most important metrological characteristics, accuracy and uncertainty of measurement, depend on these influential parameters. According to our findings, we would like to warn users of medical radiation thermometers, not only IRETs but also forehead thermometers and infrared temperature scanning systems, that they should be extremely careful in selection, maintenance and use of medical radiation thermometers. Measurement accuracy, as it is required by several technical standards, is hard to achieve with the majority of currently available medical radiation thermometers. PMID:16311454

  7. Accuracy of locating circular features using machine vision

    NASA Astrophysics Data System (ADS)

    Sklair, Cheryl W.; Hoff, William A.; Gatrell, Lance B.

    1992-03-01

    The ability to automatically locate objects using vision is a key technology for flexible, intelligent robotic operations. The vision task is facilitated by placing optical targets or markings in advance on the objects to be located. A number of researchers have advocated the use of circular target features as the features that can be most accurately located. This paper describes extensive analysis on circle centroid accuracy using both simulations and laboratory measurements. The work was part of an effort to design a video positioning sensor for NASA's Flight Telerobotic Servicer that would meet accuracy requirements. We have analyzed the main contributors to centroid error and have classified them into the following: (1) spatial quantization errors, (2) errors due to signal noise and random timing errors, (3) surface tilt errors, and (4) errors in modeling camera geometry. It is possible to compensate for the errors in (3) given an estimate of the tilt angle, and the errors from (4) by calibrating the intrinsic camera attributes. The errors in (1) and (2) cannot be compensated for, but they can be measured and their effects reduced somewhat. To characterize these error sources, we measured centroid repeatability under various conditions, including synchronization method, signal-to-noise ratio, and frequency attenuation. Although these results are specific to our video system and equipment, they provide a reference point that should be a characteristic of typical CCD cameras and digitization equipment.

  8. Analysis of the Ionospheric Corrections Accuracy of EGNOS System

    NASA Astrophysics Data System (ADS)

    Prats, X.; Orus, R.; Hernandez-Pajares, M.; Juan, M.; Sanz, J.

    2002-01-01

    Satellite Based Augmentation Systems (SBAS) provide Global Navigation Satellite System (GNSS) users with an extra set of information in order to enhance the accuracy and integrity levels of stand-alone GNSS positioning. The ionosphere is one of the main error components in SBAS. Therefore, the analysis of system performance requires a calibration of the broadcast corrections. In this context, different test methods to analyze the performance of these corrections are presented. The first set of tests involves two of the ionospheric calculations that are applied to the Global Ionospheric Maps (GIM) computed by the IGS Associate Analysis Centers: a TEC comparison against TOPEX and a test of STEC variations. The second family of tests provides two very accurate analyses based on large-baseline ambiguity resolution techniques, giving accuracies of about 16 cm of L1 and a few millimeters of L1 in the STEC and double-differenced STEC determinations, respectively. These four analyses have been applied to the signal of the EGNOS System Test Bed (ESTB); EGNOS is the European SBAS provider.

  9. Evaluating the Accuracy of Hessian Approximations for Direct Dynamics Simulations.

    PubMed

    Zhuang, Yu; Siebert, Matthew R; Hase, William L; Kay, Kenneth G; Ceotto, Michele

    2013-01-01

    Direct dynamics simulations are a very useful and general approach for studying the atomistic properties of complex chemical systems, since an electronic structure theory representation of a system's potential energy surface is possible without the need for fitting an analytic potential energy function. In this paper, recently introduced compact finite difference (CFD) schemes for approximating the Hessian [J. Chem. Phys.2010, 133, 074101] are tested by employing the monodromy matrix equations of motion. Several systems, including carbon dioxide and benzene, are simulated, using both analytic potential energy surfaces and on-the-fly direct dynamics. The results show, depending on the molecular system, that electronic structure theory Hessian direct dynamics can be accelerated up to 2 orders of magnitude. The CFD approximation is found to be robust enough to deal with chaotic motion, concomitant with floppy and stiff mode dynamics, Fermi resonances, and other kinds of molecular couplings. Finally, the CFD approximations allow parametrical tuning of different CFD parameters to attain the best possible accuracy for different molecular systems. Thus, a direct dynamics simulation requiring the Hessian at every integration step may be replaced with an approximate Hessian updating by tuning the appropriate accuracy. PMID:26589009

  10. Accuracy of genomic predictions in Bos indicus (Nellore) cattle

    PubMed Central

    2014-01-01

    Background Nellore cattle play an important role in beef production in tropical systems and there is great interest in determining if genomic selection can contribute to accelerate genetic improvement of production and fertility in this breed. We present the first results of the implementation of genomic prediction in a Bos indicus (Nellore) population. Methods Influential bulls were genotyped with the Illumina Bovine HD chip in order to assess genomic predictive ability for weight and carcass traits, gestation length, scrotal circumference and two selection indices. 685 samples and 320 238 single nucleotide polymorphisms (SNPs) were used in the analyses. A forward-prediction scheme was adopted to predict the genomic breeding values (DGV). In the training step, the estimated breeding values (EBV) of bulls were deregressed (dEBV) and used as pseudo-phenotypes to estimate marker effects using four methods: genomic BLUP with or without a residual polygenic effect (GBLUP20 and GBLUP0, respectively), a mixture model (Bayes C) and Bayesian LASSO (BLASSO). Empirical accuracies of the resulting genomic predictions were assessed based on the correlation between DGV and dEBV for the testing group. Results Accuracies of genomic predictions ranged from 0.17 (navel at weaning) to 0.74 (finishing precocity). Across traits, Bayesian regression models (Bayes C and BLASSO) were more accurate than GBLUP. The average empirical accuracies were 0.39 (GBLUP0), 0.40 (GBLUP20) and 0.44 (Bayes C and BLASSO). Bayes C and BLASSO tended to produce deflated predictions (i.e. slope of the regression of dEBV on DGV greater than 1). Further analyses suggested that higher-than-expected accuracies were observed for traits for which EBV means differed significantly between two breeding subgroups that were identified in a principal component analysis based on genomic relationships. Conclusions Bayesian regression models are of interest for future applications of genomic selection in this population
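
    A minimal sketch (not from the cited study) of how the empirical accuracy and the inflation/deflation check described above can be computed for a forward-prediction test set: accuracy is the correlation between direct genomic values (DGV) and deregressed EBV (dEBV), and the slope of the regression of dEBV on DGV flags deflation when it exceeds 1. NumPy is assumed; the simulated values are purely illustrative.

      import numpy as np

      def empirical_accuracy(dgv: np.ndarray, debv: np.ndarray) -> tuple[float, float]:
          """Return (accuracy, regression slope) for a forward-prediction test set.

          accuracy = corr(DGV, dEBV); slope = cov(dEBV, DGV) / var(DGV).
          A slope greater than 1 indicates deflated (under-dispersed) genomic predictions.
          """
          accuracy = np.corrcoef(dgv, debv)[0, 1]
          slope = np.cov(debv, dgv)[0, 1] / np.var(dgv, ddof=1)
          return accuracy, slope

      # Toy example with simulated values (illustration only)
      rng = np.random.default_rng(1)
      dgv = rng.normal(0.0, 1.0, 200)
      debv = 1.2 * dgv + rng.normal(0.0, 1.0, 200)   # slope > 1 mimics deflation
      acc, b = empirical_accuracy(dgv, debv)
      print(f"accuracy = {acc:.2f}, slope of dEBV on DGV = {b:.2f}")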

  11. The Accuracy of Radio Interferometric Measurements of Earth Rotation

    NASA Technical Reports Server (NTRS)

    Eubanks, T. M.; Steppe, J. A.; Spieth, M. A.

    1985-01-01

    The accuracy of very long baseline interferometry earth rotation (UT1) measurements is examined by intercomparing TEMPO and POLARIS data for 1982 and the first half of 1983. None of these data are simultaneous, and so a proper intercomparison requires accounting for the scatter introduced by the rapid, unpredictable UT1 variations driven by exchanges of angular momentum with the atmosphere. A statistical model of these variations, based on meteorological estimates of the Atmospheric Angular Momentum, is derived, and the optimal linear (Kalman) smoother for this model is constructed. The scatter between smoothed and independent raw data is consistent with the residual formal errors, which do not depend upon the actual scatter of the UT1 data. This represents the first time that an accurate prediction of the scatter between UT1 data sets was possible.

  12. The generalized Radon transform: Sampling, accuracy and memory considerations

    SciTech Connect

    Luengo Hendriks, Cris L.; van Ginkel, Michael; Verbeek, Piet W.; van Vliet, Lucas J.

    2004-09-23

    The generalized Radon (or Hough) transform is a well-known tool for detecting parameterized shapes in an image. The Radon transform is a mapping between the image space and a parameter space. The coordinates of a point in the latter correspond to the parameters of a shape in the image. The amplitude at that point corresponds to the amount of evidence for that shape. In this paper we discuss three important aspects of the Radon transform. The first aspect is discretization. Using concepts from sampling theory we derive a set of sampling criteria for the generalized Radon transform. The second aspect is accuracy. For the specific case of the Radon transform for spheres, we examine how well the location of the maxima matches the true parameters. We derive a correction term to reduce the bias in the estimated radii. The third aspect concerns a projection-based algorithm to reduce memory requirements.

  13. LANDSAT Scene-to-scene Registration Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Anderson, J. E.

    1984-01-01

    Initial results obtained from the registration of LANDSAT-4 data to LANDSAT-2 MSS data are documented and compared with results obtained from a LANDSAT-2 MSS-to-LANDSAT-2 scene-to-scene registration (using the same LANDSAT-2 MSS data as the base data set in both procedures). RMS errors calculated on the control points used in the establishment of scene-to-scene mapping equations are compared to errors computed from independently chosen verification points. Models developed to estimate actual scene-to-scene registration accuracy based on the use of electrostatic plots are also presented. Analysis of the results indicates a statistically significant difference in the RMS errors for the element contribution. Scan line errors were not significantly different. It appears that a modification to the LANDSAT-4 MSS scan mirror coefficients is required to correct the situation.

  14. Remote sensing and the Mississippi high accuracy reference network

    NASA Technical Reports Server (NTRS)

    Mick, Mark; Alexander, Timothy M.; Woolley, Stan

    1994-01-01

    Since 1986, NASA's Commercial Remote Sensing Program (CRSP) at Stennis Space Center has supported commercial remote sensing partnerships with industry. CRSP's mission is to maximize U.S. market exploitation of remote sensing and related space-based technologies and to develop advanced technical solutions for spatial information requirements. Observation, geolocation, and communications technologies are converging and their integration is critical to realize the economic potential for spatial informational needs. Global positioning system (GPS) technology enables a virtual revolution in geopositionally accurate remote sensing of the earth. A majority of states are creating GPS-based reference networks, or high accuracy reference networks (HARN). A HARN can be defined for a variety of local applications and tied to aerial or satellite observations to provide an important contribution to geographic information systems (GIS). This paper details CRSP's experience in the design and implementation of a HARN in Mississippi and the design and support of future applications of integrated earth observations, geolocation, and communications technology.

  15. How Patients Can Improve the Accuracy of their Medical Records

    PubMed Central

    Dullabh, Prashila M.; Sondheimer, Norman K.; Katsh, Ethan; Evans, Michael A.

    2014-01-01

    , pharmacists responded positively to 68 percent of patient requests for medication list changes. (3) Processing patient feedback will require both software algorithms and human interpretation. For the 107-form subsample, pharmacists accepted patient input in 51 percent of cases where they could not contact the patient; where the patient was contacted, they accepted input in 68 percent of cases. This suggests there may be opportunities to automate feedback filtering and processing for more efficient (and larger scale) medication-list optimization. (4) A supportive overall e-health environment makes acceptance of an online patient feedback system more likely. Review of Geisinger usage data showed that patients who completed the medication feedback form had previously accessed MyGeisinger 2.3 times as often as the average patient and initiated secure messages with a clinician 1.35 times as often as patients not involved in the pilot. Conclusions: Patient feedback, placed in a useful workflow, can improve medical record accuracy. Electronic health record (EHR) vendors and developers need to build appropriate capabilities into applications. Continued research and development is needed to enable health care organizations to elicit and process patient information most effectively. PMID:25848614

  16. Researches on High Accuracy Prediction Methods of Earth Orientation Parameters

    NASA Astrophysics Data System (ADS)

    Xu, X. Q.

    2015-09-01

    The Earth's rotation reflects coupling processes among the solid Earth, atmosphere, oceans, mantle, and core on multiple spatial and temporal scales. The Earth's rotation can be described by the Earth orientation parameters (EOP), mainly including the two polar motion components PM_X and PM_Y and the variation in the length of day ΔLOD. The EOP are crucial in the transformation between the terrestrial and celestial reference systems, and have important applications in many areas such as deep space exploration, precise satellite orbit determination, and astrogeodynamics. However, the EOP products obtained by space geodetic techniques are generally delayed by several days to two weeks. The growing demands of modern space navigation make high-accuracy EOP prediction a worthy topic. This thesis addresses the following three aspects, with the purpose of improving EOP forecast accuracy. (1) We analyze the relation between the length of the basic data series and the EOP forecast accuracy, and compare the EOP prediction accuracy of the linear autoregressive (AR) model and the nonlinear artificial neural network (ANN) method by performing least squares (LS) extrapolations. The results show that high-precision EOP forecasts can be realized by appropriate selection of the basic data series length according to the required prediction span: for short-term prediction the basic data series should be shorter, while for long-term prediction the series should be longer. The analysis also shows that the LS+AR model is more suitable for short-term forecasts, while the LS+ANN model shows advantages in medium- and long-term forecasts. (2) We develop for the first time a new method which combines the autoregressive model and the Kalman filter (AR+Kalman) in short-term EOP prediction. The equations of observation and state are established using the EOP series and the autoregressive coefficients
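
    As an illustrative aside (not the author's implementation), the LS+AR scheme mentioned in point (1) can be sketched as a least-squares fit of a trend plus periodic terms, followed by an autoregressive forecast of the LS residuals added to the LS extrapolation. The periods, AR order, and function names below are assumptions made only for the sketch; NumPy is assumed.

      import numpy as np

      def ls_ar_forecast(t, y, t_new, periods=(365.25, 435.0), ar_order=4):
          """LS fit (trend + harmonics) of an EOP series plus an AR forecast of the residuals."""
          t, y, t_new = np.asarray(t, float), np.asarray(y, float), np.asarray(t_new, float)

          def design(tt):
              cols = [np.ones_like(tt), tt]
              for p in periods:
                  cols += [np.sin(2 * np.pi * tt / p), np.cos(2 * np.pi * tt / p)]
              return np.column_stack(cols)

          beta, *_ = np.linalg.lstsq(design(t), y, rcond=None)
          resid = y - design(t) @ beta

          # Fit AR coefficients by least squares on lagged residuals
          n, p = resid.size, ar_order
          lags = np.column_stack([resid[p - k - 1:n - k - 1] for k in range(p)])
          a, *_ = np.linalg.lstsq(lags, resid[p:], rcond=None)

          # Recursive AR forecast of the residuals over the prediction span
          hist, fc = list(resid[-p:]), []
          for _ in range(t_new.size):
              nxt = sum(a[k] * hist[-k - 1] for k in range(p))
              fc.append(nxt)
              hist.append(nxt)

          return design(t_new) @ beta + np.array(fc)

      # Toy usage: a synthetic daily polar-motion-like series, predicted 30 days ahead
      t = np.arange(3000.0)
      y = 0.1 * np.sin(2 * np.pi * t / 365.25) + 0.17 * np.sin(2 * np.pi * t / 435.0)
      pred = ls_ar_forecast(t, y, t_new=np.arange(3000.0, 3030.0))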

  17. Simultaneous nuclear data target accuracy study for innovative fast reactors.

    SciTech Connect

    Aliberti, G.; Palmiotti, G.; Salvatores, M.; Nuclear Engineering Division; INL; CEA Cadarache

    2007-01-01

    The present paper summarizes the major outcomes of a study conducted within a Nuclear Energy Agency Working Party on Evaluation Cooperation (NEA WPEC) initiative aiming to investigate the nuclear data needs of future innovative nuclear systems, to quantify them, and to propose a strategy to meet them. Within NEA WPEC Subgroup 26 an uncertainty assessment has been carried out using covariance data recently processed by joint efforts of several US and European laboratories. In general, the uncertainty analysis shows that, for the wide selection of fast reactor concepts considered, the integral parameter uncertainties resulting from the assumed nuclear data uncertainties are probably acceptable in the early phases of design feasibility studies. However, in the successive phase of preliminary conceptual designs, and in later design phases of selected reactor and fuel cycle concepts, improved data and methods will be needed in order to reduce margins, for both economic and safety reasons. It is then important to define priority issues as soon as possible, i.e. which nuclear data (isotope, reaction type, energy range) need improvement, in order to quantify target accuracies and to select a strategy to meet the requirements (e.g. by selected new differential measurements and by the use of integral experiments). In this context one should account for the wide range of high-accuracy integral experiments already performed and available in national or, better, international data bases, in order to indicate the new integral experiments that will be needed to account for requirements arising from innovative design features, and to provide the full integral data base needed for validation of the design simulation tools.

  18. IRCM spectral signature measurements instrumentation featuring enhanced radiometric accuracy

    NASA Astrophysics Data System (ADS)

    Lantagne, Stéphane; Prel, Florent; Moreau, Louis; Roy, Claude; Willers, Cornelius J.

    2015-10-01

    Hyperspectral infrared (IR) signature measurements are performed in military applications including aircraft and naval vessel stealth characterization, detection/lock-on range estimation, and flare efficiency characterization. Numerous military applications require high-precision measurement of infrared signatures. For instance, Infrared Countermeasure (IRCM) systems and Infrared Counter-Countermeasure (IRCCM) systems are continuously evolving: infrared flares defeated IR guided seekers, the flares were in turn defeated by intelligent IR guided seekers, and jammers then defeated the intelligent IR guided seekers [7]. A precise knowledge of the target infrared signature phenomenology is crucial for the development and improvement of countermeasure and counter-countermeasure systems, and so precise quantification of the infrared energy emitted from the targets requires accurate spectral signature measurements. Errors in infrared characterization measurements can lead to weaknesses in the safety of the countermeasure system and errors in the determination of the detection/lock-on range of an aircraft. The infrared signatures are analyzed, modeled, and simulated to provide a good understanding of the signature phenomenology and to improve the efficiency of IRCM and IRCCM technologies [7,8,9]. There is a growing need for infrared spectral signature measurement technology in order to further improve and validate infrared-based models and simulations. The addition of imagery to spectroradiometers improves the measurement capability for complex targets and scenes because all elements in the scene can now be measured simultaneously. However, the limited dynamic range of the Focal Plane Array (FPA) sensors used in these instruments confines the range of measurable radiance intensities. This ultimately affects the radiometric accuracy of these complex signatures. We will describe and demonstrate how the ABB hyperspectral imaging spectroradiometer features enhance the radiometric accuracy

  19. Glacier Mapping With Landsat TM: Improvements and Accuracy

    NASA Astrophysics Data System (ADS)

    Paul, F.; Huggel, C.; Kaeaeb, A.; Maisch, M.

    The new Swiss Glacier Inventory for the year 2000 (SGI 2000) is presently being derived from Landsat TM data. Glacier areas are obtained by segmentation of a ratio image from TM bands 4 and 5. This method has proven to be very simple and highly accurate, an essential requirement for world-wide application within the project GLIMS (Global Land Ice Measurements from Space). Misclassification with the TM4/TM5 ratio occurs for lakes, forests, and areas with vegetation in cloud shadows. Digital image processing techniques are used to classify these regions separately and eliminate them from the glacier map. Automatic mapping of debris-covered glacier ice is difficult due to its spectral similarity with the surrounding terrain. For the SGI 2000, an attempt has been made to obtain the debris-covered area on glaciers by a combination of pixel-based image classification, digital terrain modelling, an object-oriented procedure and change detection analysis. First results of these improvements are presented. The accuracy of the TM-derived glacier outlines is assessed by comparison with manually derived outlines from higher-resolution data sets (pan bands from SPOT, IRS-1C and Ikonos). The overlay of outlines shows very good correspondence (within the georeferencing accuracy), and the comparison of glacier areas reveals differences smaller than 5% for debris-free ice. Since acquisition of the IRS-1C and Ikonos imagery is one year before and after the TM scene, respectively, small differences are also a result of glacier retreat. The automatically mapped debris-covered glacier areas are compared to the areas assigned manually on the TM image by visual interpretation. For most glaciers only a few pixels have to be corrected; for some others larger modifications are required.

  20. Statistical fitting accuracy in photon correlation spectroscopy

    NASA Technical Reports Server (NTRS)

    Shaumeyer, J. N.; Briggs, Matthew E.; Gammon, Robert W.

    1993-01-01

    Continuing our experimental investigation of the fitting accuracy associated with photon correlation spectroscopy, we collect 150 correlograms of light scattered at 90 deg from a thermostated sample of 91-nm-diameter, polystyrene latex spheres in water. The correlograms are taken with two correlators: one with linearly spaced channels and one with geometrically spaced channels. Decay rates are extracted from the single-exponential correlograms with both nonlinear least-squares fits and second-order cumulant fits. We make several statistical comparisons between the two fitting techniques and verify an earlier result that there is no sample-time dependence in the decay rate errors. We find, however, that the two fitting techniques give decay rates that differ by 1 percent.
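
    A generic sketch (not the authors' code) of the two fitting routes compared above, assuming a normalized single-exponential intensity correlogram g2(tau) = 1 + A*exp(-2*Gamma*tau): a nonlinear least-squares fit, and a second-order cumulant fit applied to ln|g1|. SciPy and NumPy are assumed; the decay rate and noise level in the toy data are arbitrary.

      import numpy as np
      from scipy.optimize import curve_fit

      def nls_decay_rate(tau, g2, baseline=1.0):
          """Nonlinear least-squares fit of g2(tau) = baseline + A*exp(-2*Gamma*tau)."""
          model = lambda t, A, gamma: baseline + A * np.exp(-2.0 * gamma * t)
          (A, gamma), _ = curve_fit(model, tau, g2, p0=(g2[0] - baseline, 1.0 / tau[tau.size // 2]))
          return gamma

      def cumulant_decay_rate(tau, g2, baseline=1.0):
          """Second-order cumulant fit: ln|g1| ~ ln(b) - Gamma*tau + 0.5*mu2*tau**2."""
          y = 0.5 * np.log(np.clip(g2 - baseline, 1e-12, None))   # ln|g1|, since g2 - 1 = |g1|**2
          c = np.polyfit(tau, y, 2)                                # linear coefficient = -Gamma
          return -c[1]

      # Toy data: single-exponential correlogram with Gamma = 500 1/s plus small noise
      rng = np.random.default_rng(2)
      tau = np.linspace(1e-5, 5e-3, 120)
      g2 = 1.0 + 0.8 * np.exp(-2 * 500.0 * tau) + rng.normal(0, 1e-3, tau.size)
      print(nls_decay_rate(tau, g2), cumulant_decay_rate(tau, g2))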

  1. Laser focus positioning method with submicrometer accuracy.

    PubMed

    Alexeev, Ilya; Strauss, Johannes; Gröschl, Andreas; Cvecek, Kristian; Schmidt, Michael

    2013-01-20

    Accurate positioning of a sample is one of the primary challenges in laser micromanufacturing. There are a number of methods that allow detection of the surface position; however, only a few of them use the beam of the processing laser as a basis for the measurement. Those methods have an advantage that any changes in the processing laser beam can be inherently accommodated. This work describes a direct, contact-free method to accurately determine workpiece position with respect to the structuring laser beam focal plane based on nonlinear harmonic generation. The method makes workpiece alignment precise and time efficient due to ease of automation and provides the repeatability and accuracy of the surface detection of less than 1 μm. PMID:23338188

  2. Quantitative code accuracy evaluation of ISP33

    SciTech Connect

    Kalli, H.; Miwrrin, A.; Purhonen, H.

    1995-09-01

    Aiming at quantifying code accuracy, a methodology based on the Fast Fourier Transform has been developed at the University of Pisa, Italy. The paper deals with a short presentation of the methodology and its application to pre-test and post-test calculations submitted to the International Standard Problem ISP33. This was a double-blind natural circulation exercise with a stepwise reduced primary coolant inventory, performed in PACTEL facility in Finland. PACTEL is a 1/305 volumetrically scaled, full-height simulator of the Russian type VVER-440 pressurized water reactor, with horizontal steam generators and loop seals in both cold and hot legs. Fifteen foreign organizations participated in ISP33, with 21 blind calculations and 20 post-test calculations, altogether 10 different thermal hydraulic codes and code versions were used. The results of the application of the methodology to nine selected measured quantities are summarized.

  3. Accuracy of lineaments mapping from space

    NASA Technical Reports Server (NTRS)

    Short, Nicholas M.

    1989-01-01

    The use of Landsat and other space imaging systems for lineament detection is analyzed in terms of their effectiveness in recognizing and mapping fractures and faults, and the results of several studies providing a quantitative assessment of lineament mapping accuracy are discussed. The cases under investigation include a Landsat image of the surface overlying a part of the Anadarko Basin of Oklahoma, Landsat images and selected radar imagery of major lineament systems distributed over much of the Canadian Shield, and space imagery covering a part of the East African Rift in Kenya. It is demonstrated that space imagery can detect a significant portion of a region's fracture pattern; however, significant fractions of the faults and fractures recorded on a field-produced geological map are missing from the imagery, as is evident in the Kenya case.

  4. High current high accuracy IGBT pulse generator

    SciTech Connect

    Nesterov, V.V.; Donaldson, A.R.

    1995-05-01

    A solid state pulse generator capable of delivering high current triangular or trapezoidal pulses into an inductive load has been developed at SLAC. Energy stored in a capacitor bank of the pulse generator is switched to the load through a pair of insulated gate bipolar transistors (IGBT). The circuit can then recover the remaining energy and transfer it back to the capacitor bank without reversing the capacitor voltage. A third IGBT device is employed to control the initial charge to the capacitor bank, a command charging technique, and to compensate for pulse to pulse power losses. The rack mounted pulse generator contains a 525 μF capacitor bank. It can deliver 500 A at 900 V into inductive loads up to 3 mH. The current amplitude and discharge time are controlled to 0.02% accuracy by a precision controller through the SLAC central computer system. This pulse generator drives a series pair of extraction dipoles.

  5. Quantum mechanical calculations to chemical accuracy

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.

    1991-01-01

    The accuracy of current molecular-structure calculations is illustrated with examples of quantum mechanical solutions for chemical problems. Two approaches are considered: (1) coupled-cluster singles and doubles with a perturbational estimate of the contribution of connected triple excitations, CCSD(T); and (2) the multireference configuration-interaction (MRCI) approach to the correlation problem. The MRCI approach gains greater applicability by means of size-extensive modifications such as the averaged coupled-pair functional approach. The examples of solutions to chemical problems include those for C-H bond energies, the vibrational frequencies of O3, identifying the ground state of Al2 and Si2, and the Lewis-Rayleigh afterglow and the Hermann IR system of N2. Accurate molecular wave functions can be derived from a combination of basis-set saturation studies and full configuration-interaction calculations.

  6. Accuracy of the Cloud Integrating Nephelometer

    NASA Technical Reports Server (NTRS)

    Gerber, Hermann E.

    2004-01-01

    Potential error sources for measurements with the Cloud Integrating Nephelometer (CIN) are discussed and analyzed, including systematic errors of the measurement approach, flow and particle-trajectory deviations at flight velocity, ice-crystal breakup on probe surfaces, and errors in calibration and developing scaling constants. It is concluded that errors are minimal, and that the accuracy of the CIN should be close to the systematic behavior of the CIN derived in Gerber et al (2000). Absolute calibration of the CIN with a transmissometer operating co-located in a mountain-top cloud shows that the earlier scaling constant for the optical extinction coefficient obtained by other means is within 5% of the absolute calibration value, and that the CIN measurements on the Citation aircraft flights during the CRYSTAL-FACE study are accurate.

  7. Positioning accuracy of the neurotron 1000

    SciTech Connect

    Cox, R.S.; Murphy, M.J.

    1995-12-31

    The Neurotron 1000 is a novel treatment machine under development for frameless stereotaxic radiosurgery that consists of a compact X-band accelerator mounted on a robotic arm. The therapy beam is guided to the lesion by an imaging system, which includes two diagnostic x-ray cameras that view the patient during treatment. Patient position and motion are measured by the imaging system and appropriate corrections are communicated in real time to the robotic arm for beam targeting and motion tracking. The three tests reported here measured the pointing accuracy of the therapy beam and the present capability of the imaging guidance system. The positioning and pointing test measured the ability of the robotic arm to direct the beam through a test isocenter from arbitrary arm positions. The test isocenter was marked by a small light-sensitive crystal and the beam axis was simulated by a laser.

  8. Copper disk pyrheliometer of high accuracy

    SciTech Connect

    Hsieh, C.K.; Wang, X.A.

    1983-01-01

    A copper disk pyrheliometer has been designed and constructed that uses a new methodology to measure solar radiation. By operating the shutter of the instrument and measuring the heating and cooling rates of the sensor at the moment when the sensor is at the same temperature, the solar radiation can be accurately determined from these rates. The method is highly accurate and is shown to be totally independent of the loss coefficient in the measurement. The pyrheliometer has been tested using a standard irradiance lamp in the laboratory. The uncertainty of the instrument is identified to be ±0.61%. Field testing was also conducted by comparing data with that of a calibrated (Eppley) Normal Incidence Pyrheliometer. This paper spells out details of the construction and testing of the instrument; the analysis underlying the methodology is also covered in detail. Because of its high accuracy, the instrument is considered to be well suited as a bench standard for the measurement of solar radiation.

  9. Guiding Center Equations of High Accuracy

    SciTech Connect

    R.B. White, G. Spizzo and M. Gobbin

    2013-03-29

    Guiding center simulations are an important means of predicting the effect of resistive and ideal magnetohydrodynamic instabilities on particle distributions in toroidal magnetically confined thermonuclear fusion research devices. Because saturated instabilities typically have amplitudes δB/B of a few times 10⁻⁴, numerical accuracy is of concern in discovering the effect of mode-particle resonances. We develop a means of following guiding center orbits which is greatly superior to the methods currently in use. In the presence of ripple or time dependent magnetic perturbations, both energy and canonical momentum are conserved to better than one part in 10¹⁴, and the relation between changes in canonical momentum and energy is also conserved to very high order.

  10. The empirical accuracy of uncertain inference models

    NASA Technical Reports Server (NTRS)

    Vaughan, David S.; Yadrick, Robert M.; Perrin, Bruce M.; Wise, Ben P.

    1987-01-01

    Uncertainty is a pervasive feature of the domains in which expert systems are designed to function. Research designed to test uncertain inference methods for accuracy and robustness, in accordance with standard engineering practice, is reviewed. Several studies were conducted to assess how well various methods perform on problems constructed so that correct answers are known, and to find out what underlying features of a problem cause strong or weak performance. For each method studied, situations were identified in which performance deteriorates dramatically. Over a broad range of problems, some well known methods do only about as well as a simple linear regression model, and often much worse than a simple independence probability model. The results indicate that some commercially available expert system shells should be used with caution, because the uncertain inference models that they implement can yield rather inaccurate results.

  11. On the Accuracy of the MINC approximation

    SciTech Connect

    Lai, C.H.; Pruess, K.; Bodvarsson, G.S.

    1986-02-01

    The method of "multiple interacting continua" (MINC) is based on the assumption that changes in thermodynamic conditions of rock matrix blocks are primarily controlled by the distance from the nearest fracture. The accuracy of this assumption was evaluated for regularly shaped (cubic and rectangular) rock blocks with uniform initial conditions, which are subjected to a step change in boundary conditions on the surface. Our results show that pressures (or temperatures) predicted from the MINC approximation may deviate from the exact solutions by as much as 10 to 15% at certain points within the blocks. However, when fluid (or heat) flow rates are integrated over the entire block surface, the MINC approximation and the exact solution agree to better than 1%. This indicates that the MINC approximation can accurately represent transient inter-porosity flow in fractured porous media, provided that matrix blocks are indeed subjected to nearly uniform boundary conditions at all times.

  12. Hydrodynamic modeling of Singapore's coastal waters: Nesting and model accuracy

    NASA Astrophysics Data System (ADS)

    Hasan, G. M. Jahid; van Maren, Dirk Sebastiaan; Ooi, Seng Keat

    2016-01-01

    The tidal variation in Singapore's coastal waters is influenced by large-scale, complex tidal dynamics (through the interaction of the Indian Ocean and the South China Sea) as well as monsoon-driven low frequency variations, requiring a model with large spatial coverage. Close to the shores, the complex topography, influenced by headlands and small islands, requires a high resolution model to simulate the tidal dynamics. This can be achieved through direct nesting or multi-scale nesting involving multiple model grids. In this paper, we investigate the effect of grid resolution and multi-scale nesting on the tidal dynamics in Singapore's coastal waters by comparing model results with observations using different statistical techniques. The results reveal that the intermediate-scale model is generally sufficiently accurate (equal to or better than the most refined model), but also that the most refined model is only more accurate when nested in the intermediate-scale model (requiring multi-scale nesting). The latter is the result of the complex tidal dynamics around Singapore, where the dominantly diurnal tidal currents are decoupled from the semi-diurnal water level variations. Furthermore, different techniques to quantify model accuracy (harmonic analysis, basic statistics and more complex statistics) are inconsistent in determining which model is more accurate.

  13. A hyperspectral imager for high radiometric accuracy Earth climate studies

    NASA Astrophysics Data System (ADS)

    Espejo, Joey; Drake, Ginger; Heuerman, Karl; Kopp, Greg; Lieber, Alex; Smith, Paul; Vermeer, Bill

    2011-10-01

    We demonstrate a visible and near-infrared prototype pushbroom hyperspectral imager for Earth climate studies that is capable of using direct solar viewing for on-orbit cross calibration and degradation tracking. Direct calibration to solar spectral irradiances allows the Earth-viewing instrument to achieve the required climate-driven absolute radiometric accuracies of <0.2% (1σ). A solar calibration requires viewing scenes having radiances 10⁵ times higher than typical Earth scenes. To facilitate this calibration, the instrument features an attenuation system that uses an optimized combination of different precision aperture sizes, neutral density filters, and variable integration timing for Earth and solar viewing. The optical system consists of a three-mirror anastigmat telescope and an Offner spectrometer. The as-built system has a 12.2° cross-track field of view with 3 arcmin spatial resolution and covers a 350-1050 nm spectral range with 10 nm resolution. A polarization-compensated configuration using the Offner in an out-of-plane alignment is demonstrated as a viable approach to minimizing polarization sensitivity. The mechanical design takes advantage of relaxed tolerances in the optical design by using rigid, non-adjustable diamond-turned tabs for optical mount locating surfaces. We show that this approach achieves the required optical performance. A prototype spaceflight unit is also demonstrated to prove the applicability of these solar cross calibration methods to on-orbit environments. This unit is evaluated for optical performance prior to and after GEVS shake, thermal vacuum, and lifecycle tests.

  14. Factors Governing Surface Form Accuracy In Diamond Machined Components

    NASA Astrophysics Data System (ADS)

    Myler, J. K.; Page, D. A.

    1988-10-01

    Manufacturing methods for diamond machined optical surfaces, for application at infrared wavelengths, require that a new set of criteria be recognised for the specification of surface form. Appropriate surface form parameters are discussed with particular reference to an XY cartesian geometry CNC machine. Methods for reducing surface form errors in diamond machining are discussed for certain areas such as tool wear, tool centring, and the fixturing of the workpiece. Examples of achievable surface form accuracy are presented. Traditionally, optical surfaces have been produced by random polishing techniques using polishing compounds and lapping tools. For lens manufacture, the simplest surface which could be created corresponded to a sphere; the sphere is a natural outcome of a random grinding and polishing process. The measurement of surface form accuracy would most commonly be performed using a contact test gauge plate, polished to a sphere of known radius of curvature. QA would simply be achieved using a diffuse monochromatic source and looking for residual deviations between the polished surface and the test plate. The specifications governing the manufacture of surfaces using these techniques would call for the accuracy to which the generated surface should match the test plate, defined by a spherical deviation from the required curvature and a non-spherical astigmatic error. Consequently, optical design software has tolerancing routines which specifically allow the designer to assess the influence of spherical error and astigmatic error on the optical performance. The creation of general aspheric surfaces is not so straightforward using conventional polishing techniques, since the surface profile is non-spherical and a good approximation to a power series. For infrared applications (λ = 8-12 μm), numerically controlled single point diamond turning is an alternative manufacturing technology capable of creating aspheric profiles as well as

  15. Meteor orbit determination with improved accuracy

    NASA Astrophysics Data System (ADS)

    Dmitriev, Vasily; Lupovla, Valery; Gritsevich, Maria

    2015-08-01

    Modern observational techniques make it possible to retrieve a meteor's trajectory and velocity with high accuracy, and high quality observational data are accumulating rapidly each year. This creates new challenges for solving the problem of meteor orbit determination. Currently, the traditional technique, based on applying corrections to the zenith distance and apparent velocity using the well-known Schiaparelli formula, is widely used. An alternative approach relies on meteoroid trajectory correction using numerical integration of the equation of motion (Clark & Wiegert, 2011; Zuluaga et al., 2013). In our work we suggest a technique of meteor orbit determination based on strict coordinate transformation and integration of the differential equation of motion. We demonstrate the advantage of this method in comparison with the traditional technique. We provide results of calculations by the different methods for real, recently occurred fireballs, as well as for simulated cases with a priori known parameters. The simulated data were used to demonstrate the conditions under which application of the more complex technique is necessary. It was found that for several low velocity meteoroids application of the traditional technique may lead to a dramatic loss of orbit precision (first of all, due to errors in Ω, because this parameter has the highest potential accuracy). Our results are complemented by an analysis of the sources of perturbations, allowing us to indicate quantitatively which factors have to be considered in orbit determination. In addition, the developed method includes analysis of observational error propagation based on strict covariance transition, which is also presented. Acknowledgements. This work was carried out at MIIGAiK and supported by the Russian Science Foundation, project No. 14-22-00197. References: Clark, D. L., & Wiegert, P. A. (2011). A numerical comparison with the Ceplecha analytical meteoroid orbit determination method. Meteoritics & Planetary Science, 46(8), pp. 1217

  16. Classification Accuracy Increase Using Multisensor Data Fusion

    NASA Astrophysics Data System (ADS)

    Makarau, A.; Palubinskas, G.; Reinartz, P.

    2011-09-01

    The practical use of very high resolution visible and near-infrared (VNIR) data is still growing (IKONOS, Quickbird, GeoEye-1, etc.), but for classification purposes the number of bands is limited in comparison to full spectral imaging. These limitations may lead to confusion between materials such as different roofs, pavements, roads, etc., and therefore may result in wrong interpretation and use of classification products. Employing hyperspectral data is another solution, but their low spatial resolution (compared to multispectral data) restricts their use for many applications. A further improvement can be achieved by fusion of multisensor data, since this may increase the quality of scene classification. Integration of Synthetic Aperture Radar (SAR) and optical data is widely performed for automatic classification, interpretation, and change detection. In this paper we present an approach for fusing very high resolution SAR and multispectral data for automatic classification in urban areas. Single polarization TerraSAR-X (SpotLight mode) and multispectral data are integrated using the INFOFUSE framework, consisting of feature extraction (information fission), unsupervised clustering (data representation on a finite domain and dimensionality reduction), and data aggregation (Bayesian or neural network). This framework allows a relevant way of combining multisource data following consensus theory. The classification is not influenced by the limitations of dimensionality, and the calculation complexity primarily depends on the dimensionality reduction step. Fusion of single polarization TerraSAR-X, WorldView-2 (VNIR or full set), and Digital Surface Model (DSM) data allows different types of urban objects to be classified into predefined classes of interest with increased accuracy. A comparison to classification results of WorldView-2 multispectral data (8 spectral bands) is provided and the numerical evaluation of the method in comparison to

  17. Food Label Accuracy of Common Snack Foods

    PubMed Central

    Jumpertz, Reiner; Venti, Colleen A; Le, Duc Son; Michaels, Jennifer; Parrington, Shannon; Krakoff, Jonathan; Votruba, Susanne

    2012-01-01

    Nutrition labels have raised awareness of the energetic value of foods, and represent for many a pivotal guideline to regulate food intake. However, recent data have created doubts on label accuracy. Therefore we tested label accuracy for energy and macronutrient content of prepackaged energy-dense snack food products. We measured “true” caloric content of 24 popular snack food products in the U.S. and determined macronutrient content in 10 selected items. Bomb calorimetry and food factors were used to estimate energy content. Macronutrient content was determined according to Official Methods of Analysis. Calorimetric measurements were performed in our metabolic laboratory between April 20th and May 18th and macronutrient content was measured between September 28th and October 7th of 2010. Serving size, by weight, exceeded label statements by 1.2% [median] (25th percentile −1.4, 75th percentile 4.3, p=0.10). When differences in serving size were accounted for, metabolizable calories were 6.8 kcal (0.5, 23.5, p=0.0003) or 4.3% (0.2, 13.7, p=0.001) higher than the label statement. In a small convenience sample of the tested snack foods, carbohydrate content exceeded label statements by 7.7% (0.8, 16.7, p=0.01); however fat and protein content were not significantly different from label statements (−12.8% [−38.6, 9.6], p=0.23; 6.1% [−6.1, 17.5], p=0.32). Carbohydrate content explained 40% and serving size an additional 55% of the excess calories. Among a convenience sample of energy-dense snack foods, caloric content is higher than stated on the nutrition labels, but overall well within FDA limits. This discrepancy may be explained by inaccurate carbohydrate content and serving size. PMID:23505182

  18. Combining Multiple Gyroscope Outputs for Increased Accuracy

    NASA Technical Reports Server (NTRS)

    Bayard, David S.

    2003-01-01

    A proposed method of processing the outputs of multiple gyroscopes to increase the accuracy of rate (that is, angular-velocity) readings has been developed theoretically and demonstrated by computer simulation. Although the method is applicable, in principle, to any gyroscopes, it is intended especially for application to gyroscopes that are parts of microelectromechanical systems (MEMS). The method is based on the concept that the collective performance of multiple, relatively inexpensive, nominally identical devices can be better than that of one of the devices considered by itself. The method would make it possible to synthesize the readings of a single, more accurate gyroscope (a virtual gyroscope) from the outputs of a large number of microscopic gyroscopes fabricated together on a single MEMS chip. The big advantage would be that the combination of the MEMS gyroscope array and the processing circuitry needed to implement the method would be smaller, lighter in weight, and less power-hungry, relative to a conventional gyroscope of equal accuracy. The method (see figure) is one of combining and filtering the digitized outputs of multiple gyroscopes to obtain minimum-variance estimates of rate. In the combining-and-filtering operations, measurement data from the gyroscopes would be weighted and smoothed with respect to each other according to the gain matrix of a minimum- variance filter. According to Kalman-filter theory, the gain matrix of the minimum-variance filter is uniquely specified by the filter covariance, which propagates according to a matrix Riccati equation. The present method incorporates an exact analytical solution of this equation.
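
    A heavily simplified, static illustration of the idea (the cited method uses a full Kalman filter whose gain follows a matrix Riccati equation; this sketch only shows the limiting case of independent, unbiased gyros with known noise variances, where the minimum-variance combination reduces to an inverse-variance weighted mean). Names and values are hypothetical; NumPy is assumed.

      import numpy as np

      def fuse_gyros(readings: np.ndarray, variances: np.ndarray) -> tuple[float, float]:
          """Minimum-variance (inverse-variance weighted) fusion of N rate readings.

          readings, variances: arrays of shape (N,). Returns (fused_rate, fused_variance).
          """
          w = 1.0 / variances
          fused = np.sum(w * readings) / np.sum(w)
          return fused, 1.0 / np.sum(w)

      # Toy example: 16 noisy MEMS gyros observing a true rate of 0.1 rad/s
      rng = np.random.default_rng(3)
      true_rate, sigma = 0.1, 0.05
      readings = true_rate + rng.normal(0.0, sigma, 16)
      rate, var = fuse_gyros(readings, np.full(16, sigma**2))
      print(f"fused rate = {rate:.4f} rad/s, std = {np.sqrt(var):.4f} rad/s (single gyro: {sigma})")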

  19. Improving Accuracy of Image Classification Using GIS

    NASA Astrophysics Data System (ADS)

    Gupta, R. K.; Prasad, T. S.; Bala Manikavelu, P. M.; Vijayan, D.

    The remote sensing signal that reaches the sensor on board the satellite is a complex aggregation of signals (in an agricultural field, for example) from the soil (with all its variations such as colour, texture, particle size, clay content, organic and nutrient content, inorganic content, water content, etc.), the plant (height, architecture, leaf area index, mean canopy inclination, etc.), canopy closure status and atmospheric effects, and from this we want to find, say, characteristics of the vegetation. If the sensor on board the satellite makes measurements in n bands (measurement vector n of dimension n×1) and the number of classes in the image is c (class vector f of dimension c×1), then under linear mixture modeling the pixel classification problem can be written as n = m f + ε, where m is the transformation matrix of dimension n×c and ε is the error vector (noise). The problem is to estimate f by inverting the above equation, and the possible solutions to such a problem are many. Thus, recovering individual classes from satellite data is an ill-posed inverse problem for which a unique solution is not feasible, and this limits the obtainable classification accuracy. Maximum Likelihood (ML) is the constraint most commonly applied in such a situation, and it suffers from the handicaps of an assumed Gaussian distribution and an assumed random nature of pixels (in fact there is high auto-correlation among the pixels of a specific class, and further high auto-correlation among the pixels in sub-classes where the homogeneity among pixels is high). Because of this, achieving very high accuracy in the classification of remote sensing images is not a straightforward proposition. With the availability of GIS for the area under study, (i) a priori probabilities for the different classes can be assigned to the ML classifier in more realistic terms, and (ii) the purity of the training sets for the different thematic classes can be better ascertained. To what extent this could improve the accuracy of classification in ML classifier
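
    An illustrative sketch (not from the cited work) of how GIS-derived prior probabilities enter a Gaussian maximum-likelihood decision rule: each class score is the log prior plus the Gaussian log-likelihood, so a GIS layer that supplies realistic class priors directly shifts the decision boundaries. All class statistics and priors below are made up for the example; NumPy is assumed.

      import numpy as np

      def ml_classify(pixels, means, covs, priors):
          """Gaussian maximum-likelihood classification with class prior probabilities.

          pixels: (N, n_bands); means: (C, n_bands); covs: (C, n_bands, n_bands);
          priors: (C,), e.g. per-class area fractions taken from a GIS layer.
          Returns the index of the most probable class for each pixel.
          """
          n_classes = len(priors)
          scores = np.empty((pixels.shape[0], n_classes))
          for c in range(n_classes):
              inv = np.linalg.inv(covs[c])
              _, logdet = np.linalg.slogdet(covs[c])
              d = pixels - means[c]
              maha = np.einsum("ij,jk,ik->i", d, inv, d)       # squared Mahalanobis distance
              scores[:, c] = np.log(priors[c]) - 0.5 * logdet - 0.5 * maha
          return np.argmax(scores, axis=1)

      # Toy 2-band, 2-class example; priors weighted toward class 0 as a GIS layer might suggest
      means = np.array([[40.0, 60.0], [55.0, 50.0]])
      covs = np.array([np.eye(2) * 25.0, np.eye(2) * 25.0])
      labels = ml_classify(np.array([[47.0, 55.0]]), means, covs, priors=np.array([0.8, 0.2]))
      print(labels)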

  20. A Kalman filter algorithm for terminal-area navigation with sensors of moderate accuracy

    NASA Technical Reports Server (NTRS)

    Cicolani, L. S.; Kanning, G.; Schmidt, S. F.

    1983-01-01

    Translational state estimation in terminal area operations, using a set of commonly available position, air data, and acceleration sensors, is described. Kalman filtering is applied to obtain maximum estimation accuracy from the sensors, but feasibility in real-time computations requires a variety of approximations and devices aimed at minimizing the required computation time with only negligible loss of accuracy. Accuracy behavior throughout the terminal area, its relation to sensor accuracy, its effect on trajectory tracking errors and control activity in an automatic flight control system, and its adequacy in terms of existing criteria for various terminal area operations are examined. The principal investigative tool is a simulation of the system. Previously announced in STAR as N83-29193.
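
    A toy, one-dimensional analogue (not the paper's filter) of blending moderate-accuracy position fixes with accelerometer data in a Kalman filter: the state is [position, velocity], the accelerometer drives the prediction step, and the position fixes drive the update. Noise levels and names are assumptions; NumPy is assumed.

      import numpy as np

      def kalman_nav_1d(z_pos, accel, dt, sigma_pos=15.0, sigma_acc=0.5):
          """1-D translational state estimation from position fixes plus accelerometer input."""
          F = np.array([[1.0, dt], [0.0, 1.0]])
          B = np.array([0.5 * dt**2, dt])
          H = np.array([[1.0, 0.0]])
          Q = np.outer(B, B) * sigma_acc**2          # process noise from accelerometer errors
          R = np.array([[sigma_pos**2]])

          x, P = np.zeros(2), np.eye(2) * 1e3
          out = []
          for zk, ak in zip(z_pos, accel):
              x = F @ x + B * ak                     # predict using measured acceleration
              P = F @ P @ F.T + Q
              y = zk - H @ x                         # innovation from the position fix
              S = H @ P @ H.T + R
              K = P @ H.T @ np.linalg.inv(S)         # Kalman gain
              x = x + (K @ y).ravel()
              P = (np.eye(2) - K @ H) @ P
              out.append(x.copy())
          return np.array(out)

      # Example: constant 100 m/s motion observed through noisy fixes and a noisy accelerometer
      rng = np.random.default_rng(4)
      t = np.arange(200) * 0.1
      est = kalman_nav_1d(z_pos=100 * t + rng.normal(0, 15, t.size),
                          accel=rng.normal(0, 0.5, t.size), dt=0.1)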

  1. Matters of Accuracy and Conventionality: Prior Accuracy Guides Children's Evaluations of Others' Actions

    ERIC Educational Resources Information Center

    Scofield, Jason; Gilpin, Ansley Tullos; Pierucci, Jillian; Morgan, Reed

    2013-01-01

    Studies show that children trust previously reliable sources over previously unreliable ones (e.g., Koenig, Clement, & Harris, 2004). However, it is unclear from these studies whether children rely on accuracy or conventionality to determine the reliability and, ultimately, the trustworthiness of a particular source. In the current study, 3- and…

  2. Considerations for high accuracy radiation efficiency measurements for the Solar Power Satellite (SPS) subarrays

    NASA Technical Reports Server (NTRS)

    Kozakoff, D. J.; Schuchardt, J. M.; Ryan, C. E.

    1980-01-01

    The relatively large apertures to be used in SPS, the small half-power beamwidths, and the desire to accurately quantify antenna performance dictate the requirement for specialized measurement techniques. Objectives include the following: (1) For 10-meter square subarray panels, quantify considerations for measuring power in the transmit beam and radiation efficiency to ±1 percent (±0.04 dB) accuracy. (2) Evaluate the measurement performance potential of far-field elevated and ground reflection ranges and of near-field techniques. (3) Identify the state of the art of critical components and/or unique facilities required. (4) Perform relative cost, complexity and performance tradeoffs for techniques capable of achieving the accuracy objectives. The precision required by the techniques discussed below is not obtained by current methods, which are capable of ±10 percent (± dB) performance. In virtually every area associated with these planned measurements, advances in the state of the art are required.

  3. Morphological Awareness and Children's Writing: Accuracy, Error, and Invention

    ERIC Educational Resources Information Center

    McCutchen, Deborah; Stull, Sara

    2015-01-01

    This study examined the relationship between children's morphological awareness and their ability to produce accurate morphological derivations in writing. Fifth-grade US students (n = 175) completed two writing tasks that invited or required morphological manipulation of words. We examined both accuracy and error, specifically errors in…

  4. Understanding vs. Competency: The Case of Accuracy Checking Dispensed Medicines in Pharmacy

    ERIC Educational Resources Information Center

    James, K. Lynette; Davies, J. Graham; Kinchin, Ian; Patel, Jignesh P.; Whittlesea, Cate

    2010-01-01

    Ensuring the competence of healthcare professionals is core to undergraduate and post-graduate education. Undergraduate pharmacy students and pre-registration graduates are required to demonstrate competence at dispensing and accuracy checking medicines. However, competence differs from understanding. This study determined the competence and…

  5. Evaluation of Application Accuracy and Performance of a Hydraulically Operated Variable-Rate Aerial Application System

    Technology Transfer Automated Retrieval System (TEKTRAN)

    An aerial variable-rate application system consisting of a DGPS-based guidance system, automatic flow controller, and hydraulically controlled pump/valve was evaluated for response time to rapidly changing flow requirements and accuracy of application. Spray deposition position error was evaluated ...

  6. Screening Accuracy of Level 2 Autism Spectrum Disorder Rating Scales: A Review of Selected Instruments

    ERIC Educational Resources Information Center

    Norris, Megan; Lecavalier, Luc

    2010-01-01

    The goal of this review was to examine the state of Level 2, caregiver-completed rating scales for the screening of Autism Spectrum Disorders (ASDs) in individuals above the age of three years. We focused on screening accuracy and paid particular attention to comparison groups. Inclusion criteria required that scales be developed post ICD-10, be…

  7. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    12 Banks and Banking; NATIONAL CREDIT UNION ADMINISTRATION; REGULATIONS AFFECTING CREDIT UNIONS; ACCURACY OF ADVERTISING AND NOTICE OF INSURED STATUS; § 740.2 Accuracy of advertising. No insured credit union may use...

  8. Nuclear Data Target Accuracies for Generation-IV Systems Based on the use of New Covariance Data

    SciTech Connect

    G. Palmiotti; M. Salvatores; M. Assawaroongruengchot; M. Herman; P. Oblozinsky; C. Mattoon

    2010-04-01

    A target accuracy assessment using new available covariance data, the AFCI 1.2 covariance data, has been carried out. At the same time, the more theoretical issue of taking into account correlation terms in target accuracy assessment studies has been deeply investigated. The impact of correlation terms is very significant in target accuracy assessment evaluation and can produce very stringent requirements on nuclear data. For this type of study a broader energy group structure should be used, in order to smooth out requirements and provide better feedback information to evaluators and cross section measurement experts. The main difference in results between using BOLNA or AFCI 1.2 covariance data are related to minor actinides, minor Pu isotopes, structural materials (in particular Fe56), and coolant isotopes (Na23) accuracy requirements.

  9. Nuclear Data Target Accuracies for Generation-IV Systems Based on the Use of New Covariance Data

    SciTech Connect

    Palmiotti, G.; Herman, M.; Palmiotti,G.; Assawaroongruengchot,M.; Salvatores,M.; Herman,M.; Oblozinsky,P.; Mattoon,C.; Pigni,M.

    2011-08-01

    A target accuracy assessment using newly available covariance data, the AFCI 1.2 covariance data, has been carried out. At the same time, the more theoretical issue of taking into account correlation terms in target accuracy assessment studies has been deeply investigated. The impact of correlation terms is very significant in target accuracy assessment evaluation and can produce very stringent requirements on nuclear data. For this type of study a broader energy group structure should be used, in order to smooth out requirements and provide better feedback information to evaluators and cross section measurement experts. The main differences in results between using BOLNA and AFCI 1.2 covariance data are related to minor actinides, minor Pu isotopes, structural materials (in particular Fe56), and coolant isotopes (Na23) accuracy requirements.

  10. Geographic stacking: Decision fusion to increase global land cover map accuracy

    NASA Astrophysics Data System (ADS)

    Clinton, Nicholas; Yu, Le; Gong, Peng

    2015-05-01

    The combination of multiple classifier outputs is an established sub-discipline in data mining, referred to as "stacking," "ensemble classification," or "meta-learning." Here we describe how stacking of geographically allocated classifications can create a map composite of higher accuracy than any of the individual classifiers. We used both voting algorithms and trainable classifiers with a set of validation data to combine individual land cover maps. We describe the generality of this setup in terms of existing algorithms and accuracy assessment procedures. This method has the advantage of not requiring posterior probabilities or level of support for predicted class labels. We demonstrate the technique using Landsat based, 30-meter land cover maps, the highest resolution, globally available product of this kind. We used globally distributed validation samples to composite the maps and compute accuracy. We show that geographic stacking can improve individual map accuracy by up to 6.6%. The voting methods can also achieve higher accuracy than the best of the input classifications. Accuracies from different classifiers, input data, and output type are compared. The results are illustrated on a Landsat scene in California, USA. The compositing technique described here has broad applicability in remote sensing based map production and geographic classification.
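
    As a concrete illustration of the voting approach, the sketch below performs per-pixel plurality voting over a few co-registered label maps. It is a minimal, hypothetical Python example, not the authors' processing chain; the class codes and array shapes are assumed.

        import numpy as np

        def majority_vote(maps):
            """Per-pixel plurality vote over a list of co-registered label maps.

            maps: list of 2-D integer arrays with identical shape and class coding.
            Returns a composite map holding the most frequent label at each pixel.
            """
            stack = np.stack(maps, axis=0)            # shape: (n_maps, rows, cols)
            n_classes = stack.max() + 1
            # Count votes per class at every pixel, then take the winning class.
            counts = np.stack([(stack == c).sum(axis=0) for c in range(n_classes)])
            return counts.argmax(axis=0)

        # Toy example: three 2x2 classifications of the same area.
        composite = majority_vote([np.array([[1, 2], [3, 1]]),
                                   np.array([[1, 2], [2, 1]]),
                                   np.array([[1, 1], [2, 2]])])
        print(composite)   # [[1 2] [2 1]]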

  11. The accuracy of portable peak flow meters.

    PubMed Central

    Miller, M R; Dickinson, S A; Hitchings, D J

    1992-01-01

    BACKGROUND: The variability of peak expiratory flow (PEF) is now commonly used in the diagnosis and management of asthma. It is essential for PEF meters to have a linear response in order to obtain an unbiased measurement of PEF variability. As the accuracy and linearity of portable PEF meters have not been rigorously tested in recent years this aspect of their performance has been investigated. METHODS: The response of several portable PEF meters was tested with absolute standards of flow generated by a computer driven, servo controlled pump and their response was compared with that of a pneumotachograph. RESULTS: For each device tested the readings were highly repeatable to within the limits of accuracy with which the pointer position can be assessed by eye. The between instrument variation in reading for six identical devices expressed as a 95% confidence limit was, on average across the range of flows, +/- 8.5 l/min for the Mini-Wright, +/- 7.9 l/min for the Vitalograph, and +/- 6.4 l/min for the Ferraris. PEF meters based on the Wright meter all had similar error profiles with overreading of up to 80 l/min in the mid flow range from 300 to 500 l/min. This overreading was greatest for the Mini-Wright and Ferraris devices, and less so for the original Wright and Vitalograph meters. A Micro-Medical Turbine meter was accurate up to 400 l/min and then began to underread by up to 60 l/min at 720 l/min. For the low range devices the Vitalograph device was accurate to within 10 l/min up to 200 l/min, with the Mini-Wright overreading by up to 30 l/min above 150 l/min. CONCLUSION: Although the Mini-Wright, Ferraris, and Vitalograph meters gave remarkably repeatable results their error profiles for the full range meters will lead to important errors in recording PEF variability. This may lead to incorrect diagnosis and bias in implementing strategies of asthma treatment based on PEF measurement. PMID:1465746

  12. Accuracy of commercial geocoding: assessment and implications

    PubMed Central

    Whitsel, Eric A; Quibrera, P Miguel; Smith, Richard L; Catellier, Diane J; Liao, Duanping; Henley, Amanda C; Heiss, Gerardo

    2006-01-01

    Background Published studies of geocoding accuracy often focus on a single geographic area, address source or vendor, do not adjust accuracy measures for address characteristics, and do not examine effects of inaccuracy on exposure measures. We addressed these issues in a Women's Health Initiative ancillary study, the Environmental Epidemiology of Arrhythmogenesis in WHI. Results Addresses in 49 U.S. states (n = 3,615) with established coordinates were geocoded by four vendors (A-D). There were important differences among vendors in address match rate (98%; 82%; 81%; 30%), concordance between established and vendor-assigned census tracts (85%; 88%; 87%; 98%) and distance between established and vendor-assigned coordinates (mean ρ [meters]: 1809; 748; 704; 228). Mean ρ was lowest among street-matched, complete, zip-coded, unedited and urban addresses, and addresses with North American Datum of 1983 or World Geodetic System of 1984 coordinates. In mixed models restricted to vendors with minimally acceptable match rates (A-C) and adjusted for address characteristics, within-address correlation, and among-vendor heteroscedasticity of ρ, differences in mean ρ were small for street-type matches (280; 268; 275), i.e. likely to bias results relying on them about equally for most applications. In contrast, differences between centroid-type matches were substantial in some vendor contrasts, but not others (5497; 4303; 4210) pinteraction < 10-4, i.e. more likely to bias results differently in many applications. The adjusted odds of an address match was higher for vendor A versus C (odds ratio = 66, 95% confidence interval: 47, 93), but not B versus C (OR = 1.1, 95% CI: 0.9, 1.3). That of census tract concordance was no higher for vendor A versus C (OR = 1.0, 95% CI: 0.9, 1.2) or B versus C (OR = 1.1, 95% CI: 0.9, 1.3). Misclassification of a related exposure measure – distance to the nearest highway – increased with mean ρ and in the absence of confounding, non
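
    The positional error ρ reported above is a distance between the established and the vendor-assigned coordinates. A minimal sketch of that computation with the haversine formula follows; the spherical Earth radius is an approximation and the coordinates are invented for illustration.

        import math

        def haversine_m(lat1, lon1, lat2, lon2, radius_m=6_371_000.0):
            """Great-circle distance in meters between two lat/lon points (degrees)."""
            phi1, phi2 = math.radians(lat1), math.radians(lat2)
            dphi = math.radians(lat2 - lat1)
            dlmb = math.radians(lon2 - lon1)
            a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
            return 2 * radius_m * math.asin(math.sqrt(a))

        # Hypothetical established vs. vendor-assigned coordinates for one address.
        rho = haversine_m(35.7796, -78.6382, 35.7810, -78.6400)
        print(f"positional error: {rho:.0f} m")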

  13. Accuracy of clinical diagnosis in knee arthroscopy.

    PubMed Central

    Brooks, Stuart; Morgan, Mamdouh

    2002-01-01

    A prospective study of 238 patients was performed in a district general hospital to assess current diagnostic accuracy rates and to ascertain the use and the effectiveness of magnetic resonance imaging (MRI) scanning in reducing the number of negative arthroscopies. The pre-operative diagnosis of patients listed for knee arthroscopy was medial meniscus tear 94 (40%) and osteoarthritis 59 (25%). MRI scans were requested in 57 patients (24%) with medial meniscus tear representing 65% (37 patients). The correlation study was done between pre-operative diagnosis, MRI and arthroscopic diagnosis. Clinical diagnosis was as accurate as the MRI with 79% agreement between the preoperative diagnosis and arthroscopy compared to 77% agreement between MRI scan and arthroscopy. There was no evidence, in this study, that MRI scan can reduce the number of negative arthroscopies. Four normal MRI scans had positive arthroscopic diagnosis (two torn medial meniscus, one torn lateral meniscus and one chondromalacia patella). Out of 240 arthroscopies, there were only 10 normal knees (negative arthroscopy) representing 4% of the total number of knee arthroscopies; one patient of those 10 cases had MRI scan with ACL rupture diagnosis. PMID:12215031

  14. Improving DNA sequencing accuracy and throughput

    SciTech Connect

    Nelson, D.O. |

    1996-12-31

    LLNL is beginning to explore statistical approaches to the problem of determining the DNA sequence underlying data obtained from fluorescence-based gel electrophoresis. The features of this problem that make it interesting to statisticians include: (1) the underlying mechanics of electrophoresis is quite complex and still not completely understood; (2) the yield of fragments of any given size can be quite small and variable; (3) the mobility of fragments of a given size can depend on the terminating base; (4) the data consists of samples from one or more continuous, non-stationary signals; (5) boundaries between segments generated by distinct elements of the underlying sequence are ill-defined or nonexistent in the signal; and (6) the sampling rate of the signal greatly exceeds the rate of evolution of the underlying discrete sequence. Current approaches to base calling address only some of these issues, and usually in a heuristic, ad hoc way. In this article we describe some of our initial efforts towards increasing base calling accuracy and throughput by providing a rational, statistical foundation to the process of deducing sequence from signal. 31 refs., 12 figs.

  15. High accuracy wall thickness loss monitoring

    NASA Astrophysics Data System (ADS)

    Gajdacsi, Attila; Cegla, Frederic

    2014-02-01

    Ultrasonic inspection of wall thickness in pipes is a standard technique applied widely in the petrochemical industry. The potential precision of repeat measurements with permanently installed ultrasonic sensors however significantly surpasses that of handheld sensors as uncertainties associated with coupling fluids and positional offsets are eliminated. With permanently installed sensors the precise evaluation of very small wall loss rates becomes feasible in a matter of hours. The improved accuracy and speed of wall loss rate measurements can be used to evaluate and develop more effective mitigation strategies. This paper presents an overview of factors causing variability in the ultrasonic measurements which are then systematically addressed and an experimental setup with the best achievable stability based on these considerations is presented. In the experimental setup galvanic corrosion is used to induce predictable and very small wall thickness loss. Furthermore, it is shown that the experimental measurements can be used to assess the effect of reduced wall loss that is produced by the injection of corrosion inhibitor. The measurements show an estimated standard deviation of about 20nm, which in turn allows us to evaluate the effect and behaviour of corrosion inhibitors within less than an hour.
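
    For context, permanently installed pulse-echo sensors convert the round-trip time of flight of an ultrasonic pulse into wall thickness. The sketch below shows only this basic conversion; the sound velocity and timing values are assumed for illustration and are not the paper's experimental parameters.

        def wall_thickness_mm(time_of_flight_s, velocity_m_per_s=5900.0):
            """Pulse-echo thickness: the pulse crosses the wall twice, so divide by 2.

            velocity_m_per_s: assumed longitudinal sound speed in carbon steel (~5900 m/s).
            """
            return velocity_m_per_s * time_of_flight_s / 2.0 * 1000.0

        # Two hypothetical time-of-flight readings taken a few hours apart.
        t0, t1 = 3.390e-6, 3.389e-6                     # seconds
        loss_um = (wall_thickness_mm(t0) - wall_thickness_mm(t1)) * 1000.0
        print(f"thickness: {wall_thickness_mm(t0):.4f} mm, wall loss: {loss_um:.2f} um")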

  16. The accuracy of a voice vote

    PubMed Central

    Titze, Ingo R.; Palaparthi, Anil

    2014-01-01

    The accuracy of a voice vote was addressed by systematically varying group size, individual voter loudness, and words that are typically used to express agreement or disagreement. Five judges rated the loudness of two competing groups in A-B comparison tasks. Acoustic analysis was performed to determine the sound energy level of each word uttered by each group. Results showed that individual voter differences in energy level can grossly alter group loudness and bias the vote. Unless some control is imposed on the sound level of individual voters, it is difficult to establish even a two-thirds majority, much less a simple majority. There is no symmetry in the bias created by unequal sound production of individuals. Soft voices do not bias the group loudness much, but loud voices do. The phonetic balance of the two words chosen (e.g., “yea” and “nay” as opposed to “aye” and “no”) seems to be less of an issue. PMID:24437776
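
    The group loudness compared by the judges is governed by incoherent summation of the individual voters' acoustic energy, which is why a single loud voice can outweigh many average ones. A minimal sketch of that summation follows; the voter levels in dB are invented for illustration.

        import math

        def combined_level_db(individual_levels_db):
            """Energy sum of incoherent sources: L = 10*log10(sum(10^(Li/10)))."""
            return 10.0 * math.log10(sum(10.0 ** (l / 10.0) for l in individual_levels_db))

        yeas = [70.0] * 20                 # twenty voters at a typical 70 dB
        nays = [70.0] * 12 + [88.0]        # twelve typical voters plus one very loud one
        print(f"yea group: {combined_level_db(yeas):.1f} dB")   # ~83.0 dB
        print(f"nay group: {combined_level_db(nays):.1f} dB")   # ~88.8 dB, the minority wins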

  17. Time and position accuracy using codeless GPS

    NASA Technical Reports Server (NTRS)

    Dunn, C. E.; Jefferson, D. C.; Lichten, S. M.; Thomas, J. B.; Vigue, Y.; Young, L. E.

    1994-01-01

    The Global Positioning System has allowed scientists and engineers to make measurements having accuracy far beyond the original 15 meter goal of the system. Using global networks of P-Code capable receivers and extensive post-processing, geodesists have achieved baseline precision of a few parts per billion, and clock offsets have been measured at the nanosecond level over intercontinental distances. A cloud hangs over this picture, however. The Department of Defense plans to encrypt the P-Code (called Anti-Spoofing, or AS) in the fall of 1993. After this event, geodetic and time measurements will have to be made using codeless GPS receivers. However, there appears to be a silver lining to the cloud. In response to the anticipated encryption of the P-Code, the geodetic and GPS receiver community has developed some remarkably effective means of coping with AS without classified information. We will discuss various codeless techniques currently available and the data noise resulting from each. We will review some geodetic results obtained using only codeless data, and discuss the implications for time measurements. Finally, we will present the status of GPS research at JPL in relation to codeless clock measurements.

  18. Quantitative evaluation of three-dimensional facial scanners measurement accuracy for facial deformity

    NASA Astrophysics Data System (ADS)

    Zhao, Yi-jiao; Xiong, Yu-xue; Sun, Yu-chun; Yang, Hui-fang; Lyu, Pei-jun; Wang, Yong

    2015-07-01

    Objective: To evaluate the measurement accuracy of three-dimensional (3D) facial scanners for facial deformity patients from the oral clinic. Methods: 10 patients with different types of facial deformity from the oral clinic were included. Three 3D digital face models for each patient were obtained by three facial scanners separately (a line laser scanner from Faro for reference, a stereophotography scanner from 3dMD and a structured light scanner from FaceScan for test). For each patient, registration based on the Iterative Closest Point (ICP) algorithm was executed to align the two test models (3dMD data and FaceScan data) to the reference model (high-accuracy Faro data). The same boundaries on each pair of models (one test and one reference model) were obtained with the projection function in Geomagic Studio 2012 software to trim the overlapping region; the 3D average measurement errors (3D errors) were then calculated for each pair of models with the same software. Paired t-test analysis was adopted to compare the 3D errors of the two test facial scanners (10 data points per group). Finally, the 3D profile measurement accuracy (3D accuracy) of each test scanner was summarized as the mean and standard deviation of the 10 patients' 3D errors. Results: The 3D accuracies of the two test facial scanners for facial deformity in this study were 0.44+/-0.08 mm and 0.43+/-0.05 mm. The result of the structured light scanner was slightly better than that of the stereophotography scanner, with no statistically significant difference between them. Conclusions: Both test facial scanners could meet the accuracy requirement (0.5 mm) of 3D facial data acquisition for oral clinic facial deformity patients in this study. Their practical measurement accuracies were all slightly lower than their nominal accuracies.
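
    After ICP alignment, the 3D error is essentially an average distance from each test-scan point to the reference scan. The sketch below uses a nearest-neighbour point-to-point distance as a simplified stand-in for the software's point-to-surface measurement; the point clouds are random placeholders rather than real scan data.

        import numpy as np
        from scipy.spatial import cKDTree

        def mean_3d_error(test_points, reference_points):
            """Mean nearest-neighbour distance (mm) from the test scan to the reference scan."""
            tree = cKDTree(reference_points)
            distances, _ = tree.query(test_points)   # closest reference point per test point
            return distances.mean()

        # Placeholder clouds; real use would load the aligned 3dMD/FaceScan and Faro data.
        rng = np.random.default_rng(0)
        reference = rng.uniform(0, 100, size=(5000, 3))
        test = reference + rng.normal(0, 0.3, size=reference.shape)   # ~0.3 mm noise
        print(f"3D error: {mean_3d_error(test, reference):.2f} mm")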

  19. Understanding vs. competency: the case of accuracy checking dispensed medicines in pharmacy.

    PubMed

    James, K Lynette; Davies, J Graham; Kinchin, Ian; Patel, Jignesh P; Whittlesea, Cate

    2010-12-01

    Ensuring the competence of healthcare professionals is core to undergraduate and post-graduate education. Undergraduate pharmacy students and pre-registration graduates are required to demonstrate competence at dispensing and accuracy checking medicines. However, competence differs from understanding. This study determined the competence and understanding of undergraduate students and pharmacists at accuracy checking dispensed medicines. Third year undergraduate pharmacy students and first year post-graduate diploma pharmacists participated in the study, which involved an accuracy checking task and concept mapping exercise. Participants accuracy checked eight medicines which contained 13 dispensing errors and then constructed a concept map illustrating their understanding of the accuracy checking process. The error detection rates and types of dispensing errors detected by undergraduates and pharmacists were compared using Mann-Whitney and chi-square, respectively. Statistical significance was p ≤ 0.05. Concept maps were qualitatively analysed to identify structural typologies. Forty-one undergraduates and 78 pharmacists participated in the study. Pharmacists detected significantly more dispensing errors (85%) compared to the undergraduates (77%, p ≤ 0.001). Only one undergraduate and seven pharmacists detected all dispensing errors. The majority of concept maps were chains (undergraduates = 46%, n = 19; pharmacists = 45%, n = 35) and spokes (undergraduates = 54%, n = 22; pharmacists = 54%, n = 42) indicating surface learning. One pharmacist, who detected all dispensing errors in the accuracy checking exercise, created a networked map characteristic of deep learning. Undergraduate students and pharmacists demonstrated a degree of operational competence at detecting dispensing errors without fully understanding the accuracy checking process. Accuracy checking training should be improved at undergraduate and post-graduate level so that pharmacists are equipped
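
    The two statistical comparisons named above map directly onto standard SciPy routines. The sketch below only demonstrates the calls; the counts are invented and are not the study's data.

        import numpy as np
        from scipy.stats import mannwhitneyu, chi2_contingency

        # Hypothetical per-participant error-detection counts (out of 13 seeded errors).
        undergrads  = np.array([10, 9, 11, 10, 8, 12, 10, 9])
        pharmacists = np.array([11, 12, 11, 13, 12, 10, 11, 12])

        u_stat, p_u = mannwhitneyu(undergrads, pharmacists, alternative="two-sided")
        print(f"Mann-Whitney U = {u_stat:.1f}, p = {p_u:.3f}")

        # Hypothetical contingency table of detected error types (rows: group, cols: type).
        table = np.array([[40, 25, 15],
                          [70, 30, 20]])
        chi2, p_c, dof, _ = chi2_contingency(table)
        print(f"chi-square = {chi2:.2f}, dof = {dof}, p = {p_c:.3f}")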

  20. The Good Judge of Personality: Characteristics, Behaviors, and Observer Accuracy

    PubMed Central

    Letzring, Tera D.

    2008-01-01

    Personality characteristics and behaviors related to judgmental accuracy following unstructured interactions among previously unacquainted triads were examined. Judgmental accuracy was related to social skill, agreeableness, and adjustment. Accuracy of observers of the interactions was positively related to the number of good judges in the interaction, which implies that the personality and behaviors of the judge are important for creating a situation in which targets will reveal relevant personality cues. Furthermore, the finding that observer accuracy was positively related to the number of good judge partners suggests that judgmental accuracy is based on more than detection and utilization skills of the judge. PMID:19649134

  1. Accuracy of schemes with nonuniform meshes for compressible fluid flows

    NASA Technical Reports Server (NTRS)

    Turkel, E.

    1985-01-01

    The accuracy of the space discretization for time-dependent problems when a nonuniform mesh is used is considered. Many schemes reduce to first-order accuracy while a popular finite volume scheme is even inconsistent for general grids. This accuracy is based on physical variables. However, when accuracy is measured in computational variables then second-order accuracy can be obtained. This is meaningful only if the mesh accurately reflects the properties of the solution. In addition, the stability properties of some improved accurate schemes are analyzed and it can be shown that they also allow for larger time steps when Runge-Kutta type methods are used to advance in time.
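
    A quick numerical check, not taken from the paper, illustrates the point about physical versus computational variables: the centred difference keeps second-order accuracy when the nonuniform grid is a smooth mapping of a uniform (computational) grid, but degrades toward first order when the spacing varies roughly.

        import numpy as np

        def observed_order(make_grid, f=np.sin, fp=np.cos, n_coarse=100):
            """Estimate the convergence order of the centred difference
            (u[i+1]-u[i-1])/(x[i+1]-x[i-1]) on grids produced by make_grid(n)."""
            errs = []
            for n in (n_coarse, 2 * n_coarse):
                x = make_grid(n)
                u = f(x)
                approx = (u[2:] - u[:-2]) / (x[2:] - x[:-2])
                errs.append(np.max(np.abs(approx - fp(x[1:-1]))))
            return np.log2(errs[0] / errs[1])

        smooth = lambda n: np.sin(0.5 * np.pi * np.linspace(0.0, 1.0, n))   # smooth stretching
        rng = np.random.default_rng(1)
        jittered = lambda n: np.sort(np.linspace(0.0, 1.0, n)
                                     + rng.uniform(-0.2, 0.2, n) / n)       # rough spacing

        print(f"smooth mapping : order ~ {observed_order(smooth):.2f}")    # close to 2
        print(f"random spacing : order ~ {observed_order(jittered):.2f}")  # closer to 1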

  2. Managing satellite pointing accuracy - A systems engineering approach

    NASA Astrophysics Data System (ADS)

    Marley, R.; Dungate, D. G.

    1992-02-01

    The accuracies with which the attitude of a satellite (notably the payload) must be controlled and measured influence the engineering of the Guidance, Navigation and Control (GNC) subsystem, payload and structure. They also drive requirements for ground-based calibration and attitude reconstruction software. By optimizing the allocation of margins to the various subsystems during the initial development phase, there is scope for improving the satellite design and reducing the cost, complexity, and development risk. This process, supported by dedicated software tools, can subsequently be iterated to update the design as the project matures. The performance at subsystem and system level, during later development phases, may be predicted in terms of component errors and compared with requirements. The scope of this paper is to describe how the system-level methods adopted in the ESA Handbook must be generalized to deal with diverse subsystems. Statistical methods for evaluating pointing and measurement performance are further developed, and the application of a software tool for design and validation is described.
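
    One common piece of bookkeeping in such a pointing budget is to sum known biases linearly and combine independent random contributors as a root-sum-square. The sketch below is a generic illustration of that arithmetic, not the ESA Handbook procedure or the software tool described in the paper, and the arcsecond values are invented.

        import math

        def pointing_budget(bias_terms_arcsec, random_terms_arcsec):
            """Combine error contributors: biases add linearly, independent
            zero-mean random terms combine as a root-sum-square."""
            bias = sum(abs(b) for b in bias_terms_arcsec)
            random = math.sqrt(sum(r ** 2 for r in random_terms_arcsec))
            return bias, random, bias + random

        # Hypothetical contributors (arcsec): sensor alignment and thermal drift biases;
        # star-tracker noise, gyro noise and controller jitter as random terms.
        bias, random, total = pointing_budget([2.0, 1.5], [3.0, 1.0, 2.5])
        print(f"bias {bias:.1f}, random (RSS) {random:.1f}, combined {total:.1f} arcsec")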

  3. Stabilized high-accuracy optical tracking system (SHOTS)

    NASA Astrophysics Data System (ADS)

    Ruffatto, Donald; Brown, H. Donald; Pohle, Richard H.; Reiley, Michael F.; Haddock, Delmar D.

    2001-08-01

    This paper describes an 0.75 meter aperture, Stabilized High-accuracy Optical Tracking System (SHOTS), two of which are being developed by Textron Systems Corporation, under contract to the Navy's Space and Naval Warfare Systems Center, San Diego (SPAWAR-SD). The SHOTS design is optimized to meet the requirements of the Navy's Theater Ballistic Missile Defense (TBMD) testing program being conducted at the Kauai Pacific Missile Range Facility (PMRF). The SHOTS utilizes a high-precision, GPS aided inertial navigation unit (INU) coupled with a 3-axis, rate gyro stabilized mount which allows precision pointing to be achieved on either land or sea-based platforms. The SHOTS mount control system architecture, acquisition, tracking and pointing (ATP) functionality and methodology which allows the system to meet the TBMD mission data collection requirements are discussed. High frame rate visible and MWIR sensors are incorporated into the system design to provide the capability of capturing short duration events, e.g., missile-target intercepts. These sensors along with the supporting high speed data acquisition, recording and control subsystems are described. Simulations of the SHOTS imaging performance in TBMD measurement scenarios are presented along with an example of the image improvement being achieved with post-processing image reconstruction algorithms.

  4. Monitoring techniques for high accuracy interference fit assembly processes

    NASA Astrophysics Data System (ADS)

    Liuti, A.; Vedugo, F. Rodriguez; Paone, N.; Ungaro, C.

    2016-06-01

    In the automotive industry, there are many assembly processes that require a high geometric accuracy, in the micrometer range; generally open-loop controllers cannot meet these requirements. This results in an increased defect rate and high production costs. This paper presents an experimental study of the interference fit process, aimed at evaluating the aspects which have the most impact on the uncertainty in the final positioning. The press-fitting process considered consists of a press machine operating with a piezoelectric actuator to press a plug into a sleeve. Plug and sleeve are designed and machined to obtain a known interference fit. Differential displacement and velocity of the plug with respect to the sleeve are measured by a fiber optic differential laser Doppler vibrometer. Different driving signals of the piezo actuator give insight into the differences between a linear and a pulsating press action. The paper highlights how the press-fit assembly process is characterized by two main phases: the first is an elastic deformation of the plug and sleeve, which produces a reversible displacement; the second is a sliding of the plug with respect to the sleeve, which results in an irreversible displacement and finally realizes the assembly. The simultaneous measurement of displacement and force made it possible to define characteristic features in the signal useful for identifying the start of the irreversible movement. These indicators could be used to develop a control logic in a press assembly process.

  5. Tetroon evaluation program. [volume accuracies under superpressure

    NASA Technical Reports Server (NTRS)

    Beemer, J. D.; Markhardt, T. W.

    1977-01-01

    The actual volume of a constant volume superpressured tetrahedron shaped balloon changes as the amount of superpressure is changed. The experimental methods used to measure these changes in volume are described and results are presented. The basic equations used to determine the amount of inflation gas required for a tetroon to float at a predetermined flight level are presented and inflation techniques discussed.

  6. 40 CFR Table 7 to Subpart Hhhhhhh... - Calibration and Accuracy Requirements for Continuous Parameter Monitoring Systems

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... Liquid flow rate ±2 percent of the normal range of flow a. Every 12 months.b. You must select a measurement location where swirling flow or abnormal velocity distributions due to upstream and downstream disturbances at the point of measurement do not exist. 4. Gas flow rate ±5 percent of the flow rate or 10...

  7. 40 CFR Table 7 to Subpart Hhhhhhh... - Calibration and Accuracy Requirements for Continuous Parameter Monitoring Systems

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... CPMS for physical and operational integrity and all electrical connections for oxidation and galvanic... percent of normal range Every 12 months. 8. Pressure ±5 percent or 0.12 kilopascals (0.5 inches of water... for integrity, oxidation and galvanic corrosion if CPMS is not equipped with a redundant...

  8. Recommended documentation of evapotranspiration measurements and associated weather data and a review of requirements for accuracy

    Technology Transfer Automated Retrieval System (TEKTRAN)

    More and more evapotranspiration (ET) models, ET crop coefficients, and associated measurements of ET are reported in the literature. These measurements come from a range of measurement systems including lysimeters, eddy covariance, Bowen ratio, water balance (gravimetric, neutron meter, other soil ...

  9. Optimizing Tsunami Forecast Model Accuracy

    NASA Astrophysics Data System (ADS)

    Whitmore, P.; Nyland, D. L.; Huang, P. Y.

    2015-12-01

    Recent tsunamis provide a means to determine the accuracy that can be expected of real-time tsunami forecast models. Forecast accuracy using two different tsunami forecast models are compared for seven events since 2006 based on both real-time application and optimized, after-the-fact "forecasts". Lessons learned by comparing the forecast accuracy determined during an event to modified applications of the models after-the-fact provide improved methods for real-time forecasting for future events. Variables such as source definition, data assimilation, and model scaling factors are examined to optimize forecast accuracy. Forecast accuracy is also compared for direct forward modeling based on earthquake source parameters versus accuracy obtained by assimilating sea level data into the forecast model. Results show that including assimilated sea level data into the models increases accuracy by approximately 15% for the events examined.

  10. Wind Prediction Accuracy for Air Traffic Management Decision Support Tools

    NASA Technical Reports Server (NTRS)

    Cole, Rod; Green, Steve; Jardin, Matt; Schwartz, Barry; Benjamin, Stan

    2000-01-01

    The performance of Air Traffic Management and flight deck decision support tools depends in large part on the accuracy of the supporting 4D trajectory predictions. This is particularly relevant to conflict prediction and active advisories for the resolution of conflicts and the conformance with traffic-flow management flow-rate constraints (e.g., arrival metering / required time of arrival). Flight test results have indicated that wind prediction errors may represent the largest source of trajectory prediction error. The tests also discovered that relatively large errors (e.g., greater than 20 knots), existing in pockets of space and time critical to ATM DST performance (one or more sectors, greater than 20 minutes), are inadequately represented by the classic RMS aggregate prediction-accuracy studies of the past. To facilitate the identification and reduction of DST-critical wind-prediction errors, NASA has led a collaborative research and development activity with MIT Lincoln Laboratories and the Forecast Systems Lab of the National Oceanographic and Atmospheric Administration (NOAA). This activity, begun in 1996, has focused on the development of key metrics for ATM DST performance, assessment of wind-prediction skill for state of the art systems, and development/validation of system enhancements to improve skill. A 13 month study was conducted for the Denver Center airspace in 1997. Two complementary wind-prediction systems were analyzed and compared to the forecast performance of the then standard 60 km Rapid Update Cycle - version 1 (RUC-1). One system, developed by NOAA, was the prototype 40-km RUC-2 that became operational at NCEP in 1999. RUC-2 introduced a faster cycle (1 hr vs. 3 hr) and improved mesoscale physics. The second system, Augmented Winds (AW), is a prototype en route wind application developed by MITLL based on the Integrated Terminal Wind System (ITWS). AW is run at a local facility (Center) level, and updates RUC predictions based on an

  11. Kinematics of a striking task: accuracy and speed-accuracy considerations.

    PubMed

    Parrington, Lucy; Ball, Kevin; MacMahon, Clare

    2015-01-01

    Handballing in Australian football (AF) is the most efficient passing method, yet little research exists examining technical factors associated with accuracy. This study had three aims: (a) To explore the kinematic differences between accurate and inaccurate handballers, (b) to compare within-individual successful (hit target) and unsuccessful (missed target) handballs and (c) to assess handballing when both accuracy and speed of ball-travel were combined using a novel approach utilising canonical correlation analysis. Three-dimensional data were collected on 18 elite AF players who performed handballs towards a target. More accurate handballers exhibited a significantly straighter hand-path, slower elbow angular velocity and smaller elbow range of motion (ROM) compared to the inaccurate group. Successful handballs displayed significantly larger trunk ROM, maximum trunk rotation velocity and step-angle and smaller elbow ROM in comparison to the unsuccessful handballs. The canonical model explained 73% of variance shared between the variable sets, with a significant relationship found between hand-path, elbow ROM and maximum elbow angular velocity (predictors) and hand-speed and accuracy (dependent variables). Interestingly, not all parameters were the same across each of the analyses, with technical differences between inaccurate and accurate handballers different from those between successful and unsuccessful handballs in the within-individual analysis. PMID:25079111
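
    The canonical correlation step can be reproduced in outline with scikit-learn. The sketch below uses random stand-in data for the kinematic predictors and the speed/accuracy outcomes; it is not the study's dataset or exact model specification.

        import numpy as np
        from sklearn.cross_decomposition import CCA

        rng = np.random.default_rng(42)
        n = 18                                   # e.g., one observation per player
        X = rng.normal(size=(n, 3))              # predictors: hand-path, elbow ROM, elbow velocity
        Y = np.column_stack([X @ [0.6, -0.4, 0.3] + rng.normal(scale=0.5, size=n),   # hand speed
                             X @ [-0.2, 0.5, 0.1] + rng.normal(scale=0.5, size=n)])  # accuracy

        cca = CCA(n_components=1).fit(X, Y)
        U, V = cca.transform(X, Y)               # canonical variates
        r = np.corrcoef(U[:, 0], V[:, 0])[0, 1]
        print(f"first canonical correlation: {r:.2f}, shared variance: {r**2:.0%}")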

  12. Sensitivity and accuracy of whole effluent toxicity (WET) tests

    SciTech Connect

    Chapman, G.; Lussier, S.; Norberg-King, T.; Poucher, S.; Thursby, G.

    1995-12-31

    Direct measurement of effluent toxicity is a critical tool in controlling ambient toxicity. Lack of water quality criteria for many commonly discharged chemicals and complicated toxicological interactions in complex effluents are common. Complex effluent toxicity tests should provide limits as protective as National Criteria for single chemicals. One way to evaluate WET test sensitivity and accuracy is to compare WET test results with single chemicals to the National Criteria for these chemicals. A study of eight criteria chemicals (ammonia, aniline, cadmium, carbaryl, copper, lead, methyl parathion, and zinc), two freshwater WET tests (Pimephales and Ceriodaphnia), and four marine WET tests (Arbacia, Champia, Menidia, and Mysidopsis) was conducted to provide this comparison. The most sensitive of the freshwater and marine WET tests with each chemical were generally less protective than the National Final Chronic Value (FCV) concentration by factors ranging from 1.09 to 44. Less-sensitive WET tests with each chemical represented significant underestimation of chronic toxicity by factors often in the range of 100--10,000. In two instances WET test results (copper and Champia, zinc and Ceriodaphnia) were below FCV by factors of 1.85 and 3.4, respectively. It is recognized that Champia is extremely sensitive to copper and that Daphnids may not be protected by the current National zinc criterion, thus these results are not surprising. Adequate accuracy for WET tests usually requires that the most sensitive WET-test species be utilized. Although the bias of WET tests to underestimate chronic toxicity is relatively small, it should be considered in the selection of test endpoints and application of WET data.

  13. Accuracy evaluation of an X-ray microtomography system.

    PubMed

    Fernandes, Jaquiel S; Appoloni, Carlos R; Fernandes, Celso P

    2016-06-01

    Microstructural parameter evaluation of reservoir rocks is of great importance to petroleum production companies. In this connection, X-ray computed microtomography (μ-CT) has proven to be a quite useful method for the assessment of rocks, as it provides important microstructural parameters, such as porosity, permeability, pore size distribution and porous phase of the sample. X-ray computed microtomography is a non-destructive technique that enables the reuse of samples already measured and also yields 2-D cross-sectional images of the sample as well as volume rendering. This technique offers the additional advantages of not requiring sample preparation and of reducing the measurement time, which is approximately one to three hours, depending on the spatial resolution used. Although this technique is extensively used, accuracy verification of measurements is hard to obtain because the existing calibrated samples (phantoms) have large volumes and are assessed in medical CT scanners with millimeter spatial resolution. Accordingly, this study aims to determine the accuracy of an X-ray computed microtomography system using a Skyscan 1172 X-ray microtomograph. To accomplish this investigation, a set of nylon threads of known diameter inserted into a glass tube was used. The results for porosity size and phase distribution by X-ray microtomography were very close to the geometrically calculated values. The geometrically calculated porosity and the porosity determined by the methodology using the μ-CT were 33.4±3.4% and 31.0±0.3%, respectively. The outcome of this investigation was excellent. A small variability was also observed in the results across all 401 sections of the analyzed image. Minimum and maximum porosity values between the cross sections were 30.9% and 31.1%, respectively. A 3-D image representing the actual structure of the sample was also rendered from the 2-D images. PMID:27064197
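
    The geometric reference porosity of such a phantom follows directly from cross-sectional areas: the pore fraction is the part of the tube's inner bore not occupied by nylon. The dimensions below are invented purely to show the calculation, not the actual phantom geometry.

        import math

        def phantom_porosity(tube_inner_diameter_mm, thread_diameter_mm, n_threads):
            """Geometric porosity of a tube packed with n solid threads (2-D cross section)."""
            tube_area = math.pi * (tube_inner_diameter_mm / 2.0) ** 2
            thread_area = n_threads * math.pi * (thread_diameter_mm / 2.0) ** 2
            return 1.0 - thread_area / tube_area

        # Hypothetical dimensions chosen only to show the calculation.
        print(f"geometric porosity: {phantom_porosity(4.0, 0.5, 43):.1%}")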

  14. Accuracy of genome-enabled prediction exploring purebred and crossbred pig populations.

    PubMed

    Veroneze, R; Lopes, M S; Hidalgo, A M; Guimarães, S E F; Silva, F F; Harlizius, B; Lopes, P S; Knol, E F; M van Arendonk, J A; Bastiaansen, J W M

    2015-10-01

    Pig breeding companies keep relatively small populations of pure sire and dam lines that are selected to improve the performance of crossbred animals. This design of the pig breeding industry presents challenges to the implementation of genomic selection, which requires large data sets to obtain highly accurate genomic breeding values. The objective of this study was to evaluate the impact of different reference sets (across population and multipopulation) on the accuracy of genomic breeding values in 3 purebred pig populations and to assess the potential of using crossbred performance in genomic prediction. Data consisted of phenotypes and genotypes on animals from 3 purebred populations (sire line [SL] 1, n = 1,146; SL2, n = 682; and SL3, n = 1,264) and 3 crossbred pig populations (Terminal cross [TER] 1, n = 183; TER2, n = 106; and TER3, n = 177). Animals were genotyped using the Illumina Porcine SNP60 Beadchip. For each purebred population, within-, across-, and multipopulation predictions were considered. In addition, data from the paternal purebred populations were used as a reference set to predict the performance of crossbred animals. Backfat thickness phenotypes were precorrected for fixed effects and subsequently included in the genomic BLUP model. A genomic relationship matrix that accounted for the differences in allele frequencies between lines was implemented. Accuracies of genomic EBV obtained within the 3 different sire lines varied considerably. For within-population prediction, SL1 showed higher values (0.80) than SL2 (0.61) and SL3 (0.67). Multipopulation predictions had accuracies similar to within-population accuracies for the validation in SL1. For SL2 and SL3, the accuracies of multipopulation prediction were similar to the within-population prediction when the reference set was composed of 900 animals (600 of the target line plus 300 of another line). For across-population predictions, the accuracy was mostly close to zero. The accuracies of predicting
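
    A standard genomic relationship matrix (VanRaden's first method) can be sketched as follows. The study used a matrix adjusted for line-specific allele frequencies, so this pooled-frequency version is only a simplified stand-in, and the genotype matrix below is a random placeholder.

        import numpy as np

        def vanraden_g(genotypes):
            """Genomic relationship matrix, VanRaden method 1.

            genotypes: (n_animals, n_snps) array of 0/1/2 allele counts.
            Note: pooled allele frequencies are used here for simplicity, not the
            line-specific adjustment described in the study.
            """
            p = genotypes.mean(axis=0) / 2.0                 # allele frequencies
            z = genotypes - 2.0 * p                          # center each SNP by 2p
            return z @ z.T / (2.0 * np.sum(p * (1.0 - p)))

        # Tiny random stand-in for an Illumina Porcine SNP60 genotype matrix.
        rng = np.random.default_rng(0)
        M = rng.binomial(2, 0.5, size=(10, 500)).astype(float)
        G = vanraden_g(M)
        print(G.shape, G.diagonal().round(2))                # diagonal entries scatter around 1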

  15. Required Reading

    ERIC Educational Resources Information Center

    Janko, Edmund

    2002-01-01

    In this article, the author insists that those seeking public office prove their literary mettle. As an English teacher, he does have a litmus test for all public officials, judges and senators included--a reading litmus test. He would require that all candidates and nominees have read and reflected on a nucleus of works whose ideas and insights…

  16. The measurement of linear and angular displacements in prototype aircraft - Instrumentation, calibration and operational accuracy

    NASA Astrophysics Data System (ADS)

    Storm van Leeuwen, Sam

    The design and development of angular displacement transducers for flight test instrumentation systems are considered. Calibration tools, developed to meet the accuracy requirements, allowed in situ calibration with short turn around times. The design of the control surface deflection measurement channels for the Fokker 100 prototype aircraft is discussed in detail. It is demonstrated that a bellows coupling provides accurate results, and that the levers and push-pull rod drive mechanisms perform well. The results suggest that a complex mechanical drive mechanism reduces the system accuracy.

  17. Accuracy analysis of the space shuttle solid rocket motor profile measuring device

    NASA Technical Reports Server (NTRS)

    Estler, W. Tyler

    1989-01-01

    The Profile Measuring Device (PMD) was developed at the George C. Marshall Space Flight Center following the loss of the Space Shuttle Challenger. It is a rotating gauge used to measure the absolute diameters of mating features of redesigned Solid Rocket Motor field joints. Diameter tolerances of these features are typically + or - 0.005 inches, and it is required that the PMD absolute measurement uncertainty be within this tolerance. In this analysis, the absolute accuracy of these measurements was found to be + or - 0.00375 inches, worst case, with a potential accuracy of + or - 0.0021 inches achievable by improved temperature control.

  18. Algorithm for recognition and measurement position of pitches on invar scale with submicron accuracy

    NASA Astrophysics Data System (ADS)

    Lashmanov, Oleg; Korotaev, Valery

    2015-05-01

    High-precision optical encoders are used in many high-end computerized numerical control machines. The main requirements for such systems are accuracy and measurement time; therefore, image processing is often performed on an FPGA or DSP. This article describes an image processing algorithm for detecting pitches on an invar scale and measuring their positions, which can be easily implemented on the specified target hardware. The paper proposes a one-dimensional approach for recognizing a pitch and measuring its position in the image. This algorithm is well suited for implementation on an FPGA or DSP and provides an accuracy of 0.07 pixel.
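
    Sub-pixel localisation of a pitch in a one-dimensional intensity profile is often done with a three-point parabolic fit around the strongest response. The sketch below illustrates that general idea rather than the authors' exact algorithm; reaching the quoted 0.07-pixel level in practice also depends on noise, optics and calibration.

        import numpy as np

        def subpixel_peak(profile):
            """Locate the peak of a 1-D profile with three-point parabolic interpolation.

            Returns a (possibly fractional) pixel coordinate.
            """
            i = int(np.argmax(profile))
            if i == 0 or i == len(profile) - 1:
                return float(i)                              # cannot interpolate at the border
            y0, y1, y2 = profile[i - 1], profile[i], profile[i + 1]
            delta = 0.5 * (y0 - y2) / (y0 - 2.0 * y1 + y2)   # vertex of the fitted parabola
            return i + delta

        # Synthetic pitch: a Gaussian line centred at 40.3 px plus a little noise.
        x = np.arange(80, dtype=float)
        rng = np.random.default_rng(3)
        profile = np.exp(-0.5 * ((x - 40.3) / 2.0) ** 2) + rng.normal(0, 0.01, x.size)
        print(f"estimated pitch position: {subpixel_peak(profile):.2f} px")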

  19. Effect of atmospherics on beamforming accuracy

    NASA Technical Reports Server (NTRS)

    Alexander, Richard M.

    1990-01-01

    Two mathematical representations of noise due to atmospheric turbulence are presented. These representations are derived and used in computer simulations of the Bartlett Estimate implementation of beamforming. Beamforming is an array processing technique employing an array of acoustic sensors used to determine the bearing of an acoustic source. Atmospheric wind conditions introduce noise into the beamformer output. Consequently, the accuracy of the process is degraded and the bearing of the acoustic source is falsely indicated or impossible to determine. The two representations of noise presented here are intended to quantify the effects of mean wind passing over the array of sensors and to correct for these effects. The first noise model is an idealized case. The effect of the mean wind is incorporated as a change in the propagation velocity of the acoustic wave. This yields an effective phase shift applied to each term of the spatial correlation matrix in the Bartlett Estimate. The resultant error caused by this model can be corrected in closed form in the beamforming algorithm. The second noise model acts to change the true direction of propagation at the beginning of the beamforming process. A closed form correction for this model is not available. Efforts to derive effective means to reduce the contributions of the noise have not been successful. In either case, the maximum error introduced by the wind is a beam shift of approximately three degrees. That is, the bearing of the acoustic source is indicated at a point a few degrees from the true bearing location. These effects are not quite as pronounced as those seen in experimental results. Sidelobes are false indications of acoustic sources in the beamformer output away from the true bearing angle. The sidelobes that are observed in experimental results are not caused by these noise models. The effects of mean wind passing over the sensor array as modeled here do not alter the beamformer output as
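
    The Bartlett Estimate evaluates the array's spatial correlation matrix against steering vectors over candidate bearings and takes the maximum. A minimal narrowband sketch for a uniform linear array follows; the array geometry, frequency and sound speed are assumed for illustration, and no wind-noise model is included.

        import numpy as np

        def bartlett_spectrum(R, n_sensors, spacing_m, freq_hz, c_m_s, angles_deg):
            """Bartlett power P(theta) = a(theta)^H R a(theta) for a uniform linear array."""
            k = 2.0 * np.pi * freq_hz / c_m_s                          # acoustic wavenumber
            idx = np.arange(n_sensors)
            powers = []
            for theta in np.radians(angles_deg):
                a = np.exp(1j * k * spacing_m * idx * np.sin(theta))   # steering vector
                powers.append(np.real(a.conj() @ R @ a))
            return np.array(powers)

        # Simulate a plane wave arriving from 20 degrees on an 8-sensor array.
        n, d, f, c = 8, 0.5, 300.0, 343.0
        a_true = np.exp(1j * 2 * np.pi * f / c * d * np.arange(n) * np.sin(np.radians(20.0)))
        R = np.outer(a_true, a_true.conj()) + 0.1 * np.eye(n)          # signal plus a noise floor
        angles = np.arange(-90, 91)
        print("estimated bearing:",
              angles[np.argmax(bartlett_spectrum(R, n, d, f, c, angles))], "deg")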

  20. Accuracy of quantitative visual soil assessment

    NASA Astrophysics Data System (ADS)

    van Leeuwen, Maricke; Heuvelink, Gerard; Stoorvogel, Jetse; Wallinga, Jakob; de Boer, Imke; van Dam, Jos; van Essen, Everhard; Moolenaar, Simon; Verhoeven, Frank; Stoof, Cathelijne

    2016-04-01

    Visual soil assessment (VSA) is a method to assess soil quality visually, when standing in the field. VSA is increasingly used by farmers, farm organisations and companies, because it is rapid and cost-effective, and because looking at soil provides understanding about soil functioning. Often VSA is regarded as subjective, so there is a need to verify VSA. Also, many VSAs have not been fine-tuned for contrasting soil types. This could lead to wrong interpretation of soil quality and soil functioning when contrasting sites are compared to each other. We wanted to assess accuracy of VSA, while taking into account soil type. The first objective was to test whether quantitative visual field observations, which form the basis in many VSAs, could be validated with standardized field or laboratory measurements. The second objective was to assess whether quantitative visual field observations are reproducible, when used by observers with contrasting backgrounds. For the validation study, we made quantitative visual observations at 26 cattle farms. Farms were located at sand, clay and peat soils in the North Friesian Woodlands, the Netherlands. Quantitative visual observations evaluated were grass cover, number of biopores, number of roots, soil colour, soil structure, number of earthworms, number of gley mottles and soil compaction. Linear regression analysis showed that four out of eight quantitative visual observations could be well validated with standardized field or laboratory measurements. The following quantitative visual observations correlated well with standardized field or laboratory measurements: grass cover with classified images of surface cover; number of roots with root dry weight; amount of large structure elements with mean weight diameter; and soil colour with soil organic matter content. Correlation coefficients were greater than 0.3, from which half of the correlations were significant. For the reproducibility study, a group of 9 soil scientists and 7

  1. Testing the accuracy of synthetic stellar libraries

    NASA Astrophysics Data System (ADS)

    Martins, Lucimara P.; Coelho, Paula

    2007-11-01

    One of the main ingredients of stellar population synthesis models is a library of stellar spectra. Both empirical and theoretical libraries are used for this purpose, and the question about which one is preferable is still debated in the literature. Empirical and theoretical libraries are being improved significantly over the years, and many libraries have become available lately. However, it is not clear in the literature what are the advantages of using each of these new libraries, and how far behind models are compared to observations. Here we compare in detail some of the major theoretical libraries available in the literature with observations, aiming at detecting weaknesses and strengths from the stellar population modelling point of view. Our test is twofold: we compared model predictions and observations for broad-band colours and for high-resolution spectral features. Concerning the broad-band colours, we measured the stellar colour given by three recent sets of model atmospheres and flux distributions, and compared them with a recent UBVRIJHK calibration which is mostly based on empirical data. We found that the models can reproduce with reasonable accuracy the stellar colours for a fair interval in effective temperatures and gravities. The exceptions are (1) the U - B colour, where the models are typically redder than the observations, and (2) the very cool stars in general (V - K >~ 3). Castelli & Kurucz is the set of models that best reproduce the bluest colours (U - B, B - V) while Gustafsson et al. and Brott & Hauschildt more accurately predict the visual colours. The three sets of models perform in a similar way for the infrared colours. Concerning the high-resolution spectral features, we measured 35 spectral indices defined in the literature on three high-resolution synthetic libraries, and compared them with the observed measurements given by three empirical libraries. The measured indices cover the wavelength range from ~3500 to ~8700Å. We

  2. Accuracy and Resolution of Kinect Depth Data for Indoor Mapping Applications

    PubMed Central

    Khoshelham, Kourosh; Elberink, Sander Oude

    2012-01-01

    Consumer-grade range cameras such as the Kinect sensor have the potential to be used in mapping applications where accuracy requirements are less strict. To realize this potential insight into the geometric quality of the data acquired by the sensor is essential. In this paper we discuss the calibration of the Kinect sensor, and provide an analysis of the accuracy and resolution of its depth data. Based on a mathematical model of depth measurement from disparity a theoretical error analysis is presented, which provides an insight into the factors influencing the accuracy of the data. Experimental results show that the random error of depth measurement increases with increasing distance to the sensor, and ranges from a few millimeters up to about 4 cm at the maximum range of the sensor. The quality of the data is also found to be influenced by the low resolution of the depth measurements. PMID:22438718
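
    The error model referenced above predicts that the random depth error grows roughly with the square of the range, because depth is inversely proportional to the measured disparity. The sketch below propagates an assumed disparity noise through that relation; the focal length, baseline and noise figure are rough placeholder values, not the paper's calibrated parameters.

        def depth_random_error_m(z_m, focal_px=580.0, baseline_m=0.075, disparity_sigma_px=0.08):
            """Propagate disparity noise to depth: sigma_Z ~ (Z^2 / (f * b)) * sigma_d."""
            return (z_m ** 2) / (focal_px * baseline_m) * disparity_sigma_px

        for z in (1.0, 2.0, 3.0, 4.0, 5.0):
            print(f"range {z:.0f} m -> random depth error ~ {depth_random_error_m(z) * 100:.1f} cm")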

  3. Effect of Flexural Rigidity of Tool on Machining Accuracy during Microgrooving by Ultrasonic Vibration Cutting Method

    NASA Astrophysics Data System (ADS)

    Furusawa, Toshiaki

    2010-12-01

    It is necessary to form fine holes and grooves by machining in the manufacture of equipment in the medical or information field and the establishment of such a machining technology is required. In micromachining, the use of the ultrasonic vibration cutting method is expected and examined. In this study, I experimentally form microgrooves in stainless steel SUS304 by the ultrasonic vibration cutting method and examine the effects of the shape and material of the tool on the machining accuracy. As a result, the following are clarified. The evaluation of the machining accuracy of the straightness of the finished surface revealed that there is an optimal rake angle of the tools related to the increase in cutting resistance as a result of increases in work hardening and the cutting area. The straightness is improved by using a tool with low flexural rigidity. In particular, Young's modulus more significantly affects the cutting accuracy than the shape of the tool.

  4. Determining the accuracies of sea-surface temperatures derived from measurements of MODIS and VIIRS

    NASA Astrophysics Data System (ADS)

    Minnett, P. J.; Kilpatrick, K. A.; Podesta, G. P.; Izaguirre, M.; Williams, E.; Walsh, S.

    2015-12-01

    The appropriate application of sea-surface temperatures (SSTs) derived from MODIS and VIIRS requires knowledge of the errors and uncertainties of the SST fields. The accuracies of the SSTs are determined by comparison with independent measurements, usually derived from drifting and moored buoys, and ship-board radiometers. By using similar cloud detection and clear-sky atmospheric correction algorithms to derived SST from both MODIS's on Terra and Aqua, and the VIIRS on S-NPP a consistent time series of SSTs can be derived from the first useful Terra MODIS data in 2000 to the present, and by using the same approach to assess their accuracies, a consistent set of errors and uncertainties can also be derived. The presentation will provide a summary of recently modified algorithms used to derive SSTs from the MODIS's and VIIRS, and discuss the accuracies of the derived fields, including recent improvements to the VIIRS atmospheric correction algorithm to reduce the effects of instrumental artifacts.

  5. New integration techniques for chemical kinetic rate equations. II - Accuracy comparison

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1986-01-01

    A comparison of the accuracy of several techniques recently developed for solving stiff differential equations is presented. The techniques examined include two general purpose codes EEPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREKID, and GCKP84 developed specifically to solve chemical kinetic rate equations. The accuracy comparisons are made by applying these solution procedures to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. The comparisons show that LSODE is the most efficient code - in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that an iterative solution of the algebraic enthalpy conservation equation for the temperature can be more accurate and efficient than computing the temperature by integrating its time derivative.
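
    A present-day analogue of such a comparison can be run with SciPy's stiff integrators on a classic stiff kinetics benchmark (the Robertson problem). This is only an illustrative work-versus-accuracy probe, not a reproduction of the LSODE/EEPISODE/CHEMEQ study.

        import numpy as np
        from scipy.integrate import solve_ivp

        def robertson(t, y):
            """Robertson's stiff chemical kinetics test problem (three species)."""
            y1, y2, y3 = y
            return [-0.04 * y1 + 1.0e4 * y2 * y3,
                     0.04 * y1 - 1.0e4 * y2 * y3 - 3.0e7 * y2 ** 2,
                     3.0e7 * y2 ** 2]

        y0, t_span = [1.0, 0.0, 0.0], (0.0, 1.0e5)
        for method in ("BDF", "LSODA", "Radau"):
            sol = solve_ivp(robertson, t_span, y0, method=method, rtol=1e-6, atol=1e-10)
            print(f"{method:6s}  rhs evaluations: {sol.nfev:6d}  "
                  f"mass balance error: {abs(sum(sol.y[:, -1]) - 1.0):.2e}")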

  6. New integration techniques for chemical kinetic rate equations. 2: Accuracy comparison

    NASA Technical Reports Server (NTRS)

    Radhakrishnan, K.

    1985-01-01

    A comparison of the accuracy of several techniques recently developed for solving stiff differential equations is presented. The techniques examined include two general purpose codes EEPISODE and LSODE developed for an arbitrary system of ordinary differential equations, and three specialized codes CHEMEQ, CREKID, and GCKP84 developed specifically to solve chemical kinetic rate equations. The accuracy comparisons are made by applying these solution procedures to two practical combustion kinetics problems. Both problems describe adiabatic, homogeneous, gas phase chemical reactions at constant pressure, and include all three combustion regimes: induction, heat release, and equilibration. The comparisons show that LSODE is the most efficient code - in the sense that it requires the least computational work to attain a specified accuracy level. An important finding is that an iterative solution of the algebraic enthalpy conservation equation for the temperature can be more accurate and efficient than computing the temperature by integrating its time derivative.

  7. Height Accuracy Based on Different Rtk GPS Method for Ultralight Aircraft Images

    NASA Astrophysics Data System (ADS)

    Tahar, K. N.

    2015-08-01

    Height accuracy is one of the important elements in surveying work, especially for the establishment of control points, which requires accurate measurement. There are many methods that can be used to acquire height values, such as tacheometry, leveling and the Global Positioning System (GPS). This study investigated the effect on height accuracy of different observation methods, namely single-based and network-based GPS. The GPS network data are acquired from a local network, namely the Iskandar network. This network has been set up to provide real-time correction data to the rover GPS station, while the single-based method relies on a known GPS station. Nine ground control points were established evenly across the study area. Each ground control point was observed for about two and ten minutes. It was found that the height accuracy gives different results for each observation method.

  8. Accuracy analysis of TDRSS demand forecasts

    NASA Astrophysics Data System (ADS)

    Stern, Daniel C.; Levine, Allen J.; Pitt, Karl J.

    1994-11-01

    This paper reviews Space Network (SN) demand forecasting experience over the past 16 years and describes methods used in the forecasts. The paper focuses on the Single Access (SA) service, the most sought-after resource in the Space Network. Of the ten years of actual demand data available, only the last five years (1989 to 1993) were considered predictive due to the extensive impact of the Challenger accident of 1986. NASA's Space Network provides tracking and communications services to user spacecraft such as the Shuttle and the Hubble Space Telescope. Forecasting the customer requirements is essential to planning network resources and to establishing service commitments to future customers. The lead time to procure Tracking and Data Relay Satellites (TDRS's) requires demand forecasts ten years in the future, a planning horizon beyond the funding commitments for missions to be supported. The long range forecasts are shown to have had a bias toward underestimation in the 1991-1992 period. The trend of underestimation can be expected to be replaced by overestimation for a number of years starting with 1998. At that time demand from new missions slated for launch will be larger than the demand from ongoing missions, making the potential for delay the dominant factor. If the new missions appear as scheduled, the forecasts are likely to be moderately underestimated. The SN commitment to meet the negotiated customer's requirements calls for conservatism in the forecasting. Modification of the forecasting procedure to account for a delay bias is, therefore, not advised. Fine tuning the mission model to more accurately reflect the current actual demand is recommended as it may marginally improve the first year forecasting.

  9. Accuracy analysis of TDRSS demand forecasts

    NASA Technical Reports Server (NTRS)

    Stern, Daniel C.; Levine, Allen J.; Pitt, Karl J.

    1994-01-01

    This paper reviews Space Network (SN) demand forecasting experience over the past 16 years and describes methods used in the forecasts. The paper focuses on the Single Access (SA) service, the most sought-after resource in the Space Network. Of the ten years of actual demand data available, only the last five years (1989 to 1993) were considered predictive due to the extensive impact of the Challenger accident of 1986. NASA's Space Network provides tracking and communications services to user spacecraft such as the Shuttle and the Hubble Space Telescope. Forecasting the customer requirements is essential to planning network resources and to establishing service commitments to future customers. The lead time to procure Tracking and Data Relay Satellites (TDRS's) requires demand forecasts ten years in the future, a planning horizon beyond the funding commitments for missions to be supported. The long range forecasts are shown to have had a bias toward underestimation in the 1991-1992 period. The trend of underestimation can be expected to be replaced by overestimation for a number of years starting with 1998. At that time demand from new missions slated for launch will be larger than the demand from ongoing missions, making the potential for delay the dominant factor. If the new missions appear as scheduled, the forecasts are likely to be moderately underestimated. The SN commitment to meet the negotiated customer's requirements calls for conservatism in the forecasting. Modification of the forecasting procedure to account for a delay bias is, therefore, not advised. Fine tuning the mission model to more accurately reflect the current actual demand is recommended as it may marginally improve the first year forecasting.

  10. SUPERFISH accuracy dependence on mesh size

    NASA Astrophysics Data System (ADS)

    Merson, J. L.; Boicourt, G. P.

    1989-02-01

    The RF cavity code SUPERFISH is extensively used for the design of drift-tube linac (DTL), radio-frequency quadrupole (RFQ), and coupled-cavity linac (CCL) structures. It has been known for some time that considerably finer meshes are required near the nose of a drift tube to ensure accurate calculation of the resonant frequency and related secondary quantities. This paper discusses the results of numerical experiments designed to provide rules to set proper mesh sizes for DTL, RFQ, and CCL problems. During this work, SUPERFISH problems involving more than 100,000 mesh points were solved.

  11. Evaluating the accuracy of technicians and pharmacists in checking unit dose medication cassettes.

    PubMed

    Ambrose, Peter J; Saya, Frank G; Lovett, Larry T; Tan, Sandy; Adams, Dale W; Shane, Rita

    2002-06-15

    The accuracy rates of board-registered pharmacy technicians and pharmacists in checking unit dose medication cassettes in the inpatient setting at two separate institutions were examined. Cedars-Sinai Medical Center and Long Beach Memorial Medical Center, both in Los Angeles County, petitioned the California State Board of Pharmacy to approve a waiver of the California Code of Regulations to conduct an experimental program comparing the accuracy of unit dose medication cassettes checked by pharmacists with that of cassettes checked by trained, certified pharmacy technicians. The study consisted of three parts: assessing pharmacist baseline checking accuracy (Phase I), developing a technician-training program and certifying technicians who completed the didactic and practical training (Phase II), and evaluating the accuracy of certified technicians checking unit dose medication cassettes as a daily function (Phase III). Twenty-nine pharmacists and 41 technicians (3 of whom were pharmacy interns) participated in the study. Of the technicians, all 41 successfully completed the didactic and practical training, 39 successfully completed the audits and became certified checkers, and 2 (including 1 of the interns) did not complete the certification audits because they were reassigned to another work area or had resigned. In Phase II, the observed accuracy rate and its lower confidence limit exceeded the predetermined minimum requirement of 99.8% for a certified checker. The mean accuracy rates for technicians were identical at the two institutions (p = 1.0). The difference in mean accuracy rates between pharmacists (99.52%; 95% confidence interval [CI] 99.44-99.58%) and technicians (99.89%; 95% CI 99.87-99.90%) was significant (p < 0.0001). Inpatient technicians who had been trained and certified in a closely supervised program that incorporated quality assurance mechanisms could safely and accurately check unit dose medication cassettes filled by other technicians.
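
    The certification criterion above hinges on whether the lower confidence limit of an observed accuracy rate exceeds 99.8%. As a rough illustration only, the sketch below performs that check using a Wilson score lower bound on hypothetical audit counts; it is not the study's statistical method or data.

```python
import math

def wilson_lower_bound(correct, total, z=1.645):
    """One-sided 95% lower confidence limit for a binomial accuracy rate
    (Wilson score interval)."""
    p = correct / total
    denom = 1 + z**2 / total
    centre = p + z**2 / (2 * total)
    margin = z * math.sqrt(p * (1 - p) / total + z**2 / (4 * total**2))
    return (centre - margin) / denom

# hypothetical audit: 20,000 doses checked, 15 discrepancies missed
doses, errors = 20_000, 15
accuracy = (doses - errors) / doses
lower = wilson_lower_bound(doses - errors, doses)
print(f"accuracy = {accuracy:.4%}, lower 95% limit = {lower:.4%}")
print("meets 99.8% requirement:", lower >= 0.998)
```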

  12. Cone beam CT guidance provides superior accuracy for complex needle paths compared with CT guidance

    PubMed Central

    Braak, S J; Fütterer, J J; van Strijen, M J L; Hoogeveen, Y L; de Lange, F; Schultze Kool, L J

    2013-01-01

    Objective: To determine the accuracy of cone beam CT (CBCT) guidance and CT guidance in reaching small targets in relation to needle path complexity in a phantom. Methods: CBCT guidance combines three-dimensional CBCT imaging with fluoroscopy overlay and needle planning software to provide real-time needle guidance. The accuracy of needle positioning, quantified as deviation from a target, was assessed for inplane, angulated and double angulated needle paths. Four interventional radiologists reached four targets along the three paths using CBCT and CT guidance. Accuracies were compared between CBCT and CT for each needle path and between the three approaches within both modalities. The effect of user experience in CBCT guidance was also assessed. Results: Accuracies for CBCT were significantly better than CT for the double angulated needle path (2.2 vs 6.7 mm, p<0.001) for all radiologists. CBCT guidance showed no significant differences between the three approaches. For CT, deviations increased with increasing needle path complexity from 3.3 mm for the inplane placements to 4.4 mm (p=0.007) and 6.7 mm (p<0.001) for the angulated and double angulated CT-guided needle placements, respectively. For double angulated needle paths, experienced CBCT users showed consistently higher accuracies than trained users [1.8 mm (range 1.2–2.2) vs 3.3 mm (range 2.1–7.2) deviation from target, respectively; p=0.003]. Conclusion: In terms of accuracy, CBCT is the preferred modality, irrespective of the level of user experience, for more difficult guidance procedures requiring double angulated needle paths as in oncological interventions. Advances in knowledge: Accuracy of CBCT guidance has not been discussed before. CBCT guidance allows accurate needle placement irrespective of needle path complexity. For angulated and double-angulated needle paths, CBCT is more accurate than CT guidance. PMID:23913308

  13. Physiological and Movement Demands of Rugby League Referees: Influence on Penalty Accuracy.

    PubMed

    Emmonds, Stacey; OʼHara, John; Till, Kevin; Jones, Ben; Brightmore, Amy; Cooke, Carlton

    2015-12-01

    Research into the physiological and movement demands of Rugby League (RL) referees is limited, with only 1 study in the European Super League (SL). To date, no studies have considered decision making in RL referees. The purpose of this study was to quantify penalty accuracy scores of RL referees and to determine the relationship between penalty accuracy and total distance covered (TD), high-intensity running (HIR), and heart rate per 10-minute period of match play. Time motion analysis was undertaken on 8 referees over 148 European SL games during the 2012 season using 10-Hz global positioning system analysis and heart rate monitors. The number and timing of penalties awarded was quantified using Opta Stats. Referees awarded the correct decision on 74 ± 5% of occasions. Lowest accuracy was observed in the last 10-minute period of the game (67 ± 13%), with a moderate drop (effect size = 0.86) in accuracy observed between 60-70 minutes and 70-80 minutes. Despite this, only small correlations were observed between mean heart rate, TD, HIR efforts, and penalty accuracy, although a moderate correlation was observed between maximum velocity and accuracy. Despite only small correlations being observed, it would be rash to assume that the physiological and movement demands of refereeing have no influence on decision making. More likely, other confounding variables influence referee decision-making accuracy, requiring further investigation. Findings can be used by referees and coaches to inform training protocols, ensuring training is specific to both cognitive and physical match demands. PMID:25970494

  14. Accuracy Studies of a Magnetometer-Only Attitude-and-Rate-Determination System

    NASA Technical Reports Server (NTRS)

    Challa, M. (Editor); Wheeler, C. (Editor)

    1996-01-01

    A personal computer based system was recently prototyped that uses measurements from a three axis magnetometer (TAM) to estimate the attitude and rates of a spacecraft using no a priori knowledge of the spacecraft's state. Past studies using in-flight data from the Solar, Anomalous, and Magnetospheric Particles Explorer focused on the robustness of the system and demonstrated that attitude and rate estimates could be obtained accurately to 1.5 degrees (deg) and 0.01 deg per second (deg/sec), respectively, despite limitations in the data and in the accuracies of the truth models. This paper studies the accuracy of the Kalman filter in the system using several orbits of in-flight Earth Radiation Budget Satellite (ERBS) data, together with attitude and rate truth models obtained from high precision sensors, to demonstrate the practical capabilities. This paper shows the following: Using telemetered TAM data, attitude accuracies of 0.2 to 0.4 deg and rate accuracies of 0.002 to 0.005 deg/sec (within ERBS attitude control requirements of 1 deg and 0.0005 deg/sec) can be obtained with minimal tuning of the filter; Replacing the TAM data in the telemetry with simulated TAM data yields corresponding accuracies of 0.1 to 0.2 deg and 0.002 to 0.005 deg/sec, thus demonstrating that the filter's accuracy can be significantly enhanced by further calibrating the TAM. Factors affecting the filter's accuracy and techniques for tuning the system's Kalman filter are also presented.
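
    The accuracy figures above come from a tuned Kalman filter. For orientation only, here is a minimal generic linear Kalman filter predict/update step; the state, measurement model, and noise matrices below are hypothetical placeholders, not the TAM attitude-and-rate filter described in the paper.

```python
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle of a linear Kalman filter.
    x, P : prior state estimate and covariance
    z    : new measurement
    F, H : state-transition and measurement matrices
    Q, R : process- and measurement-noise covariances
    """
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new

# toy 1-D constant-velocity example (position measured, velocity inferred)
dt = 1.0
F = np.array([[1.0, dt], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q = 1e-4 * np.eye(2)
R = np.array([[0.05]])
x, P = np.zeros(2), np.eye(2)
for z in [0.9, 2.1, 2.9, 4.2]:
    x, P = kalman_step(x, P, np.array([z]), F, H, Q, R)
print("estimated position, velocity:", x)
```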

  15. A method which can enhance the optical-centering accuracy

    NASA Astrophysics Data System (ADS)

    Zhang, Xue-min; Zhang, Xue-jun; Dai, Yi-dan; Yu, Tao; Duan, Jia-you; Li, Hua

    2014-09-01

    Optical alignment machining is an effective method to ensure the co-axiality of an optical system. The co-axiality accuracy is determined by the optical-centering accuracy of each single optical unit, which in turn depends on the rotating accuracy of the lathe and the optical-centering judgment accuracy. When a rotating accuracy of 0.2 μm is achieved, the leading error can be ignored. An axis-determination tool based on the principle of auto-collimation is designed to determine the unique position of the centerscope, that is, the position where the optical axis of the centerscope coincides with the rotating axis of the lathe. A new optical-centering judgment method is also presented. A system that combines the axis-determination tool and the new optical-centering judgment method can enhance the optical-centering accuracy to 0.003 mm.

  16. Nationwide forestry applications program. Analysis of forest classification accuracy

    NASA Technical Reports Server (NTRS)

    Congalton, R. G.; Mead, R. A.; Oderwald, R. G.; Heinen, J. (Principal Investigator)

    1981-01-01

    The development of LANDSAT classification accuracy assessment techniques and of a computerized system for assessing wildlife habitat from land cover maps is considered. A literature review on accuracy assessment techniques and an explanation of the techniques developed under both projects are included, along with listings of the computer programs. The presentations and discussions at the National Working Conference on LANDSAT Classification Accuracy are summarized. Two symposium papers which were published on the results of this project are appended.

  17. Effects of facial transformations on accuracy of recognition.

    PubMed

    Terry, R L

    1994-08-01

    Alterations of facial features between the initial phase of a memory task and a later recognition test lower identification accuracy. The effects of leaving on, leaving off, adding, or removing targets' eyeglasses or beards on identification accuracy were examined in two experiments with American undergraduates. The removal of eyeglasses and either type of beard transformation, especially the addition of a beard, lowered identification accuracy. PMID:7967551

  18. Wavelength Calibration Accuracy for the STIS CCD and MAMA Modes

    NASA Astrophysics Data System (ADS)

    Pascucci, Ilaria; Hodge, Phil; Proffitt, Charles R.; Ayres, T.

    2011-03-01

    Two calibration programs were carried out to determine the accuracy of the wavelength solutions for the most used STIS CCD and MAMA modes after Servicing Mission 4. We report here on the analysis of this dataset and show that the STIS wavelength solution has not changed after SM4. We also show that a typical accuracy for the absolute wavelength zero-points is 0.1 pixels while the relative wavelength accuracy is 0.2 pixels.

  19. Airborne Topographic Mapper Calibration Procedures and Accuracy Assessment

    NASA Technical Reports Server (NTRS)

    Martin, Chreston F.; Krabill, William B.; Manizade, Serdar S.; Russell, Rob L.; Sonntag, John G.; Swift, Robert N.; Yungel, James K.

    2012-01-01

    Description of NASA Airborne Topographic Mapper (ATM) lidar calibration procedures, including analysis of the accuracy and consistency of various ATM instrument parameters and the resulting influence on topographic elevation measurements. The ATM elevation measurements from a nominal operating altitude of 500 to 750 m above the ice surface were found to have: horizontal accuracy 74 cm, horizontal precision 14 cm, vertical accuracy 6.6 cm, vertical precision 3 cm.
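
    The accuracy and precision figures above are, respectively, the bias and scatter of elevation differences against a reference. A minimal sketch of tabulating such figures from co-located elevations follows; the arrays are hypothetical, not the ATM calibration data.

```python
import numpy as np

def accuracy_and_precision(measured, reference):
    """Accuracy = mean offset (bias) of measured minus reference;
    precision = standard deviation of that offset."""
    d = np.asarray(measured, dtype=float) - np.asarray(reference, dtype=float)
    return d.mean(), d.std(ddof=1)

# hypothetical co-located elevations in metres
rng = np.random.default_rng(1)
ref = rng.uniform(100.0, 120.0, size=1000)
atm = ref + 0.066 + 0.03 * rng.standard_normal(1000)   # ~6.6 cm bias, ~3 cm scatter
bias, scatter = accuracy_and_precision(atm, ref)
print(f"vertical accuracy ~ {100*bias:.1f} cm, precision ~ {100*scatter:.1f} cm")
```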

  20. Accuracy assessment of novel two-axes rotating and single-axis translating calibration equipment

    NASA Astrophysics Data System (ADS)

    Liu, Bo; Ye, Dong; Che, Rensheng

    2009-11-01

    A new method measures the 3D motion of a rocket nozzle with a motion-tracking system based on passive optical markers. An important issue, however, must be resolved: how to assess the accuracy of the rocket nozzle motion test. Calibration equipment is therefore designed and manufactured to generate ground-truth nozzle-model motion such as translation, angle, velocity, and angular velocity. It consists of a base, a lifting platform, a rotary table, and a rocket nozzle model of precise geometry. The nozzle model with attached markers is installed on the rotary table, which can translate or rotate at a known velocity. The overall accuracy of the rocket nozzle motion test is evaluated by comparing the truth values with the static and dynamic test data. This paper puts emphasis on the accuracy assessment of the novel two-axes rotating and single-axis translating calibration equipment. By substituting the measured values of the error sources into the error model, the pointing error is found to be less than 0.005 deg, the rotation-center position error reaches 0.08 mm, and the rate stability is less than 10^-3. The calibration equipment accuracy is much higher than the accuracy of the nozzle motion test system, thus the former can be used to assess and calibrate the latter.

  1. Accuracy Assessment of Underwater Photogrammetric Three Dimensional Modelling for Coral Reefs

    NASA Astrophysics Data System (ADS)

    Guo, T.; Capra, A.; Troyer, M.; Gruen, A.; Brooks, A. J.; Hench, J. L.; Schmitt, R. J.; Holbrook, S. J.; Dubbini, M.

    2016-06-01

    Recent advances in automation of photogrammetric 3D modelling software packages have stimulated interest in reconstructing highly accurate 3D object geometry in unconventional environments such as underwater utilizing simple and low-cost camera systems. The accuracy of underwater 3D modelling is affected by more parameters than in single media cases. This study is part of a larger project on 3D measurements of temporal change of coral cover in tropical waters. It compares the accuracies of 3D point clouds generated by using images acquired from a system camera mounted in an underwater housing and the popular GoPro cameras respectively. A precisely measured calibration frame was placed in the target scene in order to provide accurate control information and also quantify the errors of the modelling procedure. In addition, several objects (cinder blocks) with various shapes were arranged in the air and underwater and 3D point clouds were generated by automated image matching. These were further used to examine the relative accuracy of the point cloud generation by comparing the point clouds of the individual objects with the objects measured by the system camera in air (the best possible values). Given a working distance of about 1.5 m, the GoPro camera can achieve a relative accuracy of 1.3 mm in air and 2.0 mm in water. The system camera achieved an accuracy of 1.8 mm in water, which meets our requirements for coral measurement in this system.

  2. The measurement of pointing accuracy of two-dimensional scan mirror

    NASA Astrophysics Data System (ADS)

    Xing, Hui; An, Chao; Song, Junru; He, Xuhua

    2015-10-01

    The observation accuracy of a space camera targeting ground objects is directly affected by the pointing deviation of the two-dimensional scan mirror. A plane model of the scan mirror's normal trajectory is established for the case in which the scan mirror rotates about the rolling axis while the pitching axis remains still; the pointing accuracy of the scan mirror across the track direction is measured with this plane model. A cone model of the scan mirror's normal trajectory is established for the case in which the scan mirror rotates about the pitching axis while the rolling axis remains still; the pointing accuracy of the scan mirror along the track direction is measured with this cone model. The non-orthogonality of the rolling and pitching axes is measured with the two models. The data-processing results are fed back to the pointing controller to correct the input signal of the resolver until the pointing accuracy of the scan mirror meets the requirement. The experimental results indicate that the models for measuring the pointing accuracy of the scan mirror are accurate and the data-processing algorithm is feasible. The testing precision reached 10^-3 arc seconds.

  3. Accuracy of Protein-Protein Binding Sites in High-Throughput Template-Based Modeling

    PubMed Central

    Kundrotas, Petras J.; Vakser, Ilya A.

    2010-01-01

    The accuracy of protein structures, particularly their binding sites, is essential for the success of modeling protein complexes. Computationally inexpensive methodology is required for genome-wide modeling of such structures. For systematic evaluation of potential accuracy in high-throughput modeling of binding sites, a statistical analysis of target-template sequence alignments was performed for a representative set of protein complexes. For most of the complexes, alignments containing all residues of the interface were found. The full interface alignments were obtained even in the case of poor alignments where a relatively small part of the target sequence (as low as 40%) aligned to the template sequence, with a low overall alignment identity (<30%). Although such poor overall alignments might be considered inadequate for modeling of whole proteins, the alignment of the interfaces was strong enough for docking. In the set of homology models built on these alignments, one third of those ranked 1 by a simple sequence identity criterion had RMSD < 5 Å, the accuracy suitable for low-resolution template-free docking. Such models corresponded to multi-domain target proteins, whereas for single-domain proteins the best models had 5 Å accuracy, suitable for less sensitive structure-alignment methods. Overall, ∼50% of complexes with the interfaces modeled by high-throughput techniques had accuracy suitable for meaningful docking experiments. This percentage will grow with the increasing availability of co-crystallized protein-protein complexes. PMID:20369011

  4. Impact of fiducial arrangement and registration sequence on target accuracy using a phantom frameless stereotactic navigation model.

    PubMed

    Smith, Timothy R; Mithal, Divakar S; Stadler, James A; Asgarian, Camelia; Muro, Kenji; Rosenow, Joshua M

    2014-11-01

    Modern frameless stereotactic techniques utilize scalp fiducial markers for registration. Anecdotal reports from surgeons indicate a variety of methods for improving accuracy using different fiducial arrangements and registration sequences. The few published studies on registration accuracy do not provide a simple and systematic method for determining target accuracy. Nine different arrangements of ten fiducial markers were attached to a model. Ten separate markers were designated as targets for evaluation of registration accuracy. We systematically registered each of the arrangements over multiple trials, in one of four sequences, and then measured the targets. The target coordinates were compared against the established target values, and a root-mean-square deviation (RMSD) was derived. A systematic multivariate analysis determined the effects of different variables on the RMSD. We found no correlation between the "Registration Accuracy" provided by Medtronic (Medtronic Navigation, Louisville, CO, USA) and our RMSD representing targeting accuracy (R=0.008). RMSD did vary for different fiducial arrangements. We found no significant difference between the various sequences of fiducial arrangement. Thus, regardless of fiducial arrangement, registration sequence has no impact on accuracy. Fiducial arrangements distributed optimally across the skull, however, allowed for significantly improved accuracy. Further studies are required to determine which different arrangements of fiducials are relevant for specific procedures. PMID:24957630
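
    The targeting error in the study is summarized as a root-mean-square deviation (RMSD) between measured and known target coordinates. A minimal sketch of that computation follows; the coordinates are hypothetical, not the phantom data.

```python
import numpy as np

def target_rmsd(measured, known):
    """Root-mean-square deviation (mm) between measured and known 3-D target points."""
    measured = np.asarray(measured, dtype=float)
    known = np.asarray(known, dtype=float)
    per_target = np.linalg.norm(measured - known, axis=1)   # Euclidean error per target
    return np.sqrt(np.mean(per_target**2))

# ten hypothetical targets (mm), with small registration errors added
known = np.random.default_rng(2).uniform(0, 100, size=(10, 3))
measured = known + np.random.default_rng(3).normal(0, 1.5, size=(10, 3))
print(f"RMSD = {target_rmsd(measured, known):.2f} mm")
```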

  5. Accuracy of dental radiographs for caries detection.

    PubMed

    Keenan, James R; Keenan, Analia Veitz

    2016-06-01

    Data sources: Medline, Embase, Cochrane Central and grey literature, complemented by cross-referencing from bibliographies. Diagnostic reviews were searched using the Medion database. Study selection: Studies reporting on the accuracy (sensitivity/specificity) of radiographic detection of primary carious lesions under clinical (in vivo) or in vitro conditions were included. The outcome of interest was caries detection using radiographs. The study also assessed the effect of the histologic lesion stage and included articles to assess the differences between primary or permanent teeth, whether there had been recent improvements due to technical advances or radiographic methods, and whether there are variations within studies (between examiners or applied radiographic techniques). Data extraction and synthesis: Data extraction was done by one reviewer first, using a piloted electronic spreadsheet, and repeated independently by a second reviewer. Consensus was achieved by discussion. Data extraction followed guidelines from the Cochrane Collaboration. Risk of bias was assessed using QUADAS-2. Pooled sensitivity, specificity and diagnostic odds ratios (DORs) were calculated using random effects meta-analysis. Analyses were performed separately for occlusal and proximal lesions. Dentine lesions and cavitated lesions were analysed separately. Results: 947 articles were identified with the searches and 442 were analysed in full text. 117 studies (13,375 teeth, 19,108 surfaces) were included. All studies were published in English. 24 studies were in vivo and 93 studies were in vitro. Risk of bias was found to be low in 23 and high in 94 studies. The pooled sensitivity for detecting any kind of occlusal carious lesions was 0.35 (95% CI: 0.31/0.40) and 0.41 (0.39/0.44) in clinical and in vitro studies respectively, while the pooled specificity was 0.78 (0.73/0.83) and 0.70 (0.76/0.84). For the detection of any kind of proximal lesion the sensitivity in the clinical studies was 0.24 (CI 0.21/0.26) and
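
    Each primary study contributes a 2 x 2 table from which sensitivity, specificity and the diagnostic odds ratio are derived before pooling. A minimal per-study sketch is shown below; the counts are hypothetical and the review's random-effects pooling step is not modeled.

```python
def diagnostic_measures(tp, fp, fn, tn):
    """Sensitivity, specificity and diagnostic odds ratio from one 2x2 table."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    dor = (tp * tn) / (fp * fn) if fp and fn else float("inf")
    return sensitivity, specificity, dor

# hypothetical counts for radiographic detection of proximal lesions in one study
sens, spec, dor = diagnostic_measures(tp=24, fp=22, fn=76, tn=78)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, DOR={dor:.1f}")
```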

  6. Differential effects of self-monitoring attention, accuracy, and productivity.

    PubMed Central

    Maag, J W; Reid, R; DiGangi, S A

    1993-01-01

    Effects of self-monitoring on-task behavior, academic productivity, and academic accuracy were assessed with 6 elementary-school students with learning disabilities in their general education classroom using a mathematics task. Following baseline, the three self-monitoring conditions were introduced using a multiple schedule design during independent practice sessions. Although all three interventions yielded improvements in either arithmetic productivity, accuracy, or on-task behavior, self-monitoring academic productivity or accuracy was generally superior. Differential results were obtained across age groups: fourth graders' mathematics performance improved most when self-monitoring productivity, whereas sixth graders' performance improved most when self-monitoring accuracy. PMID:8407682

  7. Assessment of the Thematic Accuracy of Land Cover Maps

    NASA Astrophysics Data System (ADS)

    Höhle, J.

    2015-08-01

    Several land cover maps are generated from aerial imagery and assessed by different approaches. The test site is an urban area in Europe for which six classes (`building', `hedge and bush', `grass', `road and parking lot', `tree', `wall and car port') had to be derived. Two classification methods were applied (`Decision Tree' and `Support Vector Machine') using only two attributes (height above ground and normalized difference vegetation index) which both are derived from the images. The assessment of the thematic accuracy applied a stratified design and was based on accuracy measures such as user's and producer's accuracy, and kappa coefficient. In addition, confidence intervals were computed for several accuracy measures. The achieved accuracies and confidence intervals are thoroughly analysed and recommendations are derived from the gained experiences. Reliable reference values are obtained using stereovision, false-colour image pairs, and positioning to the checkpoints with 3D coordinates. The influence of the training areas on the results is studied. Cross validation has been tested with a few reference points in order to derive approximate accuracy measures. The two classification methods perform equally for five classes. Trees are classified with a much better accuracy and a smaller confidence interval by means of the decision tree method. Buildings are classified by both methods with an accuracy of 99% (95% CI: 95%-100%) using independent 3D checkpoints. The average width of the confidence interval of six classes was 14% of the user's accuracy.
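
    The accuracy measures reported above (overall, user's and producer's accuracy, kappa coefficient) all derive from a confusion matrix of mapped versus reference classes. A minimal sketch with a hypothetical 3-class matrix follows; it is not the paper's six-class result.

```python
import numpy as np

def accuracy_measures(cm):
    """Overall, user's and producer's accuracy plus the kappa coefficient
    from a confusion matrix (rows = mapped class, columns = reference class)."""
    cm = np.asarray(cm, dtype=float)
    n = cm.sum()
    overall = np.trace(cm) / n
    users = np.diag(cm) / cm.sum(axis=1)        # correctness of what was mapped
    producers = np.diag(cm) / cm.sum(axis=0)    # completeness of each reference class
    expected = (cm.sum(axis=1) @ cm.sum(axis=0)) / n**2   # chance agreement
    kappa = (overall - expected) / (1 - expected)
    return overall, users, producers, kappa

cm = [[50, 3, 2],     # hypothetical counts: building, grass, tree
      [4, 40, 6],
      [1, 5, 39]]
overall, users, producers, kappa = accuracy_measures(cm)
print(f"overall={overall:.2f}, kappa={kappa:.2f}")
print("user's:", np.round(users, 2), "producer's:", np.round(producers, 2))
```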

  8. Thermocouple Calibration and Accuracy in a Materials Testing Laboratory

    NASA Technical Reports Server (NTRS)

    Lerch, B. A.; Nathal, M. V.; Keller, D. J.

    2002-01-01

    A consolidation of information has been provided that can be used to define procedures for enhancing and maintaining accuracy in temperature measurements in materials testing laboratories. These studies were restricted to type R and K thermocouples (TCs) tested in air. Thermocouple accuracies, as influenced by calibration methods, thermocouple stability, and manufacturer's tolerances, were all quantified in terms of statistical confidence intervals. By calibrating specific TCs, the benefits in accuracy can be as great as 6 °C, or five times better than relying on manufacturer's tolerances. The results emphasize strict reliance on the defined testing protocol and on the need to establish recalibration frequencies in order to maintain these levels of accuracy.

  9. Sound source localization identification accuracy: Level and duration dependencies.

    PubMed

    Yost, William A

    2016-07-01

    Sound source localization accuracy for noises was measured for sources in the front azimuthal open field mainly as a function of overall noise level and duration. An identification procedure was used in which listeners identify which loudspeakers presented a sound. Noises were filtered and differed in bandwidth and center frequency. Sound source localization accuracy depended on the bandwidth of the stimuli, and for the narrow bandwidths, accuracy depended on the filter's center frequency. Sound source localization accuracy did not depend on overall level or duration. PMID:27475204

  10. Accuracy assessment of NLCD 2006 land cover and impervious surface

    USGS Publications Warehouse

    Wickham, James D.; Stehman, Stephen V.; Gass, Leila; Dewitz, Jon; Fry, Joyce A.; Wade, Timothy G.

    2013-01-01

    Release of NLCD 2006 provides the first wall-to-wall land-cover change database for the conterminous United States from Landsat Thematic Mapper (TM) data. Accuracy assessment of NLCD 2006 focused on four primary products: 2001 land cover, 2006 land cover, land-cover change between 2001 and 2006, and impervious surface change between 2001 and 2006. The accuracy assessment was conducted by selecting a stratified random sample of pixels with the reference classification interpreted from multi-temporal high resolution digital imagery. The NLCD Level II (16 classes) overall accuracies for the 2001 and 2006 land cover were 79% and 78%, respectively, with Level II user's accuracies exceeding 80% for water, high density urban, all upland forest classes, shrubland, and cropland for both dates. Level I (8 classes) accuracies were 85% for NLCD 2001 and 84% for NLCD 2006. The high overall and user's accuracies for the individual dates translated into high user's accuracies for the 2001–2006 change reporting themes water gain and loss, forest loss, urban gain, and the no-change reporting themes for water, urban, forest, and agriculture. The main factor limiting higher accuracies for the change reporting themes appeared to be difficulty in distinguishing the context of grass. We discuss the need for more research on land-cover change accuracy assessment.

  11. Improving the accuracy of central difference schemes

    NASA Technical Reports Server (NTRS)

    Turkel, Eli

    1988-01-01

    General difference approximations to the fluid dynamic equations require an artificial viscosity in order to converge to a steady state. This artificial viscosity serves two purposes. One is to suppress high frequency noise which is not damped by the central differences. The second purpose is to introduce an entropy-like condition so that shocks can be captured. These viscosities need a coefficient to measure the amount of viscosity to be added. In the standard scheme, a scalar coefficient is used based on the spectral radius of the Jacobian of the convective flux. However, this can add too much viscosity to the slower waves. Hence, it is suggested that a matrix viscosity be used. This gives an appropriate viscosity for each wave component. With this matrix valued coefficient, the central difference scheme becomes closer to upwind biased methods.
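
    The distinction drawn above, a scalar dissipation coefficient based on the spectral radius versus a matrix-valued coefficient, can be made concrete for a small linear hyperbolic system. The sketch below uses a hypothetical 2 x 2 system (not the scheme from the paper): the scalar coefficient applies the fastest wave speed to every characteristic field, while the matrix coefficient R|Lambda|R^-1 dissipates each field at its own speed.

```python
import numpy as np

# Hypothetical 2x2 hyperbolic system u_t + A u_x = 0 with wave speeds 1 and 10.
R = np.array([[1.0, 1.0],
              [1.0, -1.0]])                  # right eigenvectors (columns)
lam = np.array([1.0, 10.0])                  # characteristic speeds
A = R @ np.diag(lam) @ np.linalg.inv(R)

scalar_coeff = np.max(np.abs(lam)) * np.eye(2)               # spectral-radius viscosity
matrix_coeff = R @ np.diag(np.abs(lam)) @ np.linalg.inv(R)   # matrix-valued viscosity

slow_wave = R[:, 0]   # disturbance carried by the slow (speed-1) field
print("scalar viscosity on slow wave:", scalar_coeff @ slow_wave)   # scaled by 10
print("matrix viscosity on slow wave:", matrix_coeff @ slow_wave)   # scaled by 1
```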

  12. A high-accuracy optical linear algebra processor for finite element applications

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Taylor, B. K.

    1984-01-01

    Optical linear processors are computationally efficient computers for solving matrix-matrix and matrix-vector oriented problems. Optical system errors limit their dynamic range to 30-40 dB, which limits their accuracy to 9-12 bits. Large problems, such as the finite element problem in structural mechanics (with tens or hundreds of thousands of variables), which can exploit the speed of optical processors, require the 32 bit accuracy obtainable from digital machines. To obtain this required 32 bit accuracy with an optical processor, the data can be digitally encoded, thereby reducing the dynamic range requirements of the optical system (i.e., decreasing the effect of optical errors on the data) while providing increased accuracy. This report describes a new digitally encoded optical linear algebra processor architecture for solving finite element and banded matrix-vector problems. A linear static plate bending case study is described which quantifies the processor requirements. Multiplication by digital convolution is explained, and the digitally encoded optical processor architecture is advanced.
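
    The phrase "multiplication by digital convolution" refers to forming digit-by-digit partial products as a discrete convolution and then propagating carries. A minimal numerical sketch of that idea follows; it is illustrative only, and the optical encoding itself is not modeled.

```python
import numpy as np

def digits_le(n, base=10):
    """Little-endian digit vector of a non-negative integer."""
    out = []
    while True:
        out.append(n % base)
        n //= base
        if n == 0:
            return out

def multiply_by_convolution(a, b, base=10):
    """Multiply two integers by convolving their digit vectors, then carrying."""
    conv = np.convolve(digits_le(a, base), digits_le(b, base))
    carry, result = 0, []
    for c in conv:
        carry, digit = divmod(int(c) + carry, base)
        result.append(digit)
    while carry:
        carry, digit = divmod(carry, base)
        result.append(digit)
    return int("".join(str(d) for d in reversed(result)))

assert multiply_by_convolution(123, 456) == 123 * 456
print(multiply_by_convolution(123, 456))   # 56088
```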

  13. Balancing Accuracy and Computational Efficiency for Ternary Gas Hydrate Systems

    NASA Astrophysics Data System (ADS)

    White, M. D.

    2011-12-01

    Geologic accumulations of natural gas hydrates hold vast organic carbon reserves, which have the potential of meeting global energy needs for decades. Estimates of vast amounts of global natural gas hydrate deposits make them an attractive unconventional energy resource. As with other unconventional energy resources, the challenge is to economically produce the natural gas fuel. The gas hydrate challenge is principally technical. Meeting that challenge will require innovation, but more importantly, scientific research to understand the resource and its characteristics in porous media. Producing natural gas from gas hydrate deposits requires releasing CH4 from solid gas hydrate. The conventional way to release CH4 is to dissociate the hydrate by changing the pressure and temperature conditions to those where the hydrate is unstable. The guest-molecule exchange technology releases CH4 by replacing it with a more thermodynamically stable molecule (e.g., CO2, N2). This technology has three advantages: 1) it sequesters greenhouse gas, 2) it releases energy via an exothermic reaction, and 3) it retains the hydraulic and mechanical stability of the hydrate reservoir. Numerical simulation of the production of gas hydrates from geologic deposits requires accounting for coupled processes: multifluid flow, mobile and immobile phase appearances and disappearances, heat transfer, and multicomponent thermodynamics. The ternary gas hydrate system comprises five components (i.e., H2O, CH4, CO2, N2, and salt) and the potential for six phases (i.e., aqueous, liquid CO2, gas, hydrate, ice, and precipitated salt). The equation of state for ternary hydrate systems has three requirements: 1) phase occurrence, 2) phase composition, and 3) phase properties. Numerical simulation of the production of geologic accumulations of gas hydrates has historically suffered from relatively slow execution times, compared with other multifluid, porous media systems, due to strong nonlinearities and

  14. Geolocation and Pointing Accuracy Analysis for the WindSat Sensor

    NASA Technical Reports Server (NTRS)

    Meissner, Thomas; Wentz, Frank J.; Purdy, William E.; Gaiser, Peter W.; Poe, Gene; Uliana, Enzo A.

    2006-01-01

    Geolocation and pointing accuracy analyses of the WindSat flight data are presented. The two topics were intertwined in the flight data analysis and will be addressed together. WindSat has no unusual geolocation requirements relative to other sensors, but its beam pointing knowledge accuracy is especially critical to support accurate polarimetric radiometry. Pointing accuracy was improved and verified using geolocation analysis in conjunction with scan bias analysis. Two methods were needed to properly identify and differentiate between data time tagging and pointing knowledge errors. Matchups comparing coastlines indicated in imagery data with their known geographic locations were used to identify geolocation errors. These coastline matchups showed possible pointing errors with ambiguities as to the true source of the errors. Scan bias analysis of U, the third Stokes parameter, and of vertical and horizontal polarizations provided measurement of pointing offsets, resolving ambiguities in the coastline matchup analysis. Several geolocation and pointing bias sources were incrementally eliminated, resulting in pointing knowledge and geolocation accuracy that met all design requirements.

  15. Design consideration for nano-accuracy long trace profiler at BSRF

    NASA Astrophysics Data System (ADS)

    Yang, Fugui; Wang, Lichao; Tang, Shanzhi; Wang, Qiushi; Li, Ming

    2014-09-01

    Third generation synchrotron radiation sources like the High Energy Photon Source (HEPS, Beijing) require X-ray optics surfaces with high accuracy. It is crucial to develop advanced optical surface metrology instruments. The Long Trace Profiler (LTP) is an instrument which measures slope in the long dimension of an optical surface. In order to meet the accuracy requirements for synchrotron optics, a number of studies have been carried out to improve the LTP during the last decades, and many variations have been installed worldwide. As a part of the advanced research for HEPS, the metrology laboratory at Beijing Synchrotron Radiation Facility (BSRF, Beijing) has been building a new LTP since 2012. The accuracy of the instrument is expected to be <0.1 μrad rms for components up to 1 m in length. In this paper, we present some design considerations for a nano-accuracy LTP. Two error sources, the deformation of the granite structure and imperfect optical surfaces, are studied. We report our optimized configuration of the granite structure and the dependence of the measurement error on the surface error. The results serve as an important guide for the proper choice of each component in the profiler. We expect to bring the profiler into operation in 2015.

  16. Accuracy of Binary Black Hole Waveform Models for Advanced LIGO

    NASA Astrophysics Data System (ADS)

    Kumar, Prayush; Fong, Heather; Barkett, Kevin; Bhagwat, Swetha; Afshari, Nousha; Chu, Tony; Brown, Duncan; Lovelace, Geoffrey; Pfeiffer, Harald; Scheel, Mark; Szilagyi, Bela; Simulating Extreme Spacetimes (SXS) Team

    2016-03-01

    Coalescing binaries of compact objects, such as black holes and neutron stars, are the primary targets for gravitational-wave (GW) detection with Advanced LIGO. Accurate modeling of the emitted GWs is required to extract information about the binary source. The most accurate solution to the general relativistic two-body problem is available in numerical relativity (NR), which is however limited in application due to computational cost. Current searches use semi-analytic models that are based on post-Newtonian (PN) theory and calibrated to NR. In this talk, I will present comparisons between contemporary models and high-accuracy numerical simulations performed using the Spectral Einstein Code (SpEC), focusing on two questions: (i) how well do models capture the binary's late inspiral, where they lack a priori accurate information from PN or NR, and (ii) how accurately do they model binaries with parameters outside their range of calibration. These results guide the choice of templates for future GW searches and motivate future modeling efforts.
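
    Agreement between two waveform models is commonly quantified by a normalized overlap (match). As a rough illustration only, the sketch below computes a time-shift-maximized overlap with a flat (white-noise) weighting on hypothetical toy signals; a real analysis would weight by the detector noise power spectral density, and this is not the comparison procedure of the talk.

```python
import numpy as np

def match(h1, h2):
    """Normalized overlap between two real waveforms, maximized over a relative
    time shift via FFT-based cross-correlation (flat noise weighting)."""
    n = len(h1) + len(h2)
    H1 = np.fft.rfft(h1, n)
    H2 = np.fft.rfft(h2, n)
    corr = np.fft.irfft(H1 * np.conj(H2), n)        # correlation at all lags
    norm = np.sqrt(np.sum(h1**2) * np.sum(h2**2))   # Cauchy-Schwarz normalization
    return np.max(corr) / norm

# toy usage: two slightly de-phased chirp-like signals
t = np.linspace(0, 1, 4096)
h1 = np.sin(2 * np.pi * (20 * t + 40 * t**2))
h2 = np.sin(2 * np.pi * (20 * t + 40 * t**2) + 0.1)
print(f"match = {match(h1, h2):.4f}")   # close to 1 for similar models
```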

  17. Doppler lidar sampling strategies and accuracies: Regional scale

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D.

    1985-01-01

    It has been proposed that a Doppler lidar be placed in a polar orbit and scanned to provide estimates of lower tropospheric winds twice per day and with a spatial resolution of 300 km. Initial feasibility studies conducted primarily by NOAA and NASA presented an optimistic outlook for a space based lidar. The technology appeared within reach and initial computer simulations suggested that acceptable accuracies could be obtained. Those early studies exposed, however, several potential problem areas which included: (1) the algorithms for computing the wind vectors did not perform well when there were coherent gradients in the wind fields; and (2) the lifetime and power requirements of the lidar put severe restrictions on the pulse repetition frequency (PRF). These two basic problems are currently being addressed by a Doppler lidar simulation study focussed upon three primary objectives: (1) to develop optimum scan parameters and shot patterns for a satellite-based Doppler lidar; (2) to develop robust algorithms for computing wind vectors from lidar returns; and (3) to evaluate the impact of coherent mesoscale structures (wind gradients, clouds, aerosols) on up-scale wind estimates. An overview is provided of the simulation efforts with particular emphasis upon rationale and methodology. Since this research is currently underway, any results shown are meant only as evidence of progress.

  18. The Third Gravitational Lensing Accuracy Testing (GREAT3) Challenge Handbook

    NASA Astrophysics Data System (ADS)

    Mandelbaum, Rachel; Rowe, Barnaby; Bosch, James; Chang, Chihway; Courbin, Frederic; Gill, Mandeep; Jarvis, Mike; Kannawadi, Arun; Kacprzak, Tomasz; Lackner, Claire; Leauthaud, Alexie; Miyatake, Hironao; Nakajima, Reiko; Rhodes, Jason; Simet, Melanie; Zuntz, Joe; Armstrong, Bob; Bridle, Sarah; Coupon, Jean; Dietrich, Jörg P.; Gentile, Marc; Heymans, Catherine; Jurling, Alden S.; Kent, Stephen M.; Kirkby, David; Margala, Daniel; Massey, Richard; Melchior, Peter; Peterson, John; Roodman, Aaron; Schrabback, Tim

    2014-05-01

    The GRavitational lEnsing Accuracy Testing 3 (GREAT3) challenge is the third in a series of image analysis challenges, with a goal of testing and facilitating the development of methods for analyzing astronomical images that will be used to measure weak gravitational lensing. This measurement requires extremely precise estimation of very small galaxy shape distortions, in the presence of far larger intrinsic galaxy shapes and distortions due to the blurring kernel caused by the atmosphere, telescope optics, and instrumental effects. The GREAT3 challenge is posed to the astronomy, machine learning, and statistics communities, and includes tests of three specific effects that are of immediate relevance to upcoming weak lensing surveys, two of which have never been tested in a community challenge before. These effects include many novel aspects including realistically complex galaxy models based on high-resolution imaging from space; a spatially varying, physically motivated blurring kernel; and a combination of multiple different exposures. To facilitate entry by people new to the field, and for use as a diagnostic tool, the simulation software for the challenge is publicly available, though the exact parameters used for the challenge are blinded. Sample scripts to analyze the challenge data using existing methods will also be provided. See http://great3challenge.info and http://great3.projects.phys.ucl.ac.uk/leaderboard/ for more information.

  19. Improving the Accuracy of Stamping Analyses Including Springback Deformations

    NASA Astrophysics Data System (ADS)

    Firat, Mehmet; Karadeniz, Erdal; Yenice, Mustafa; Kaya, Mesut

    2013-02-01

    An accurate prediction of sheet metal deformation, including springback, is one of the main issues in an efficient finite element (FE) simulation in the automotive and stamping industries. Considering tooling design for the newer class of high-strength steels in particular, this requirement has become an important aspect of springback compensation practice today. Sheet deformation modeling that accounts for the Bauschinger effect is considered a key factor affecting the accuracy of FE simulations in this context. In this article, a rate-independent cyclic plasticity model is presented and implemented into LS-Dyna software for accurate modeling of sheet metal deformation in stamping simulations. The proposed model uses Hill's orthotropic yield surface in the description of yield loci of planar and transversely anisotropic sheets. The strain-hardening behavior is calculated based on an additive backstress form of the nonlinear kinematic hardening rule. The proposed model is applied in stamping simulations of a dual-phase steel automotive part, and comparisons are presented in terms of part strain and thickness distributions calculated with isotropic plasticity and the proposed model. It is observed that both models produce similar plastic strain and thickness distributions; however, there appeared to be considerable differences in computed springback deformations. Part shapes computed with both plasticity models were evaluated with surface scanning of manufactured parts. A comparison of FE-computed geometries with manufactured parts proved the improved performance of the proposed model over isotropic plasticity for this particular stamping application.

  20. Modeling versus accuracy in EEG and MEG data

    SciTech Connect

    Mosher, J.C.; Huang, M.; Leahy, R.M.; Spencer, M.E.

    1997-07-30

    The widespread availability of high-resolution anatomical information has placed a greater emphasis on accurate electroencephalography and magnetoencephalography (collectively, E/MEG) modeling. A more accurate representation of the cortex, inner skull surface, outer skull surface, and scalp should lead to a more accurate forward model and hence improve inverse modeling efforts. The authors examine a few topics in this paper that highlight some of the problems of forward modeling, then discuss the impacts these results have on the inverse problem. The authors begin by assuming a perfect head model, that of the sphere, then show the lower bounds on localization accuracy of dipoles within this perfect forward model. For more realistic anatomy, the boundary element method (BEM) is a common numerical technique for solving the boundary integral equations. For a three-layer BEM, the computational requirements can be too intensive for many inverse techniques, so they examine a few simplifications. They quantify errors in generating this forward model by defining a regularized percentage error metric. The authors then apply this metric to a single layer boundary element solution, a multiple sphere approach, and the common single sphere model. They conclude with an MEG localization demonstration on a novel experimental human phantom, using both BEM and multiple spheres.

  1. Accuracy of selected techniques for estimating ice-affected streamflow

    USGS Publications Warehouse

    Walker, John F.

    1991-01-01

    This paper compares the accuracy of selected techniques for estimating streamflow during ice-affected periods. The techniques are classified into two categories - subjective and analytical - depending on the degree of judgment required. Discharge measurements were made at three streamflow-gauging sites in Iowa during the 1987-88 winter and used to establish a baseline streamflow record for each site. Using data based on a simulated six-week field-trip schedule, selected techniques are used to estimate discharge during the ice-affected periods. For the subjective techniques, three hydrographers independently compiled each record. Three measures of performance are used to compare the estimated streamflow records with the baseline streamflow records: the average discharge for the ice-affected period, and the mean and standard deviation of the daily errors. Based on average ranks for the three performance measures and the three sites, the analytical and subjective techniques are essentially comparable. For two of the three sites, Kruskal-Wallis one-way analysis of variance detects significant differences among the three hydrographers for the subjective methods, indicating that the subjective techniques are less consistent than the analytical techniques. The results suggest analytical techniques may be viable tools for estimating discharge during periods of ice effect, and should be developed further and evaluated for sites across the United States.

  2. A High-accuracy Micro-deformation Measurement Method

    NASA Astrophysics Data System (ADS)

    Jiang, Li

    2016-07-01

    The requirement for ever-increasing-resolution space cameras drives the focal length and diameter of optical lenses to increase. High-frequency vibration during launch and the complex environmental conditions of outer space generate micro deformations in the components of space cameras; as a result, images from the space cameras are blurred. It is therefore necessary to measure the micro deformations in components of space cameras under various experimental conditions. This paper presents a high-accuracy micro deformation measurement method. The method is implemented as follows: (1) fix tungsten-steel balls onto the space camera being measured and measure the coordinates of each ball under the standard condition; (2) simulate high-frequency vibrations and environmental conditions like those of outer space and measure the coordinates of each ball under each combination of test conditions; and (3) compute the deviation of a ball's coordinates under a test-condition combination from its coordinates under the standard condition; this deviation is the micro deformation of the space camera component associated with that ball. This method was applied to micro deformation measurement for space cameras of different models. Measurement data for these space cameras validated the proposed method.

  3. Image accuracy improvements in microwave tomographic thermometry: phantom experience.

    PubMed

    Meaney, P M; Paulsen, K D; Fanning, M W; Li, D; Fang, Q

    2003-01-01

    Evaluation of a laboratory-scale microwave imaging system for non-invasive temperature monitoring has previously been reported with good results in terms of both spatial and temperature resolution. However, a new formulation of the reconstruction algorithm in terms of the log-magnitude and phase of the electric fields has dramatically improved the ability of the system to track the temperature-dependent electrical conductivity distribution. This algorithmic enhancement was originally implemented as a way of improving overall imaging capability in cases of large, high contrast permittivity scatterers, but has also proved to be sensitive to subtle conductivity changes as required in thermal imaging. Additional refinements in the regularization procedure have strengthened the reliability and robustness of image convergence. Imaging experiments were performed for a single heated target consisting of a 5.1 cm diameter PVC tube located within 15 and 25 cm diameter monopole antenna arrays, respectively. The performance of both log-magnitude/phase and complex-valued reconstructions when subjected to four different regularization schemes has been compared based on this experimental data. The results demonstrate a significant accuracy improvement (to 0.2 degrees C as compared with 1.6 degrees C for the previously published approach) in tracking thermal changes in phantoms where electrical properties vary linearly with temperature over a range relevant to hyperthermia cancer therapy. PMID:12944168

  4. ASRM accuracy improvement with error isolation

    NASA Astrophysics Data System (ADS)

    Watson, T. J.; Jordan, F. W.

    1993-11-01

    The Aerojet Aerotherm and Ballistics Group uses a technique on the Advanced Solid Rocket Motor (ASRM) program called Error Isolation to verify data measurements. This technique requires two basic parts: 1) a reference data set and 2) a set of redundant equations. It is primarily used in verifying ballistics data used to obtain accurate solid propellant burn rates. Hence, the reference data set may be a block of sub-scale test motors cast from a single propellant batch or cast concurrently with an ASRM segment. The set of redundant equations are those normally used to predict or analyze solid propellant rocket motor ballistics performance. Although the concept is universal and can be used to evaluate any set of data subject to prediction by a set of redundant mathematical expressions, it is used in this paper only in the evaluation of data collected for sub-scale test motors. The mathematics consist of a set of equations used to predict interior ballistics for those motors. The sub-scale test motor contains a five inch diameter center perforated (5 inch CP) grain that burns on the bore and both ends but not on the outside surface. This motor configuration is variously called the 5C3-9 or 5 inch CP.

  5. Improving Localization Accuracy: Successive Measurements Error Modeling

    PubMed Central

    Abu Ali, Najah; Abu-Elkheir, Mervat

    2015-01-01

    Vehicle self-localization is an essential requirement for many of the safety applications envisioned for vehicular networks. The mathematical models used in current vehicular localization schemes focus on modeling the localization error itself and overlook the potential correlation between successive localization measurement errors. In this paper, we first investigate the existence of correlation between successive positioning measurements and then incorporate this correlation into the modeling of positioning error. We use the Yule-Walker equations to determine the degree of correlation between a vehicle's future position and its past positions, and then propose a p-order Gauss-Markov model to predict the future position of a vehicle from its past p positions. We investigate the existence of correlation for two datasets representing the mobility traces of two vehicles over a period of time. We prove the existence of correlation between successive measurements in the two datasets, and show that the time correlation between measurements can have a value of up to four minutes. Through simulations, we validate the robustness of our model and show that it is possible to use the first-order Gauss-Markov model, which has the least complexity, and still maintain an accurate estimation of a vehicle's future location over time using only its current position. Our model can assist in providing better modeling of positioning errors and can be used as a prediction tool to improve the performance of classical localization algorithms such as the Kalman filter. PMID:26140345
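
    The core idea above is fitting autoregressive coefficients to past positions via the Yule-Walker equations and using them to predict the next position. A simplified 1-D sketch of that idea follows; the trace is hypothetical and this is not the paper's p-order Gauss-Markov formulation or datasets.

```python
import numpy as np

def fit_ar_yule_walker(x, p):
    """Estimate AR(p) coefficients of a 1-D series from the Yule-Walker equations."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    r = np.array([np.dot(x[:n - k], x[k:]) / n for k in range(p + 1)])  # autocovariances
    R = np.array([[r[abs(i - j)] for j in range(p)] for i in range(p)])  # Toeplitz matrix
    return np.linalg.solve(R, r[1:])            # phi_1 .. phi_p

def predict_next(x, phi):
    """One-step-ahead prediction: x_hat[t+1] = mu + sum_k phi_k * (x[t+1-k] - mu)."""
    mu = np.mean(x)
    xc = np.asarray(x, dtype=float) - mu
    p = len(phi)
    return mu + np.dot(phi, xc[-1:-p - 1:-1])

# toy usage on a hypothetical 1-D position trace (metres along a road)
rng = np.random.default_rng(0)
pos = np.cumsum(1.0 + 0.1 * rng.standard_normal(500))   # roughly constant speed
phi = fit_ar_yule_walker(np.diff(pos), p=3)              # model the position increments
print("predicted next position ~", pos[-1] + predict_next(np.diff(pos), phi))
```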

  6. Reliability and Accuracy of Surgical Resident Peer Ratings.

    ERIC Educational Resources Information Center

    Lutsky, Larry A.; And Others

    1993-01-01

    Reliability and accuracy of peer ratings by 32, 28, 33 general surgery residents over 3 years were examined. Peer ratings were found highly reliable, with high level of test-retest reliability replicated across three years. Halo effects appear to pose greatest threat to rater accuracy, though chief residents tended to exhibit less halo effect than…

  7. Developing a Weighted Measure of Speech Sound Accuracy

    ERIC Educational Resources Information Center

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2011-01-01

    Purpose: To develop a system for numerically quantifying a speaker's phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, the authors describe a system for differentially weighting speech sound errors on the basis of various levels of phonetic accuracy using a Weighted Speech Sound…

  8. Assessment Of Accuracies Of Remote-Sensing Maps

    NASA Technical Reports Server (NTRS)

    Card, Don H.; Strong, Laurence L.

    1992-01-01

    Report describes study of accuracies of classifications of picture elements in map derived by digital processing of Landsat-multispectral-scanner imagery of coastal plain of Arctic National Wildlife Refuge. Accuracies of portions of map analyzed with help of statistical sampling procedure called "stratified plurality sampling", in which all picture elements in given cluster classified in stratum to which plurality of them belong.

  9. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 33 2014-07-01 2014-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  10. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 32 2010-07-01 2010-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  11. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 34 2012-07-01 2012-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  12. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 33 2011-07-01 2011-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  13. 40 CFR 1502.24 - Methodology and scientific accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 34 2013-07-01 2013-07-01 false Methodology and scientific accuracy... STATEMENT § 1502.24 Methodology and scientific accuracy. Agencies shall insure the professional integrity, including scientific integrity, of the discussions and analyses in environmental impact statements....

  14. EFFECTS OF LANDSCAPE CHARACTERISTICS ON LAND-COVER CLASS ACCURACY

    EPA Science Inventory



    Utilizing land-cover data gathered as part of the National Land-Cover Data (NLCD) set accuracy assessment, several logistic regression models were formulated to analyze the effects of patch size and land-cover heterogeneity on classification accuracy. Specific land-cover ...

  15. Prediction of Rate Constants for Catalytic Reactions with Chemical Accuracy.

    PubMed

    Catlow, C Richard A

    2016-08-01

    Ex machina: A computational method for predicting rate constants for reactions within microporous zeolite catalysts with chemical accuracy has recently been reported. A key feature of this method is a stepwise QM/MM approach that allows accuracy to be achieved while using realistic models with accessible computer resources. PMID:27329206

  16. Task-Based Variability in Children's Singing Accuracy

    ERIC Educational Resources Information Center

    Nichols, Bryan E.

    2013-01-01

    The purpose of this study was to explore task-based variability in children's singing accuracy performance. The research questions were: Does children's singing accuracy vary based on the nature of the singing assessment employed? Is there a hierarchy of difficulty and discrimination ability among singing assessment tasks? What is the…

  17. 26 CFR 1.6662-2 - Accuracy-related penalty.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... Title 26 (Internal Revenue), Vol. 13, 2011-04-01: Internal Revenue Service, Department of the Treasury; Income Taxes; Additions to the Tax, Additional Amounts, and Assessable Penalties; § 1.6662-2 Accuracy-related penalty. (a)...

  18. 29 CFR 501.8 - Accuracy of information, statements, data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Title 29 (Labor), Vol. 3, 2014-07-01: Enforcement of Contractual Obligations for Temporary Alien Agricultural Workers Admitted Under Section 218 of the Immigration and Nationality Act; General Provisions; § 501.8 Accuracy of...

  19. 29 CFR 502.7 - Accuracy of information, statements, data.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... Title 29 (Labor), Vol. 3, 2014-07-01: Enforcement of Contractual Obligations for Temporary Alien Agricultural Workers Admitted Under... § 502.7 Accuracy of information, statements, data. Information, statements and data submitted in compliance...

  20. 29 CFR 502.7 - Accuracy of information, statements, data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Title 29 (Labor), Vol. 3, 2012-07-01: Enforcement of Contractual Obligations for Temporary Alien Agricultural Workers Admitted Under... § 502.7 Accuracy of information, statements, data. Information, statements and data submitted in compliance...

  1. 29 CFR 501.8 - Accuracy of information, statements, data.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Title 29 (Labor), Vol. 3, 2012-07-01: Enforcement of Contractual Obligations for Temporary Alien Agricultural Workers Admitted Under Section 218 of the Immigration and Nationality Act; General Provisions; § 501.8 Accuracy of...

  2. Models of Accuracy in Repeated-Measures Designs

    ERIC Educational Resources Information Center

    Dixon, Peter

    2008-01-01

    Accuracy is often analyzed using analysis of variance techniques in which the data are assumed to be normally distributed. However, accuracy data are discrete rather than continuous, and proportions correct are constrained to the range 0-1. Monte Carlo simulations are presented illustrating how this can lead to distortions in the pattern of means…
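    The distortion the abstract refers to can be seen with a few lines of simulation: the same effect on the log-odds scale produces a much smaller difference in proportion correct near ceiling than in the mid-range, so means of bounded accuracy data compress real effects. The sketch below is a minimal illustration under assumed trial counts and logit values, not the simulation design used in the article.

```python
import numpy as np

rng = np.random.default_rng(1)
trials = 40                      # trials per condition per simulated observation
to_p = lambda x: 1.0 / (1.0 + np.exp(-x))

# Two pairs of conditions, each differing by the same 1-logit effect.
for name, logits in [("near ceiling", np.array([2.0, 3.0])),
                     ("mid-range", np.array([0.0, 1.0]))]:
    p = to_p(logits)
    props = rng.binomial(trials, p, size=(10000, 2)) / trials  # simulated proportions correct
    print(name,
          "true diff in proportion:", np.round(p[1] - p[0], 3),
          "mean simulated diff:", np.round((props[:, 1] - props[:, 0]).mean(), 3))
```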

  3. 40 CFR 86.338-79 - Exhaust measurement accuracy.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Title 40 (Protection of Environment), Vol. 18, 2010-07-01: ... Regulations for New Gasoline-Fueled and Diesel-Fueled Heavy-Duty Engines; Gaseous Exhaust Test Procedures; § 86.338-79 Exhaust measurement accuracy. (a) The analyzers must be operated between 15 percent and...

  4. 20 CFR 404.1643 - Performance accuracy standard.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... Title 20 (Employees' Benefits), Vol. 2, 2014-04-01: Social Security Administration; Federal Old-Age, Survivors and Disability Insurance (1950- ); Determinations of Disability; Performance Standards; § 404.1643 Performance accuracy standard. (a) General. Performance...

  5. The Effect of Knowledge and Strategy Training on Monitoring Accuracy.

    ERIC Educational Resources Information Center

    Nietfeld, John L.; Schraw, Gregory

    2002-01-01

    Investigated the effect of prior knowledge and strategy training on monitoring accuracy among college students, comparing debilitative, no-impact, and facilitative hypotheses. Overall, knowledge acquired through brief strategy training improved performance, confidence, and monitoring accuracy independent of general ability and general mathematics…

  6. 12 CFR 740.2 - Accuracy of advertising.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Title 12 (Banks and Banking), Vol. 6, 2011-01-01: ... Advertising and Notice of Insured Status; § 740.2 Accuracy of advertising. No insured credit union may use any advertising (which includes print, electronic, or broadcast media, displays and signs, stationery, and...

  7. Alaska national hydrography dataset positional accuracy assessment study

    USGS Publications Warehouse

    Arundel, Samantha; Yamamoto, Kristina H.; Constance, Eric; Mantey, Kim; Vinyard-Houx, Jeremy

    2013-01-01

    Initial visual assessments showed a wide range in the quality of fit between features in the NHD and these new image sources. No statistical analysis has been performed to actually quantify accuracy. Determining absolute accuracy is cost prohibitive (independent, well-defined test points must be collected), but a quantitative analysis of relative positional error is feasible.
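    One common way to quantify relative positional error of this kind is a horizontal root-mean-square error over matched point pairs (in the spirit of NSSDA-style assessments). The sketch below is illustrative only; the function and the sample coordinates are assumptions, not NHD data.

```python
import math

def horizontal_rmse(pairs):
    """Root-mean-square horizontal offset between matched point pairs.

    `pairs` holds ((x_a, y_a), (x_b, y_b)) tuples in a projected
    coordinate system (e.g., meters): feature vertex vs. reference location.
    """
    sq = [(xa - xb) ** 2 + (ya - yb) ** 2 for (xa, ya), (xb, yb) in pairs]
    return math.sqrt(sum(sq) / len(sq))

# Hypothetical matched points (not NHD measurements).
matches = [((1000.0, 2000.0), (1003.0, 1998.5)),
           ((1500.0, 2400.0), (1497.2, 2405.1))]
print(round(horizontal_rmse(matches), 2), "meters")
```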

  8. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... Title 10 (Energy), Vol. 2, 2012-01-01: Nuclear Regulatory Commission; Disposal of High-Level Radioactive Wastes in a Geologic Repository at Yucca Mountain, Nevada; General Provisions; § 63.10 Completeness and accuracy...

  9. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... Title 10 (Energy), Vol. 2, 2010-01-01: Nuclear Regulatory Commission; Disposal of High-Level Radioactive Wastes in a Geologic Repository at Yucca Mountain, Nevada; General Provisions; § 63.10 Completeness and accuracy...

  10. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... Title 10 (Energy), Vol. 2, 2013-01-01: Nuclear Regulatory Commission; Disposal of High-Level Radioactive Wastes in a Geologic Repository at Yucca Mountain, Nevada; General Provisions; § 63.10 Completeness and accuracy...

  11. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... Title 10 (Energy), Vol. 2, 2011-01-01: Nuclear Regulatory Commission; Disposal of High-Level Radioactive Wastes in a Geologic Repository at Yucca Mountain, Nevada; General Provisions; § 63.10 Completeness and accuracy...

  12. 10 CFR 63.10 - Completeness and accuracy of information.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... Title 10 (Energy), Vol. 2, 2014-01-01: Nuclear Regulatory Commission; Disposal of High-Level Radioactive Wastes in a Geologic Repository at Yucca Mountain, Nevada; General Provisions; § 63.10 Completeness and accuracy...

  13. 40 CFR 86.338-79 - Exhaust measurement accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Title 40 (Protection of Environment), Vol. 19, 2012-07-01: § 86.338-79 Exhaust measurement accuracy. (a) The analyzers must be operated between 15 percent and 100 percent of full-scale chart deflection during the measurement of the emissions for each mode....

  14. 40 CFR 86.1338-84 - Emission measurement accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Title 40 (Protection of Environment), Vol. 20, 2013-07-01: ... Procedures; § 86.1338-84 Emission measurement accuracy. (a) Measurement accuracy—Bag sampling. (1) Good... using the calibration data obtained with both calibration gases. (b) Measurement...

  15. 40 CFR 86.1338-84 - Emission measurement accuracy.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Title 40 (Protection of Environment), Vol. 20, 2012-07-01: ... Procedures; § 86.1338-84 Emission measurement accuracy. (a) Measurement accuracy—Bag sampling. (1) Good... using the calibration data obtained with both calibration gases. (b) Measurement...

  16. 40 CFR 86.338-79 - Exhaust measurement accuracy.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Title 40 (Protection of Environment), Vol. 19, 2013-07-01: § 86.338-79 Exhaust measurement accuracy. (a) The analyzers must be operated between 15 percent and 100 percent of full-scale chart deflection during the measurement of the emissions for each mode....
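    For illustration, the 15-100 percent full-scale requirement quoted in these records amounts to a simple range check on each analyzer reading. The helper below is a hypothetical sketch, not part of the regulatory test procedure.

```python
def within_deflection_limits(reading, full_scale, low_frac=0.15, high_frac=1.00):
    """Return True if a reading lies between 15% and 100% of full-scale deflection."""
    return low_frac * full_scale <= reading <= high_frac * full_scale

# Hypothetical readings against a 100-unit full-scale range.
print(within_deflection_limits(12.0, 100.0))  # False: below 15 percent of full scale
print(within_deflection_limits(40.0, 100.0))  # True
```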

  17. Students' Accuracy of Measurement Estimation: Context, Units, and Logical Thinking

    ERIC Educational Resources Information Center

    Jones, M. Gail; Gardner, Grant E.; Taylor, Amy R.; Forrester, Jennifer H.; Andre, Thomas

    2012-01-01

    This study examined students' accuracy of measurement estimation for linear distances, different units of measure, task context, and the relationship between accuracy estimation and logical thinking. Middle school students completed a series of tasks that included estimating the length of various objects in different contexts and completed a test…

  18. Exploring a Three-Level Model of Calibration Accuracy

    ERIC Educational Resources Information Center

    Schraw, Gregory; Kuch, Fred; Gutierrez, Antonio P.; Richmond, Aaron S.

    2014-01-01

    We compared 5 different statistics (i.e., G index, gamma, "d'", sensitivity, specificity) used in the social sciences and medical diagnosis literatures to assess calibration accuracy in order to examine the relationship among them and to explore whether one statistic provided a best fitting general measure of accuracy. College…
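    The five indices named in this abstract can all be computed from a 2x2 table crossing a learner's judgment (predicted correct vs. incorrect) with actual performance. The sketch below uses the standard textbook formulas; the article's exact operationalization may differ, and the cell counts are invented for the example.

```python
from statistics import NormalDist

def calibration_stats(a, b, c, d):
    """Calibration indices from a 2x2 table of judgments vs. performance.

    a: judged correct, was correct    b: judged correct, was wrong
    c: judged wrong,   was correct    d: judged wrong,   was wrong
    """
    n = a + b + c + d
    sensitivity = a / (a + c)                   # hit rate among actually-correct items
    specificity = d / (b + d)                   # correct rejections among errors
    g_index = (a + d - b - c) / n               # agreement-based G index
    gamma = (a * d - b * c) / (a * d + b * c)   # Goodman-Kruskal gamma (2x2 case)
    z = NormalDist().inv_cdf
    d_prime = z(sensitivity) - z(1 - specificity)  # signal-detection d'
    return {"sensitivity": sensitivity, "specificity": specificity,
            "G": g_index, "gamma": gamma, "d_prime": d_prime}

# Invented cell counts for illustration.
print(calibration_stats(a=40, b=10, c=15, d=35))
```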

  19. Developing a Weighted Measure of Speech Sound Accuracy

    PubMed Central

    Preston, Jonathan L.; Ramsdell, Heather L.; Oller, D. Kimbrough; Edwards, Mary Louise; Tobin, Stephen J.

    2010-01-01

    Purpose The purpose is to develop a system for numerically quantifying a speaker’s phonetic accuracy through transcription-based measures. With a focus on normal and disordered speech in children, we describe a system for differentially weighting speech sound errors based on various levels of phonetic accuracy with a Weighted Speech Sound Accuracy (WSSA) score. We then evaluate the reliability and validity of this measure. Method Phonetic transcriptions are analyzed from several samples of child speech, including preschoolers and young adolescents with and without speech sound disorders and typically developing toddlers. The new measure of phonetic accuracy is compared to existing measures, is used to discriminate typical and disordered speech production, and is evaluated to determine whether it is sensitive to changes in phonetic accuracy over time. Results Initial psychometric data indicate that WSSA scores correlate with other measures of phonetic accuracy as well as listeners’ judgments of severity of a child’s speech disorder. The measure separates children with and without speech sound disorders. WSSA scores also capture growth in phonetic accuracy in toddlers’ speech over time. Conclusion Results provide preliminary support for the WSSA as a valid and reliable measure of phonetic accuracy in children’s speech. PMID:20699344
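    The weighting idea generalizes readily to a toy implementation. The sketch below computes a transcription-weighted accuracy score in which each attempted sound earns partial credit by error type; the categories and weights are invented for illustration and are not the published WSSA weights.

```python
# Hypothetical error categories and weights -- NOT the published WSSA values.
WEIGHTS = {
    "correct": 1.0,        # target sound produced accurately
    "distortion": 0.75,    # close approximation of the target
    "substitution": 0.25,  # a different phoneme substituted
    "omission": 0.0,       # target sound deleted
}

def weighted_accuracy(coded_sounds):
    """Average credit across attempted target sounds, each coded by error type."""
    credits = [WEIGHTS[code] for code in coded_sounds]
    return sum(credits) / len(credits)

sample = ["correct", "correct", "distortion", "substitution", "omission", "correct"]
print(round(weighted_accuracy(sample), 3))  # 0.667
```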

  20. Concept Mapping Improves Metacomprehension Accuracy among 7th Graders

    ERIC Educational Resources Information Center

    Redford, Joshua S.; Thiede, Keith W.; Wiley, Jennifer; Griffin, Thomas D.

    2012-01-01

    Two experiments explored concept map construction as a useful intervention to improve metacomprehension accuracy among 7th grade students. In the first experiment, metacomprehension was marginally better for a concept mapping group than for a rereading group. In the second experiment, metacomprehension accuracy was significantly greater for a…