Sample records for error range ad

  1. A 1000-year record of dry conditions in the eastern Canadian prairies reconstructed from oxygen and carbon isotope measurements on Lake Winnipeg sediment organics

    USGS Publications Warehouse

    Buhay, W.M.; Simpson, S.; Thorleifson, H.; Lewis, M.; King, J.; Telka, A.; Wilkinson, Philip M.; Babb, J.; Timsic, S.; Bailey, D.

    2009-01-01

    A short sediment core (162 cm), covering the period AD 920-1999, was sampled from the south basin of Lake Winnipeg for a suite of multi-proxy analyses leading towards a detailed characterisation of the recent millennial lake environment and hydroclimate of southern Manitoba, Canada. Information on the frequency and duration of major dry periods in southern Manitoba, in light of the changes that are likely to occur as a result of an increasingly warming atmosphere, is of specific interest in this study. Intervals of relatively enriched lake sediment cellulose oxygen isotope values (δ18Ocellulose) were found to occur from AD 1180 to 1230 (error range: AD 1104-1231 to 1160-1280), 1610-1640 (error range: AD 1571-1634 to 1603-1662), 1670-1720 (error range: AD 1643-1697 to 1692-1738) and 1750-1780 (error range: AD 1724-1766 to 1756-1794). Regional water balance, inferred from calculated Lake Winnipeg water oxygen isotope values (δ18Oinf-lw), suggests that the ratio of lake evaporation to catchment input may have been 25-40% higher during these isotopically distinct periods. Associated with the enriched δ18Ocellulose intervals are depleted carbon isotope values in more abundantly preserved sediment organic matter (δ13COM). These suggest reduced microbial oxidation of terrestrially derived organic matter and/or subdued lake productivity during periods of minimised input of nutrients from the catchment area. With reference to other corroborating evidence, it is suggested that the AD 1180-1230, 1610-1640, 1670-1720 and 1750-1780 intervals represent four distinctly drier periods (droughts) in southern Manitoba, Canada. Additionally, lower-magnitude, shorter-duration dry periods may have also occurred from 1320 to 1340 (error range: AD 1257-1363), 1530-1540 (error range: AD 1490-1565 to 1498-1572) and 1570-1580 (error range: AD 1531-1599 to 1539-1606). © 2009 John Wiley & Sons, Ltd.

  2. Predictors of driving safety in early Alzheimer disease.

    PubMed

    Dawson, J D; Anderson, S W; Uc, E Y; Dastrup, E; Rizzo, M

    2009-02-10

    To measure the association of cognition, visual perception, and motor function with driving safety in Alzheimer disease (AD). Forty drivers with probable early AD (mean Mini-Mental State Examination score 26.5) and 115 elderly drivers without neurologic disease underwent a battery of cognitive, visual, and motor tests, and drove a standardized 35-mile route in urban and rural settings in an instrumented vehicle. A composite cognitive score (COGSTAT) was calculated for each subject based on eight neuropsychological tests. Driving safety errors were noted and classified by a driving expert based on video review. Drivers with AD committed an average of 42.0 safety errors/drive (SD = 12.8), compared to an average of 33.2 (SD = 12.2) for drivers without AD (p < 0.0001); the most common errors were lane violations. Increased age was predictive of errors, with a mean of 2.3 more errors per drive observed for each 5-year age increment. After adjustment for age and gender, COGSTAT was a significant predictor of safety errors in subjects with AD, with a 4.1 increase in safety errors observed for a 1 SD decrease in cognitive function. Significant increases in safety errors were also found in subjects with AD with poorer scores on Benton Visual Retention Test, Complex Figure Test-Copy, Trail Making Subtest-A, and the Functional Reach Test. Drivers with Alzheimer disease (AD) exhibit a range of performance on tests of cognition, vision, and motor skills. Since these tests provide additional predictive value of driving performance beyond diagnosis alone, clinicians may use these tests to help predict whether a patient with AD can safely operate a motor vehicle.

  3. Predictors of driving safety in early Alzheimer disease

    PubMed Central

    Dawson, J D.; Anderson, S W.; Uc, E Y.; Dastrup, E; Rizzo, M

    2009-01-01

    Objective: To measure the association of cognition, visual perception, and motor function with driving safety in Alzheimer disease (AD). Methods: Forty drivers with probable early AD (mean Mini-Mental State Examination score 26.5) and 115 elderly drivers without neurologic disease underwent a battery of cognitive, visual, and motor tests, and drove a standardized 35-mile route in urban and rural settings in an instrumented vehicle. A composite cognitive score (COGSTAT) was calculated for each subject based on eight neuropsychological tests. Driving safety errors were noted and classified by a driving expert based on video review. Results: Drivers with AD committed an average of 42.0 safety errors/drive (SD = 12.8), compared to an average of 33.2 (SD = 12.2) for drivers without AD (p < 0.0001); the most common errors were lane violations. Increased age was predictive of errors, with a mean of 2.3 more errors per drive observed for each 5-year age increment. After adjustment for age and gender, COGSTAT was a significant predictor of safety errors in subjects with AD, with a 4.1 increase in safety errors observed for a 1 SD decrease in cognitive function. Significant increases in safety errors were also found in subjects with AD with poorer scores on Benton Visual Retention Test, Complex Figure Test-Copy, Trail Making Subtest-A, and the Functional Reach Test. Conclusion: Drivers with Alzheimer disease (AD) exhibit a range of performance on tests of cognition, vision, and motor skills. Since these tests provide additional predictive value of driving performance beyond diagnosis alone, clinicians may use these tests to help predict whether a patient with AD can safely operate a motor vehicle. GLOSSARY AD = Alzheimer disease; AVLT = Auditory Verbal Learning Test; Blocks = Block Design subtest; BVRT = Benton Visual Retention Test; CFT = Complex Figure Test; CI = confidence interval; COWA = Controlled Oral Word Association; CS = contrast sensitivity; FVA = far visual acuity; JLO = Judgment of Line Orientation; MCI = mild cognitive impairment; MMSE = Mini-Mental State Examination; NVA = near visual acuity; SFM = structure from motion; TMT = Trail-Making Test; UFOV = Useful Field of View. PMID:19204261

  4. Impact of exposure measurement error in air pollution epidemiology: effect of error type in time-series studies.

    PubMed

    Goldman, Gretchen T; Mulholland, James A; Russell, Armistead G; Strickland, Matthew J; Klein, Mitchel; Waller, Lance A; Tolbert, Paige E

    2011-06-22

    Two distinctly different types of measurement error are Berkson and classical. Impacts of measurement error in epidemiologic studies of ambient air pollution are expected to depend on error type. We characterize measurement error due to instrument imprecision and spatial variability as multiplicative (i.e. additive on the log scale) and model it over a range of error types to assess impacts on risk ratio estimates both on a per measurement unit basis and on a per interquartile range (IQR) basis in a time-series study in Atlanta. Daily measures of twelve ambient air pollutants were analyzed: NO2, NOx, O3, SO2, CO, PM10 mass, PM2.5 mass, and PM2.5 components sulfate, nitrate, ammonium, elemental carbon and organic carbon. Semivariogram analysis was applied to assess spatial variability. Error due to this spatial variability was added to a reference pollutant time-series on the log scale using Monte Carlo simulations. Each of these time-series was exponentiated and introduced to a Poisson generalized linear model of cardiovascular disease emergency department visits. Measurement error resulted in reduced statistical significance for the risk ratio estimates for all amounts (corresponding to different pollutants) and types of error. When modelled as classical-type error, risk ratios were attenuated, particularly for primary air pollutants, with average attenuation in risk ratios on a per unit of measurement basis ranging from 18% to 92% and on an IQR basis ranging from 18% to 86%. When modelled as Berkson-type error, risk ratios per unit of measurement were biased away from the null hypothesis by 2% to 31%, whereas risk ratios per IQR were attenuated (i.e. biased toward the null) by 5% to 34%. For CO modelled error amount, a range of error types were simulated and effects on risk ratio bias and significance were observed. For multiplicative error, both the amount and type of measurement error impact health effect estimates in air pollution epidemiology. By modelling instrument imprecision and spatial variability as different error types, we estimate direction and magnitude of the effects of error over a range of error types.
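
    The classical/Berkson distinction drawn in this abstract can be illustrated with a small simulation. The sketch below is not the study's code: it assumes a single pollutant, a log-linear Poisson health model, and multiplicative (log-scale additive) error, and simply shows how the estimated slope behaves when the same amount of error is treated as classical versus Berkson. All parameter values and variable names are invented for illustration.

```python
# Hedged sketch: classical vs. Berkson multiplicative measurement error
# in a Poisson time-series health model (illustrative values only).
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_days = 2000
beta0, beta1 = 3.0, 0.05          # assumed true log-rate intercept and slope
sigma_u = 0.4                     # assumed SD of log-scale measurement error

def fit_slope(exposure, counts):
    """Slope from a Poisson GLM of daily counts on the observed exposure."""
    X = sm.add_constant(exposure)
    return sm.GLM(counts, X, family=sm.families.Poisson()).fit().params[1]

# Classical error: the monitor reading scatters around the true exposure.
x_true_c = rng.lognormal(mean=1.0, sigma=0.5, size=n_days)
y_c = rng.poisson(np.exp(beta0 + beta1 * x_true_c))
z_classical = x_true_c * np.exp(rng.normal(0.0, sigma_u, n_days))

# Berkson error: the true exposure scatters around a fixed surrogate z
# (e.g., a central-site average assigned to everyone on that day).
z_berkson = rng.lognormal(mean=1.0, sigma=0.5, size=n_days)
x_true_b = z_berkson * np.exp(rng.normal(0.0, sigma_u, n_days))
y_b = rng.poisson(np.exp(beta0 + beta1 * x_true_b))

print("true slope                         :", beta1)
print("estimated slope, classical error   :", round(fit_slope(z_classical, y_c), 4))
print("estimated slope, Berkson-type error:", round(fit_slope(z_berkson, y_b), 4))
```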

  5. Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar.

    PubMed

    Li, Zhan; Jupp, David L B; Strahler, Alan H; Schaaf, Crystal B; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S; Chakrabarti, Supriya; Cook, Timothy A; Paynter, Ian; Saenz, Edward J; Schaefer, Michael

    2016-03-02

    Radiometric calibration of the Dual-Wavelength Echidna(®) Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρ(app)), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρ(app) are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρ(app) error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρ(app) from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars.
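
    The semi-empirical calibration described here combines a generalized logistic term (near-range telescopic defocusing) with a negative-exponential fall-off of return intensity with range. The sketch below is a hedged illustration of that general functional form, not the published DWEL calibration code: the way the two terms are combined, the parameter names, and the synthetic panel data are all assumptions.

```python
# Hedged sketch: fit a logistic-times-exponential return-intensity model
# and use it to convert measured intensity into apparent reflectance.
import numpy as np
from scipy.optimize import curve_fit

def return_intensity(r, c0, k_growth, r0, k_decay):
    """Modelled return intensity from a unit-reflectance target at range r."""
    telescopic = 1.0 / (1.0 + np.exp(-k_growth * (r - r0)))  # generalized logistic
    falloff = np.exp(-k_decay * r)                           # fall-off with range
    return c0 * telescopic * falloff

# Synthetic "calibration panel" observations (assumed values, for illustration)
rng = np.random.default_rng(1)
ranges = np.linspace(1.0, 60.0, 40)
true_params = (5000.0, 1.2, 4.0, 0.03)
intensity = return_intensity(ranges, *true_params) * (1 + rng.normal(0, 0.02, ranges.size))

popt, _ = curve_fit(return_intensity, ranges, intensity, p0=(4000.0, 1.0, 3.0, 0.05))

def apparent_reflectance(measured, r, params):
    """Measured intensity divided by the modelled perfect-reflector intensity."""
    return measured / return_intensity(r, *params)

print("fitted parameters:", np.round(popt, 3))
print("rho_app of a half-strength return at 25 m:",
      round(apparent_reflectance(0.5 * return_intensity(25.0, *popt), 25.0, popt), 3))
```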

  6. Radiometric Calibration of a Dual-Wavelength, Full-Waveform Terrestrial Lidar

    PubMed Central

    Li, Zhan; Jupp, David L. B.; Strahler, Alan H.; Schaaf, Crystal B.; Howe, Glenn; Hewawasam, Kuravi; Douglas, Ewan S.; Chakrabarti, Supriya; Cook, Timothy A.; Paynter, Ian; Saenz, Edward J.; Schaefer, Michael

    2016-01-01

    Radiometric calibration of the Dual-Wavelength Echidna® Lidar (DWEL), a full-waveform terrestrial laser scanner with two simultaneously-pulsing infrared lasers at 1064 nm and 1548 nm, provides accurate dual-wavelength apparent reflectance (ρapp), a physically-defined value that is related to the radiative and structural characteristics of scanned targets and independent of range and instrument optics and electronics. The errors of ρapp are 8.1% for 1064 nm and 6.4% for 1548 nm. A sensitivity analysis shows that ρapp error is dominated by range errors at near ranges, but by lidar intensity errors at far ranges. Our semi-empirical model for radiometric calibration combines a generalized logistic function to explicitly model telescopic effects due to defocusing of return signals at near range with a negative exponential function to model the fall-off of return intensity with range. Accurate values of ρapp from the radiometric calibration improve the quantification of vegetation structure, facilitate the comparison and coupling of lidar datasets from different instruments, campaigns or wavelengths and advance the utilization of bi- and multi-spectral information added to 3D scans by novel spectral lidars. PMID:26950126

  7. New evidence of factor structure and measurement invariance of the SDQ across five European nations.

    PubMed

    Ortuño-Sierra, Javier; Fonseca-Pedrero, Eduardo; Aritio-Solana, Rebeca; Velasco, Alvaro Moreno; de Luis, Edurne Chocarro; Schumann, Gunter; Cattrell, Anna; Flor, Herta; Nees, Frauke; Banaschewski, Tobias; Bokde, Arun; Whelan, Rob; Buechel, Christian; Bromberg, Uli; Conrod, Patricia; Frouin, Vincent; Papadopoulos, Dimitri; Gallinat, Juergen; Garavan, Hugh; Heinz, Andreas; Walter, Henrik; Struve, Maren; Gowland, Penny; Paus, Tomáš; Poustka, Luise; Martinot, Jean-Luc; Paillère-Martinot, Marie-Laure; Vetter, Nora C; Smolka, Michael N; Lawrence, Claire

    2015-12-01

    The main purpose of the present study was to analyse the internal structure and to test the measurement invariance of the Strengths and Difficulties Questionnaire (SDQ), self-reported version, in five European countries. The sample consisted of 3012 adolescents aged between 12 and 17 years (M = 14.20; SD = 0.83). The five-factor model (with correlated errors added), and the five-factor model (with correlated errors added) with the reverse-worded items allowed to cross-load on the Prosocial subscale, displayed adequate goodness-of-fit indices. Multi-group confirmatory factor analysis showed that the five-factor model (with correlated errors added) had partial strong measurement invariance across countries. A total of 11 of the 25 items were non-invariant across samples. The level of internal consistency of the Total difficulties score was 0.84, ranging between 0.69 and 0.78 for the SDQ subscales. The findings indicate that the SDQ's subscales need to be modified in various ways for screening emotional and behavioural problems in the five European countries that were analysed.

  8. Brorfelde Schmidt CCD Catalog (BSCC)

    DTIC Science & Technology

    2010-06-23

    reference stars. Errors of individual positions are about 20 to 200 mas for stars in the R = 10 to 18 mag range. External comparisons with 2MASS and SDSS ... reveal possible small systematic errors in the BSCC of up to about 30 mas. The catalog is supplemented with J, H, and Ks magnitudes from the 2MASS ... Survey (2MASS) near-infrared photometry added to the catalog (2). The filters used at the Brorfelde Schmidt for this project are approximating the

  9. Attitude determination for small satellites using GPS signal-to-noise ratio

    NASA Astrophysics Data System (ADS)

    Peters, Daniel

    An embedded system for GPS-based attitude determination (AD) using signal-to-noise (SNR) measurements was developed for CubeSat applications. The design serves as an evaluation testbed for conducting ground based experiments using various computational methods and antenna types to determine the optimum AD accuracy. Raw GPS data is also stored to non-volatile memory for downloading and post analysis. Two low-power microcontrollers are used for processing and to display information on a graphic screen for real-time performance evaluations. A new parallel inter-processor communication protocol was developed that is faster and uses less power than existing standard protocols. A shorted annular patch (SAP) antenna was fabricated for the initial ground-based AD experiments with the testbed. Static AD estimations with RMS errors in the range of 2.5° to 4.8° were achieved over a range of off-zenith attitudes.

  10. SU-F-J-65: Prediction of Patient Setup Errors and Errors in the Calibration Curve from Prompt Gamma Proton Range Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Albert, J; Labarbe, R; Sterpin, E

    2016-06-15

    Purpose: To understand the extent to which the prompt gamma camera measurements can be used to predict the residual proton range due to setup errors and errors in the calibration curve. Methods: We generated ten variations on a default calibration curve (CC) and ten corresponding range maps (RM). Starting with the default RM, we chose a square array of N beamlets, which were then rotated by a random angle θ and shifted by a random vector s. We added a 5% distal Gaussian noise to each beamlet in order to introduce discrepancies that exist between the ranges predicted from the prompt gamma measurements and those simulated with Monte Carlo algorithms. For each RM, s, θ, along with an offset u in the CC, were optimized using a simple Euclidean distance between the default ranges and the ranges produced by the given RM. Results: The application of our method led to a maximal overrange of 2.0mm and underrange of 0.6mm on average. Compared to the situations where s, θ, and u were ignored, these values were larger: 2.1mm and 4.3mm. In order to quantify the need for setup error corrections, we also performed computations in which u was corrected for, but s and θ were not. This yielded: 3.2mm and 3.2mm. The average computation time for 170 beamlets was 65 seconds. Conclusion: These results emphasize the necessity to correct for setup errors and the errors in the calibration curve. The simplicity and speed of our method make it a good candidate for being implemented as a tool for in-room adaptive therapy. This work also demonstrates that the prompt gamma range measurements can indeed be useful in the effort to reduce range errors. Given these results, and barring further refinements, this approach is a promising step towards adaptive proton radiotherapy.
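
    The optimization described in this abstract can be sketched in a reduced form. The code below is a hedged 1-D simplification, not the authors' implementation: it recovers only a lateral shift s and a calibration-curve offset u (no rotation) by minimizing the Euclidean distance between the default range map and noisy "measured" ranges. The range-map function, noise level, and all names are assumptions.

```python
# Hedged 1-D sketch: recover a setup shift s and a CC offset u from
# per-beamlet range measurements by least-squares matching.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(2)
x = np.linspace(-40.0, 40.0, 13)                              # beamlet positions [mm]
def range_map(x):
    return 150.0 + 0.8 * x + 5.0 * np.sin(x / 10.0)           # default range map [mm]

true_s, true_u = 3.5, -2.0                                    # assumed true shift and offset [mm]
measured = range_map(x - true_s) + true_u + rng.normal(0.0, 1.0, x.size)

def cost(params):
    s, u = params
    return np.sum((measured - (range_map(x - s) + u)) ** 2)   # Euclidean distance squared

fit = minimize(cost, x0=[0.0, 0.0], method="Nelder-Mead")
s_hat, u_hat = fit.x
residual = measured - (range_map(x - s_hat) + u_hat)
print(f"estimated shift s = {s_hat:.2f} mm, offset u = {u_hat:.2f} mm")
print(f"max residual over/under-range: {residual.max():.2f} / {residual.min():.2f} mm")
```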

  11. Semantic error patterns on the Boston Naming Test in normal aging, amnestic mild cognitive impairment, and mild Alzheimer's disease: is there semantic disruption?

    PubMed

    Balthazar, Marcio Luiz Figueredo; Cendes, Fernando; Damasceno, Benito Pereira

    2008-11-01

    Naming difficulty is common in Alzheimer's disease (AD), but the nature of this problem is not well established. The authors investigated the presence of semantic breakdown and the pattern of general and semantic errors in patients with mild AD, patients with amnestic mild cognitive impairment (aMCI), and normal controls by examining their spontaneous answers on the Boston Naming Test (BNT) and verifying whether they needed or were benefited by semantic and phonemic cues. The errors in spontaneous answers were classified in four mutually exclusive categories (semantic errors, visual paragnosia, phonological errors, and omission errors), and the semantic errors were further subclassified as coordinate, superordinate, and circumlocutory. Patients with aMCI performed normally on the BNT and needed fewer semantic and phonemic cues than patients with mild AD. After semantic cues, subjects with aMCI and control subjects gave more correct answers than patients with mild AD, but after phonemic cues, there was no difference between the three groups, suggesting that the low performance of patients with AD cannot be completely explained by semantic breakdown. Patterns of spontaneous naming errors and subtypes of semantic errors were similar in the three groups, with decreasing error frequency from coordinate to superordinate to circumlocutory subtypes.

  12. Financial errors in dementia: Testing a neuroeconomic conceptual framework

    PubMed Central

    Chiong, Winston; Hsu, Ming; Wudka, Danny; Miller, Bruce L.; Rosen, Howard J.

    2013-01-01

    Financial errors by patients with dementia can have devastating personal and family consequences. We developed and evaluated a neuroeconomic conceptual framework for understanding financial errors across different dementia syndromes, using a systematic, retrospective, blinded chart review of demographically-balanced cohorts of patients with Alzheimer’s disease (AD, n=100) and behavioral variant frontotemporal dementia (bvFTD, n=50). Reviewers recorded specific reports of financial errors according to a conceptual framework identifying patient cognitive and affective characteristics, and contextual influences, conferring susceptibility to each error. Specific financial errors were reported for 49% of AD and 70% of bvFTD patients (p = 0.012). AD patients were more likely than bvFTD patients to make amnestic errors (p< 0.001), while bvFTD patients were more likely to spend excessively (p = 0.004) and to exhibit other behaviors consistent with diminished sensitivity to losses and other negative outcomes (p< 0.001). Exploratory factor analysis identified a social/affective vulnerability factor associated with errors in bvFTD, and a cognitive vulnerability factor associated with errors in AD. Our findings highlight the frequency and functional importance of financial errors as symptoms of AD and bvFTD. A conceptual model derived from neuroeconomic literature identifies factors that influence vulnerability to different types of financial error in different dementia syndromes, with implications for early diagnosis and subsequent risk prevention. PMID:23550884

  13. Simulation of aspheric tolerance with polynomial fitting

    NASA Astrophysics Data System (ADS)

    Li, Jing; Cen, Zhaofeng; Li, Xiaotong

    2018-01-01

    The shape of an aspheric lens changes as a result of machining errors, which alters the optical transfer function and degrades image quality. At present, there is no universally recognized tolerance criterion for aspheric surfaces. To study the influence of aspheric tolerances on the optical transfer function, polynomial-fitted tolerances are allocated to the aspheric surface and the imaging simulation is carried out in optical imaging software. The analysis is based on a set of aspheric imaging systems. An error is generated within a given PV (peak-to-valley) range, expressed as a Zernike polynomial, and added to the aspheric surface as a tolerance term. Through optical software analysis, the MTF of the optical system is obtained and used as the main evaluation index: the effect of the added error on the system MTF is checked against the requirements for the current PV value, the PV value is changed, and the procedure is repeated until the maximum acceptable PV value is found. In accordance with actual machining practice, errors of various shapes (M-type, W-type, and random) are considered. The method provides a reference for tolerancing in practical freeform-surface fabrication.

  14. Error behaviors associated with loss of competency in Alzheimer's disease.

    PubMed

    Marson, D C; Annis, S M; McInturff, B; Bartolucci, A; Harrell, L E

    1999-12-10

    To investigate qualitative behavioral changes associated with declining medical decision-making capacity (competency) in patients with AD. Qualitative measures can yield clinical information about functional changes in neurologic disease not available through quantitative measures. Normal older controls (n = 21) and patients with mild and moderate probable AD (n = 72) were compared using a standardized competency measure and neuropsychological measures. A system of 16 qualitative error scores representing conceptual domains of language, executive dysfunction, affective dysfunction, and compensatory responses was used to analyze errors produced on the competency measure. Patterns of errors were examined across groups. Relationships between error behaviors and competency performance were determined, and neurocognitive correlates of specific error behaviors were identified. AD patients demonstrated more miscomprehension, factual confusion, intrusions, incoherent responses, nonresponsive answers, loss of task, and delegation than controls. Errors in the executive domain (loss of task, nonresponsive answer, and loss of detachment) were key predictors of declining competency performance by AD patients. Neuropsychological analyses in the AD group generally confirmed the conceptual domain assignments of the qualitative scores. Loss of task, nonresponsive answers, and loss of detachment were key behavioral changes associated with declining competency of AD patients and with neurocognitive measures of executive dysfunction. These findings support the growing linkage between executive dysfunction and competency loss.

  15. Ad hoc instrumentation methods in ecological studies produce highly biased temperature measurements

    USGS Publications Warehouse

    Terando, Adam J.; Youngsteadt, Elsa; Meineke, Emily K.; Prado, Sara G.

    2017-01-01

    In light of global climate change, ecological studies increasingly address effects of temperature on organisms and ecosystems. To measure air temperature at biologically relevant scales in the field, ecologists often use small, portable temperature sensors. Sensors must be shielded from solar radiation to provide accurate temperature measurements, but our review of 18 years of ecological literature indicates that shielding practices vary across studies (when reported at all), and that ecologists often invent and construct ad hoc radiation shields without testing their efficacy. We performed two field experiments to examine the accuracy of temperature observations from three commonly used portable data loggers (HOBO Pro, HOBO Pendant, and iButton hygrochron) housed in manufactured Gill shields or ad hoc, custom‐fabricated shields constructed from everyday materials such as plastic cups. We installed this sensor array (five replicates of 11 sensor‐shield combinations) at weather stations located in open and forested sites. HOBO Pro sensors with Gill shields were the most accurate devices, with a mean absolute error of 0.2°C relative to weather stations at each site. Error in ad hoc shield treatments ranged from 0.8 to 3.0°C, with the largest errors at the open site. We then deployed one replicate of each sensor‐shield combination at five sites that varied in the amount of urban impervious surface cover, which presents a further shielding challenge. Bias in sensors paired with ad hoc shields increased by up to 0.7°C for every 10% increase in impervious surface. Our results indicate that, due to variable shielding practices, the ecological literature likely includes highly biased temperature data that cannot be compared directly across studies. If left unaddressed, these errors will hinder efforts to predict biological responses to climate change. We call for greater standardization in how temperature data are recorded in the field, handled in analyses, and reported in publications.
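
    The error metrics reported in this study are straightforward to reproduce once paired sensor and reference readings exist. The sketch below is not the authors' analysis: it computes the mean absolute error of two hypothetical sensor-shield combinations against a reference station and fits the linear trend of site bias versus impervious surface cover. All data values are invented; only the metrics mirror the abstract.

```python
# Hedged sketch: shield-related temperature bias vs. a reference station,
# and bias as a function of impervious surface cover (synthetic data).
import numpy as np

rng = np.random.default_rng(3)
reference = 20.0 + 5.0 * np.sin(np.linspace(0, 6 * np.pi, 500))   # reference air temp [C]

def mean_absolute_error(sensor, reference):
    return np.mean(np.abs(sensor - reference))

gill_shield = reference + rng.normal(0.0, 0.2, reference.size)         # well shielded
cup_shield  = reference + 1.5 + rng.normal(0.0, 0.8, reference.size)   # ad hoc shield

print("MAE, Gill shield  :", round(mean_absolute_error(gill_shield, reference), 2), "C")
print("MAE, ad hoc shield:", round(mean_absolute_error(cup_shield, reference), 2), "C")

# Bias as a function of impervious surface cover (five hypothetical sites)
impervious = np.array([0.0, 10.0, 25.0, 40.0, 60.0])     # % cover
site_bias  = np.array([0.9, 1.5, 2.6, 3.7, 5.2])         # mean bias [C], invented
slope, intercept = np.polyfit(impervious, site_bias, 1)
print(f"bias increase per 10% impervious cover: {10 * slope:.2f} C")
```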

  16. Characterizing Accuracy and Precision of Glucose Sensors and Meters

    PubMed Central

    2014-01-01

    There is need for a method to describe precision and accuracy of glucose measurement as a smooth continuous function of glucose level rather than as a step function for a few discrete ranges of glucose. We propose and illustrate a method to generate a "Glucose Precision Profile" showing absolute relative deviation (ARD) and/or %CV versus glucose level to better characterize measurement errors at any glucose level. We examine the relationship between glucose measured by test and comparator methods using linear regression. We examine bias by plotting deviation = (test – comparator method) versus glucose level. We compute the deviation, absolute deviation (AD), ARD, and standard deviation (SD) for each data pair. We utilize curve smoothing procedures to minimize the effects of random sampling variability to facilitate identification and display of the underlying relationships between ARD or %CV and glucose level. AD, ARD, SD, and %CV display smooth continuous relationships versus glucose level. Estimates of MARD and %CV are subject to relatively large errors in the hypoglycemic range due in part to a markedly nonlinear relationship with glucose level and in part to the limited number of observations in the hypoglycemic range. The curvilinear relationships of ARD and %CV versus glucose level are helpful when characterizing and comparing the precision and accuracy of glucose sensors and meters. PMID:25037194
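
    A precision profile of the kind proposed here amounts to computing the absolute relative deviation for every test/comparator pair and smoothing it as a continuous function of glucose level. The sketch below is a hedged illustration using synthetic data and LOWESS smoothing; the smoothing span, error model, and variable names are assumptions, not the paper's method.

```python
# Hedged sketch of a "Glucose Precision Profile": per-pair ARD smoothed
# against glucose level with LOWESS (synthetic meter data).
import numpy as np
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(4)
comparator = rng.uniform(40, 400, 1500)                        # reference glucose [mg/dL]
test = comparator + rng.normal(0.0, 5.0 + 0.05 * comparator)   # meter reading with error

deviation = test - comparator
ard = 100.0 * np.abs(deviation) / comparator                   # absolute relative deviation [%]

# Smooth ARD versus glucose level to obtain a continuous precision profile
profile = lowess(ard, comparator, frac=0.3, return_sorted=True)

print(f"MARD over all pairs: {ard.mean():.1f}%")
for g in (50, 100, 200, 300):
    idx = np.argmin(np.abs(profile[:, 0] - g))
    print(f"smoothed ARD near {g} mg/dL: {profile[idx, 1]:.1f}%")
```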

  17. Voice recognition software can be used for scientific articles.

    PubMed

    Pommergaard, Hans-Christian; Huang, Chenxi; Burcharth, Jacob; Rosenberg, Jacob

    2015-02-01

    Dictation of scientific articles has been recognised as an efficient method for producing high-quality, first article drafts. However, standardised transcription service by a secretary may not be available for all researchers and voice recognition software (VRS) may therefore be an alternative. The purpose of this study was to evaluate the out-of-the-box accuracy of VRS. Eleven young researchers without dictation experience dictated the first draft of their own scientific article after thorough preparation according to a pre-defined schedule. The dictate transcribed by VRS was compared with the same dictate transcribed by an experienced research secretary, and the effect of adding words to the vocabulary of the VRS was investigated. The number of errors per hundred words was used as outcome. Furthermore, three experienced researchers assessed the subjective readability using a Likert scale (0-10). Dragon Nuance Premium version 12.5 was used as VRS. The median number of errors per hundred words was 18 (range: 8.5-24.3), which improved when 15,000 words were added to the vocabulary. Subjective readability assessment showed that the texts were understandable with a median score of five (range: 3-9), which was improved with the addition of 5,000 words. The out-of-the-box performance of VRS was acceptable and improved after additional words were added. Further studies are needed to investigate the effect of additional software accuracy training.
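
    The "errors per hundred words" outcome used in this study can be computed by aligning the software transcript against the secretary's transcript with a word-level edit distance. The sketch below is a standard word-error-rate calculation, not the authors' scoring procedure; the example sentences are invented.

```python
# Hedged sketch: transcription errors per hundred words via a word-level
# edit distance (substitutions + insertions + deletions).
def word_errors(reference: str, hypothesis: str) -> int:
    ref, hyp = reference.split(), hypothesis.split()
    # dp[i][j] = minimum edits to turn ref[:i] into hyp[:j]
    dp = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        dp[i][0] = i
    for j in range(len(hyp) + 1):
        dp[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            dp[i][j] = min(dp[i - 1][j] + 1,         # deletion
                           dp[i][j - 1] + 1,         # insertion
                           dp[i - 1][j - 1] + cost)  # substitution
    return dp[len(ref)][len(hyp)]

def errors_per_hundred_words(reference: str, hypothesis: str) -> float:
    return 100.0 * word_errors(reference, hypothesis) / len(reference.split())

secretary = "the median number of errors per hundred words was eighteen"
vrs       = "the medium number of airs per hundred words was eighteen"
print(round(errors_per_hundred_words(secretary, vrs), 1))   # 20.0
```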

  18. Energy and Quality-Aware Multimedia Signal Processing

    NASA Astrophysics Data System (ADS)

    Emre, Yunus

    Today's mobile devices have to support computation-intensive multimedia applications with a limited energy budget. In this dissertation, we present architecture level and algorithm-level techniques that reduce energy consumption of these devices with minimal impact on system quality. First, we present novel techniques to mitigate the effects of SRAM memory failures in JPEG2000 implementations operating in scaled voltages. We investigate error control coding schemes and propose an unequal error protection scheme tailored for JPEG2000 that reduces overhead without affecting the performance. Furthermore, we propose algorithm-specific techniques for error compensation that exploit the fact that in JPEG2000 the discrete wavelet transform outputs have larger values for low frequency subband coefficients and smaller values for high frequency subband coefficients. Next, we present use of voltage overscaling to reduce the data-path power consumption of JPEG codecs. We propose an algorithm-specific technique which exploits the characteristics of the quantized coefficients after zig-zag scan to mitigate errors introduced by aggressive voltage scaling. Third, we investigate the effect of reducing dynamic range for datapath energy reduction. We analyze the effect of truncation error and propose a scheme that estimates the mean value of the truncation error during the pre-computation stage and compensates for this error. Such a scheme is very effective for reducing the noise power in applications that are dominated by additions and multiplications such as FIR filter and transform computation. We also present a novel sum of absolute difference (SAD) scheme that is based on most significant bit truncation. The proposed scheme exploits the fact that most of the absolute difference (AD) calculations result in small values, and most of the large AD values do not contribute to the SAD values of the blocks that are selected. Such a scheme is highly effective in reducing the energy consumption of motion estimation and intra-prediction kernels in video codecs. Finally, we present several hybrid energy-saving techniques based on combination of voltage scaling, computation reduction and dynamic range reduction that further reduce the energy consumption while keeping the performance degradation very low. For instance, a combination of computation reduction and dynamic range reduction for Discrete Cosine Transform shows on average, 33% to 46% reduction in energy consumption while incurring only 0.5dB to 1.5dB loss in PSNR.
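
    The most-significant-bit truncation idea behind the proposed SAD scheme can be shown in a few lines. The sketch below is a software illustration, not the hardware design from the dissertation: it zeroes the low-order bits of each pixel before accumulating absolute differences, and the block size and bit widths are assumptions.

```python
# Hedged sketch of a sum-of-absolute-differences (SAD) kernel computed on
# MSB-truncated pixels, compared against full-precision SAD.
import numpy as np

def truncated_sad(block_a: np.ndarray, block_b: np.ndarray,
                  kept_msbs: int = 4, bit_depth: int = 8) -> int:
    """SAD computed on pixels truncated to their top `kept_msbs` bits."""
    shift = bit_depth - kept_msbs
    a = (block_a.astype(np.int32) >> shift) << shift   # zero out low-order bits
    b = (block_b.astype(np.int32) >> shift) << shift
    return int(np.abs(a - b).sum())

rng = np.random.default_rng(5)
current = rng.integers(0, 256, (16, 16), dtype=np.uint8)
reference = np.clip(current.astype(np.int32) + rng.integers(-8, 9, (16, 16)), 0, 255)

full = int(np.abs(current.astype(np.int32) - reference).sum())
approx = truncated_sad(current, reference.astype(np.uint8), kept_msbs=4)
print("full-precision SAD:", full)
print("MSB-truncated SAD :", approx)   # cheaper arithmetic, small ranking error
```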

  19. Bulk locality and quantum error correction in AdS/CFT

    NASA Astrophysics Data System (ADS)

    Almheiri, Ahmed; Dong, Xi; Harlow, Daniel

    2015-04-01

    We point out a connection between the emergence of bulk locality in AdS/CFT and the theory of quantum error correction. Bulk notions such as Bogoliubov transformations, location in the radial direction, and the holographic entropy bound all have natural CFT interpretations in the language of quantum error correction. We also show that the question of whether bulk operator reconstruction works only in the causal wedge or all the way to the extremal surface is related to the question of whether or not the quantum error correcting code realized by AdS/CFT is also a "quantum secret sharing scheme", and suggest a tensor network calculation that may settle the issue. Interestingly, the version of quantum error correction which is best suited to our analysis is the somewhat nonstandard "operator algebra quantum error correction" of Beny, Kempf, and Kribs. Our proposal gives a precise formulation of the idea of "subregion-subregion" duality in AdS/CFT, and clarifies the limits of its validity.

  20. Pencil beam proton radiography using a multilayer ionization chamber

    NASA Astrophysics Data System (ADS)

    Farace, Paolo; Righetto, Roberto; Meijers, Arturs

    2016-06-01

    A pencil beam proton radiography (PR) method, using a commercial multilayer ionization chamber (MLIC) integrated with a treatment planning system (TPS) was developed. A Giraffe (IBA Dosimetry) MLIC (±0.5 mm accuracy) was used to obtain pencil beam PR by delivering spots uniformly positioned at a 5.0 mm distance in a 9  ×  9 square of spots. PRs of an electron-density (with tissue-equivalent inserts) phantom and a head phantom were acquired. The integral depth dose (IDD) curves of the delivered spots were computed by the TPS in a volume of water simulating the MLIC, and virtually added to the CT at the exit side of the phantoms. For each spot, measured and calculated IDD were overlapped in order to compute a map of range errors. On the head-phantom, the maximum dose from PR acquisition was estimated. Additionally, on the head phantom the impact on the range errors map was estimated in case of a 1 mm position misalignment. In the electron-density phantom, range errors were within 1 mm in the soft-tissue rods, but greater in the dense-rod. In the head-phantom the range errors were  -0.9  ±  2.7 mm on the whole map and within 1 mm in the brain area. On both phantoms greater errors were observed at inhomogeneity interfaces, due to sensitivity to small misalignment, and inaccurate TPS dose computation. The effect of the 1 mm misalignment was clearly visible on the range error map and produced an increased spread of range errors (-1.0  ±  3.8 mm on the whole map). The dose to the patient for such PR acquisitions would be acceptable as the maximum dose to the head phantom was  <2cGyE. By the described 2D method, allowing to discriminate misalignments, range verification can be performed in selected areas to implement an in vivo quality assurance program.
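
    The per-spot range error described here comes from overlapping a measured integral depth dose (IDD) with the TPS-calculated one. The sketch below is a hedged toy version of that step for a single spot: it finds the depth shift that best aligns two synthetic IDD curves. The curve shape, shift search, and names are assumptions, not the published analysis.

```python
# Hedged sketch: estimate the range error of one spot as the depth shift
# that best overlaps a measured IDD with the TPS-calculated IDD.
import numpy as np
from scipy.optimize import minimize_scalar

depth = np.linspace(0, 200, 401)                        # depth in water [mm]

def idd(depth, range_mm, width=6.0):
    """Toy IDD: a gently rising plateau ending in a smooth distal fall-off."""
    plateau = 1.0 + 0.004 * depth
    falloff = 0.5 * (1 - np.tanh((depth - range_mm) / width))
    return plateau * falloff

calculated = idd(depth, range_mm=150.0)                 # TPS prediction through the CT
measured = idd(depth, range_mm=152.3)                   # MLIC measurement (2.3 mm deeper)

def mismatch(shift):
    shifted = np.interp(depth, depth + shift, calculated)   # calculated curve moved deeper
    return np.sum((measured - shifted) ** 2)

result = minimize_scalar(mismatch, bounds=(-20.0, 20.0), method="bounded")
print(f"estimated range error for this spot: {result.x:+.1f} mm")
```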

  1. Pencil beam proton radiography using a multilayer ionization chamber.

    PubMed

    Farace, Paolo; Righetto, Roberto; Meijers, Arturs

    2016-06-07

    A pencil beam proton radiography (PR) method, using a commercial multilayer ionization chamber (MLIC) integrated with a treatment planning system (TPS) was developed. A Giraffe (IBA Dosimetry) MLIC (±0.5 mm accuracy) was used to obtain pencil beam PR by delivering spots uniformly positioned at a 5.0 mm distance in a 9  ×  9 square of spots. PRs of an electron-density (with tissue-equivalent inserts) phantom and a head phantom were acquired. The integral depth dose (IDD) curves of the delivered spots were computed by the TPS in a volume of water simulating the MLIC, and virtually added to the CT at the exit side of the phantoms. For each spot, measured and calculated IDD were overlapped in order to compute a map of range errors. On the head-phantom, the maximum dose from PR acquisition was estimated. Additionally, on the head phantom the impact on the range errors map was estimated in case of a 1 mm position misalignment. In the electron-density phantom, range errors were within 1 mm in the soft-tissue rods, but greater in the dense-rod. In the head-phantom the range errors were  -0.9  ±  2.7 mm on the whole map and within 1 mm in the brain area. On both phantoms greater errors were observed at inhomogeneity interfaces, due to sensitivity to small misalignment, and inaccurate TPS dose computation. The effect of the 1 mm misalignment was clearly visible on the range error map and produced an increased spread of range errors (-1.0  ±  3.8 mm on the whole map). The dose to the patient for such PR acquisitions would be acceptable as the maximum dose to the head phantom was  <2cGyE. By the described 2D method, allowing to discriminate misalignments, range verification can be performed in selected areas to implement an in vivo quality assurance program.

  2. Characteristics of number transcoding errors of Chinese- versus English-speaking Alzheimer's disease patients.

    PubMed

    Ting, Simon Kang Seng; Chia, Pei Shi; Kwek, Kevin; Tan, Wilnard; Hameed, Shahul

    2016-10-01

    Number processing disorder is an acquired deficit in mathematical skills commonly observed in Alzheimer's disease (AD), usually as a consequence of neurological dysfunction. Common impairments include syntactic errors (800012 instead of 8012) and intrusion errors (8 thousand and 12 instead of eight thousand and twelve) in number transcoding tasks. This study aimed to understand the characterization of AD-related number processing disorder within an alphabetic language (English) and ideographical language (Chinese), and to investigate the differences between alphabetic and ideographic language processing. Chinese-speaking AD patients were hypothesized to make significantly more intrusion errors than English-speaking ones, due to the ideographical nature of both Chinese characters and Arabic numbers. A simplified number transcoding test derived from EC301 battery was administered to AD patients. Chinese-speaking AD patients made significantly more intrusion errors (p = 0.001) than English speakers. This demonstrates that number processing in an alphabetic language such as English does not function in the same manner as in Chinese. The impaired inhibition capability likely contributes to such observations due to its competitive lexical representation in brain for Chinese speakers.

  3. Enhancing the use of Argos satellite data for home range and long distance migration studies of marine animals.

    PubMed

    Hoenner, Xavier; Whiting, Scott D; Hindell, Mark A; McMahon, Clive R

    2012-01-01

    Accurately quantifying animals' spatial utilisation is critical for conservation, but has long remained an elusive goal due to technological impediments. The Argos telemetry system has been extensively used to remotely track marine animals, however location estimates are characterised by substantial spatial error. State-space models (SSM) constitute a robust statistical approach to refine Argos tracking data by accounting for observation errors and stochasticity in animal movement. Despite their wide use in ecology, few studies have thoroughly quantified the error associated with SSM predicted locations and no research has assessed their validity for describing animal movement behaviour. We compared home ranges and migratory pathways of seven hawksbill sea turtles (Eretmochelys imbricata) estimated from (a) highly accurate Fastloc GPS data and (b) locations computed using common Argos data analytical approaches. Argos 68(th) percentile error was <1 km for LC 1, 2, and 3 while markedly less accurate (>4 km) for LC ≤ 0. Argos error structure was highly longitudinally skewed and was, for all LC, adequately modelled by a Student's t distribution. Both habitat use and migration routes were best recreated using SSM locations post-processed by re-adding good Argos positions (LC 1, 2 and 3) and filtering terrestrial points (mean distance to migratory tracks ± SD = 2.2 ± 2.4 km; mean home range overlap and error ratio = 92.2% and 285.6 respectively). This parsimonious and objective statistical procedure however still markedly overestimated true home range sizes, especially for animals exhibiting restricted movements. Post-processing SSM locations nonetheless constitutes the best analytical technique for remotely sensed Argos tracking data and we therefore recommend using this approach to rework historical Argos datasets for better estimation of animal spatial utilisation for research and evidence-based conservation purposes.
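
    The finding that Argos location error is adequately modelled by a Student's t distribution can be reproduced, in outline, by fitting that distribution to the longitudinal component of GPS-minus-Argos position differences. The sketch below uses synthetic, heavy-tailed errors and scipy's maximum-likelihood fit; the error sample and the 68th-percentile summary are illustrative assumptions, not the paper's data.

```python
# Hedged sketch: fit a Student's t distribution to synthetic longitudinal
# Argos location errors and report a 68th-percentile error figure.
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)
# Heavy-tailed synthetic longitudinal errors in km (mixture mimics outliers)
errors_km = np.concatenate([rng.normal(0.0, 0.8, 900),
                            rng.normal(0.0, 5.0, 100)])

df, loc, scale = stats.t.fit(errors_km)
print(f"fitted t distribution: df={df:.1f}, loc={loc:.2f} km, scale={scale:.2f} km")

# 68th percentile of absolute error, comparable to the LC-specific figures quoted
p68 = np.percentile(np.abs(errors_km), 68)
print(f"68th percentile of |error|: {p68:.2f} km")
```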

  4. Application of Machine Learning in Postural Control Kinematics for the Diagnosis of Alzheimer's Disease

    PubMed Central

    Yelshyna, Darya; Bicho, Estela

    2016-01-01

    The use of wearable devices to study gait and postural control is a growing field on neurodegenerative disorders such as Alzheimer's disease (AD). In this paper, we investigate if machine-learning classifiers offer the discriminative power for the diagnosis of AD based on postural control kinematics. We compared Support Vector Machines (SVMs), Multiple Layer Perceptrons (MLPs), Radial Basis Function Neural Networks (RBNs), and Deep Belief Networks (DBNs) on 72 participants (36 AD patients and 36 healthy subjects) exposed to seven increasingly difficult postural tasks. The decisional space was composed of 18 kinematic variables (adjusted for age, education, height, and weight), with or without neuropsychological evaluation (Montreal cognitive assessment (MoCA) score), top ranked in an error incremental analysis. Classification results were based on threefold cross validation of 50 independent and randomized runs sets: training (50%), test (40%), and validation (10%). Having a decisional space relying solely on postural kinematics, accuracy of AD diagnosis ranged from 71.7 to 86.1%. Adding the MoCA variable, the accuracy ranged between 91 and 96.6%. MLP classifier achieved top performance in both decisional spaces. Having comprehended the interdynamic interaction between postural stability and cognitive performance, our results endorse machine-learning models as a useful tool for computer-aided diagnosis of AD based on postural control kinematics. PMID:28074090

  5. Application of Machine Learning in Postural Control Kinematics for the Diagnosis of Alzheimer's Disease.

    PubMed

    Costa, Luís; Gago, Miguel F; Yelshyna, Darya; Ferreira, Jaime; David Silva, Hélder; Rocha, Luís; Sousa, Nuno; Bicho, Estela

    2016-01-01

    The use of wearable devices to study gait and postural control is a growing field on neurodegenerative disorders such as Alzheimer's disease (AD). In this paper, we investigate if machine-learning classifiers offer the discriminative power for the diagnosis of AD based on postural control kinematics. We compared Support Vector Machines (SVMs), Multiple Layer Perceptrons (MLPs), Radial Basis Function Neural Networks (RBNs), and Deep Belief Networks (DBNs) on 72 participants (36 AD patients and 36 healthy subjects) exposed to seven increasingly difficult postural tasks. The decisional space was composed of 18 kinematic variables (adjusted for age, education, height, and weight), with or without neuropsychological evaluation (Montreal cognitive assessment (MoCA) score), top ranked in an error incremental analysis. Classification results were based on threefold cross validation of 50 independent and randomized runs sets: training (50%), test (40%), and validation (10%). Having a decisional space relying solely on postural kinematics, accuracy of AD diagnosis ranged from 71.7 to 86.1%. Adding the MoCA variable, the accuracy ranged between 91 and 96.6%. MLP classifier achieved top performance in both decisional spaces. Having comprehended the interdynamic interaction between postural stability and cognitive performance, our results endorse machine-learning models as a useful tool for computer-aided diagnosis of AD based on postural control kinematics.
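
    The classifier comparison reported above follows a standard supervised-learning workflow: standardize the kinematic features, train several model families, and evaluate them by cross-validation. The sketch below is a hedged stand-in using synthetic features and scikit-learn; the 72-sample, 18-feature shape and threefold cross-validation follow the abstract, while the data values, model hyperparameters, and the use of only two of the four model families are assumptions.

```python
# Hedged sketch: compare an SVM and an MLP on synthetic stand-ins for the
# postural-kinematics features with threefold cross-validation.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.neural_network import MLPClassifier

# 72 participants, 18 kinematic variables, binary AD / control label
X, y = make_classification(n_samples=72, n_features=18, n_informative=8,
                           random_state=0)

models = {
    "SVM": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
    "MLP": make_pipeline(StandardScaler(),
                         MLPClassifier(hidden_layer_sizes=(20,), max_iter=2000,
                                       random_state=0)),
}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=3)
    print(f"{name}: mean accuracy {scores.mean():.2f} (3-fold CV)")
```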

  6. Anatomical frame identification and reconstruction for repeatable lower limb joint kinematics estimates.

    PubMed

    Donati, Marco; Camomilla, Valentina; Vannozzi, Giuseppe; Cappozzo, Aurelio

    2008-07-19

    The quantitative description of joint mechanics during movement requires the reconstruction of the position and orientation of selected anatomical axes with respect to a laboratory reference frame. These anatomical axes are identified through an ad hoc anatomical calibration procedure and their position and orientation are reconstructed relative to bone-embedded frames normally derived from photogrammetric marker positions and used to describe movement. The repeatability of anatomical calibration, both within and between subjects, is crucial for kinematic and kinetic end results. This paper illustrates an anatomical calibration approach, which does not require anatomical landmark manual palpation, described in the literature to be prone to great indeterminacy. This approach allows for the estimate of subject-specific bone morphology and automatic anatomical frame identification. The experimental procedure consists of digitization through photogrammetry of superficial points selected over the areas of the bone covered with a thin layer of soft tissue. Information concerning the location of internal anatomical landmarks, such as a joint center obtained using a functional approach, may also be added. The data thus acquired are matched with the digital model of a deformable template bone. Consequently, the repeatability of pelvis, knee and hip joint angles is determined. Five volunteers, each of whom performed five walking trials, and six operators, with no specific knowledge of anatomy, participated in the study. Descriptive statistics analysis was performed during upright posture, showing a limited dispersion of all angles (less than 3 deg) except for hip and knee internal-external rotation (6 deg and 9 deg, respectively). During level walking, the ratio of inter-operator and inter-trial error and an absolute subject-specific repeatability were assessed. For pelvic and hip angles, and knee flexion-extension, the inter-operator error was equal to the inter-trial error, with the absolute error ranging from 0.1 deg to 0.9 deg. Knee internal-external rotation and ab-adduction showed, on average, inter-operator errors, which were 8% and 28% greater than the relevant inter-trial errors, respectively. The absolute error was in the range 0.9-2.9 deg.

  7. Thermometric titration of cadmium with sodium diethyldithiocarbamate, with oxidation by hydrogen peroxide as indicator reaction.

    PubMed

    Hattori, T; Yoshida, H

    1987-08-01

    A new method of end-point indication is described for thermometric titration of cadmium with sodium diethyldithiocarbamate (DDTC). It is based on the redox reaction between hydrogen peroxide added to the system before titration, and the first excess of DDTC. Amounts of cadmium in the range 10-50 μmoles are titrated within 1% error.

  8. What Are Error Rates for Classifying Teacher and School Performance Using Value-Added Models?

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2013-01-01

    This article addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using a realistic performance measurement system scheme based on hypothesis testing, the authors develop error rate formulas based on ordinary least squares and…

  9. Template-based automatic breast segmentation on MRI by excluding the chest region

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lin, Muqing; Chen, Jeon-Hor; Wang, Xiaoyong

    2013-12-15

    Purpose: Methods for quantification of breast density on MRI using semiautomatic approaches are commonly used. In this study, the authors report on a fully automatic chest template-based method. Methods: Nonfat-suppressed breast MR images from 31 healthy women were analyzed. Among them, one case was randomly selected and used as the template, and the remaining 30 cases were used for testing. Unlike most model-based breast segmentation methods that use the breast region as the template, the chest body region on a middle slice was used as the template. Within the chest template, three body landmarks (thoracic spine and bilateral boundary of the pectoral muscle) were identified for performing the initial V-shape cut to determine the posterior lateral boundary of the breast. The chest template was mapped to each subject's image space to obtain a subject-specific chest model for exclusion. On the remaining image, the chest wall muscle was identified and excluded to obtain clean breast segmentation. The chest and muscle boundaries determined on the middle slice were used as the reference for the segmentation of adjacent slices, and the process continued superiorly and inferiorly until all 3D slices were segmented. The segmentation results were evaluated by an experienced radiologist to mark voxels that were wrongly included or excluded for error analysis. Results: The breast volumes measured by the proposed algorithm were very close to the radiologist's corrected volumes, showing a % difference ranging from 0.01% to 3.04% in 30 tested subjects with a mean of 0.86% ± 0.72%. The total error was calculated by adding the inclusion and the exclusion errors (so they did not cancel each other out), which ranged from 0.05% to 6.75% with a mean of 3.05% ± 1.93%. The fibroglandular tissue segmented within the breast region determined by the algorithm and the radiologist were also very close, showing a % difference ranging from 0.02% to 2.52% with a mean of 1.03% ± 1.03%. The total error by adding the inclusion and exclusion errors ranged from 0.16% to 11.8%, with a mean of 2.89% ± 2.55%. Conclusions: The automatic chest template-based breast MRI segmentation method worked well for cases with different body and breast shapes and different density patterns. Compared to the radiologist-established truth, the mean difference in segmented breast volume was approximately 1%, and the total error by considering the additive inclusion and exclusion errors was approximately 3%. This method may provide a reliable tool for MRI-based segmentation of breast density.
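
    The error reporting in this study keeps inclusion and exclusion errors separate and sums them so they cannot cancel. The sketch below shows one plausible way to compute those quantities from binary masks; the tiny synthetic masks and the exact normalization (by the radiologist-corrected volume) are assumptions for illustration, not the paper's code.

```python
# Hedged sketch: percent volume difference and combined inclusion +
# exclusion error between an automatic mask and a corrected truth mask.
import numpy as np

def segmentation_errors(auto_mask: np.ndarray, truth_mask: np.ndarray):
    auto = auto_mask.astype(bool)
    truth = truth_mask.astype(bool)
    truth_volume = truth.sum()
    inclusion = np.logical_and(auto, ~truth).sum() / truth_volume   # wrongly included
    exclusion = np.logical_and(~auto, truth).sum() / truth_volume   # wrongly excluded
    volume_diff = abs(auto.sum() - truth_volume) / truth_volume
    return 100 * volume_diff, 100 * (inclusion + exclusion)

rng = np.random.default_rng(7)
truth = np.zeros((40, 40, 20), dtype=bool)
truth[5:30, 5:35, 4:16] = True                       # "radiologist" breast mask
auto = truth.copy()
flips = np.unravel_index(rng.choice(truth.size, 300, replace=False), truth.shape)
auto[flips] ^= True                                  # simulate segmentation disagreements

vol_diff, total_err = segmentation_errors(auto, truth)
print(f"% volume difference: {vol_diff:.2f}%")
print(f"total (inclusion + exclusion) error: {total_err:.2f}%")
```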

  10. Discrimination of plant-parasitic nematodes from complex soil communities using ecometagenetics.

    PubMed

    Porazinska, Dorota L; Morgan, Matthew J; Gaspar, John M; Court, Leon N; Hardy, Christopher M; Hodda, Mike

    2014-07-01

    Many plant pathogens are microscopic, cryptic, and difficult to diagnose. The new approach of ecometagenetics, involving ultrasequencing, bioinformatics, and biostatistics, has the potential to improve diagnoses of plant pathogens such as nematodes from the complex mixtures found in many agricultural and biosecurity situations. We tested this approach on a gradient of complexity ranging from a few individuals from a few species of known nematode pathogens in a relatively defined substrate to a complex and poorly known suite of nematode pathogens in a complex forest soil, including its associated biota of unknown protists, fungi, and other microscopic eukaryotes. We added three known but contrasting species (Pratylenchus neglectus, the closely related P. thornei, and Heterodera avenae) to half the set of substrates, leaving the other half without them. We then tested whether all nematode pathogens-known and unknown, indigenous, and experimentally added-were detected consistently present or absent. We always detected the Pratylenchus spp. correctly and with the number of sequence reads proportional to the numbers added. However, a single cyst of H. avenae was only identified approximately half the time it was present. Other plant-parasitic nematodes and nematodes from other trophic groups were detected well but other eukaryotes were detected less consistently. DNA sampling errors or informatic errors or both were involved in misidentification of H. avenae; however, the proportions of each varied in the different bioinformatic pipelines and with different parameters used. To a large extent, false-positive and false-negative errors were complementary: pipelines and parameters with the highest false-positive rates had the lowest false-negative rates and vice versa. Sources of error identified included assumptions in the bioinformatic pipelines, slight differences in primer regions, the number of sequence reads regarded as the minimum threshold for inclusion in analysis, and inaccessible DNA in resistant life stages. Identification of the sources of error allows us to suggest ways to improve identification using ecometagenetics.

  11. A fresh look at the predictors of naming accuracy and errors in Alzheimer's disease.

    PubMed

    Cuetos, Fernando; Rodríguez-Ferreiro, Javier; Sage, Karen; Ellis, Andrew W

    2012-09-01

    In recent years, a considerable number of studies have tried to establish which characteristics of objects and their names predict the responses of patients with Alzheimer's disease (AD) in the picture-naming task. The frequency of use of words and their age of acquisition (AoA) have been implicated as two of the most influential variables, with naming being best preserved for objects with high-frequency, early-acquired names. The present study takes a fresh look at the predictors of naming success in Spanish and English AD patients using a range of measures of word frequency and AoA along with visual complexity, imageability, and word length as predictors. Analyses using generalized linear mixed modelling found that naming accuracy was better predicted by AoA ratings taken from older adults than conventional ratings from young adults. Older frequency measures based on written language samples predicted accuracy better than more modern measures based on the frequencies of words in film subtitles. Replacing adult frequency with an estimate of cumulative (lifespan) frequency did not reduce the impact of AoA. Semantic error rates were predicted by both written word frequency and senior AoA while null response errors were only predicted by frequency. Visual complexity, imageability, and word length did not predict naming accuracy or errors. ©2012 The British Psychological Society.

  12. Twenty-Five Years of Landsat Thermal Band Calibration

    NASA Technical Reports Server (NTRS)

    Barsi, Julia A.; Markham, Brian L.; Schoff, John R.; Hook, Simon J.; Raqueno, Nina G.

    2010-01-01

    Landsat-7 Enhanced Thematic Mapper+ (ETM+), launched in April 1999, and Landsat-5 Thematic Mapper (TM), launched in 1984, both have a single thermal band. Both instruments' thermal band calibrations have been updated previously: ETM+ in 2001 for a pre-launch calibration error and TM in 2007 for data acquired since the current era of vicarious calibration has been in place (1999). Vicarious calibration teams at Rochester Institute of Technology (RIT) and NASA/Jet Propulsion Laboratory (JPL) have been working to validate the instrument calibration since 1999. Recent developments in their techniques and sites have expanded the temperature and temporal range of the validation. The new data indicate that the calibration of both instruments had errors: the ETM+ calibration contained a gain error of 5.8% since launch; the TM calibration contained a gain error of 5% and an additional offset error between 1997 and 1999. Both instruments required adjustments in their thermal calibration coefficients in order to correct for the errors. The new coefficients were calculated and added to the Landsat operational processing system in early 2010. With the corrections, both instruments are calibrated to within +/-0.7K.
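
    Correcting a percent-level gain error of this kind amounts to rescaling previously calibrated at-sensor radiance before any temperature retrieval. The sketch below is only a hedged illustration of that arithmetic: the 5.8% figure is taken from the abstract, but the sign convention, the optional offset term, and the example radiances are assumptions, not the operational Landsat processing.

```python
# Hedged sketch: remove a multiplicative gain error (and optional offset
# error) from previously calibrated thermal-band radiance.
import numpy as np

def recalibrate_radiance(radiance, gain_error_fraction, offset_error=0.0):
    """Rescale at-sensor spectral radiance (W m-2 sr-1 um-1) to undo a
    fractional gain error; sign convention is an assumption here."""
    return radiance / (1.0 + gain_error_fraction) - offset_error

old_radiance = np.array([8.5, 9.2, 10.1])            # example pixel radiances
new_radiance = recalibrate_radiance(old_radiance, gain_error_fraction=0.058)
print(np.round(new_radiance, 3))
```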

  13. A Highly Linear and Wide Input Range Four-Quadrant CMOS Analog Multiplier Using Active Feedback

    NASA Astrophysics Data System (ADS)

    Huang, Zhangcai; Jiang, Minglu; Inoue, Yasuaki

    Analog multipliers are one of the most important building blocks in analog signal processing circuits. The performance with high linearity and wide input range is usually required for analog four-quadrant multipliers in most applications. Therefore, a highly linear and wide input range four-quadrant CMOS analog multiplier using active feedback is proposed in this paper. Firstly, a novel configuration of four-quadrant multiplier cell is presented. Its input dynamic range and linearity are improved significantly by adding two resistors compared with the conventional structure. Then based on the proposed multiplier cell configuration, a four-quadrant CMOS analog multiplier with active feedback technique is implemented by two operational amplifiers. Because of both the proposed multiplier cell and active feedback technique, the proposed multiplier achieves a much wider input range with higher linearity than conventional structures. The proposed multiplier was fabricated by a 0.6µm CMOS process. Experimental results show that the input range of the proposed multiplier can be up to 5.6Vpp with 0.159% linearity error on VX and 4.8Vpp with 0.51% linearity error on VY for ±2.5V power supply voltages, respectively.

  14. [Implementation of a robot for the preparation of antineoplastic drugs in the Pharmacy Service].

    PubMed

    Pacheco Ramos, María de la Paz; Arenaza Peña, Ainhoa Elisa; Santiago Pérez, Alejandro; Bilbao Gómez-Martino, Cristina; Zamora Barrios, María Dolores; Arias Fernández, María Lourdes

    2015-05-01

    To describe the implementation of a robot for the preparation of antineoplastic drugs in the Pharmacy Service and to analyse the value it adds to pharmacotherapy. The implementation was carried out in June 2012 at a tertiary-level hospital, in two periods: 1) a test period (9 months) covering installation of the robot, technical configuration of the equipment, validation of 29 active ingredients, and integration of the electronic prescribing software with the robot application; and 2) a usage period (22 months). In parallel, training was given to pharmacists and nurses. The robot uses image recognition, barcode identification and gravimetric controls to verify correct operation. These checks provide information about the error in each preparation, with an initial margin of ±10% that was restricted to ±4% after a pilot study. The robot was programmed to recognise bags, infusion pumps, syringes and vials. The added value was assessed over 31 months by identifying preparation errors. The robot made 11,865 preparations, approximately 40% of all antineoplastic preparations, from 29 different active ingredients. The robot identified errors in 1.12% (n=133) of preparations (negative deviation beyond the 4% limit), which therefore did not reach the patient; these preparations were corrected manually. The implementation of a robot for the preparation of antineoplastic drugs allows errors to be identified and prevented from reaching the patient. This promotes the safety and quality of the process and reduces the handler's exposure to cytotoxic drugs. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.
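
    A minimal sketch of the gravimetric control described above: weigh the container before and after filling and flag any preparation whose delivered mass deviates from the expected mass by more than the tolerance (±10% initially, ±4% after the pilot study). The function name, density value and sample numbers are illustrative assumptions.

```python
# Hedged sketch of a gravimetric check: compare the delivered mass (after minus
# before) with the mass expected from the prescribed volume and flag deviations
# outside the tolerance. Density and the example masses are made up.
def gravimetric_check(mass_before_g, mass_after_g, expected_volume_ml,
                      density_g_per_ml=1.0, tolerance=0.04):
    delivered_g = mass_after_g - mass_before_g
    expected_g = expected_volume_ml * density_g_per_ml
    deviation = (delivered_g - expected_g) / expected_g
    return abs(deviation) <= tolerance, deviation

ok, dev = gravimetric_check(52.10, 101.30, 50.0)
print(f"within tolerance: {ok}, deviation: {dev:+.2%}")
```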

  15. Enhancing the Use of Argos Satellite Data for Home Range and Long Distance Migration Studies of Marine Animals

    PubMed Central

    Hoenner, Xavier; Whiting, Scott D.; Hindell, Mark A.; McMahon, Clive R.

    2012-01-01

    Accurately quantifying animals’ spatial utilisation is critical for conservation, but has long remained an elusive goal due to technological impediments. The Argos telemetry system has been extensively used to remotely track marine animals, however location estimates are characterised by substantial spatial error. State-space models (SSM) constitute a robust statistical approach to refine Argos tracking data by accounting for observation errors and stochasticity in animal movement. Despite their wide use in ecology, few studies have thoroughly quantified the error associated with SSM predicted locations and no research has assessed their validity for describing animal movement behaviour. We compared home ranges and migratory pathways of seven hawksbill sea turtles (Eretmochelys imbricata) estimated from (a) highly accurate Fastloc GPS data and (b) locations computed using common Argos data analytical approaches. Argos 68th percentile error was <1 km for LC 1, 2, and 3 while markedly less accurate (>4 km) for LC ≤0. Argos error structure was highly longitudinally skewed and was, for all LC, adequately modelled by a Student’s t distribution. Both habitat use and migration routes were best recreated using SSM locations post-processed by re-adding good Argos positions (LC 1, 2 and 3) and filtering terrestrial points (mean distance to migratory tracks ± SD = 2.2±2.4 km; mean home range overlap and error ratio  = 92.2% and 285.6 respectively). This parsimonious and objective statistical procedure however still markedly overestimated true home range sizes, especially for animals exhibiting restricted movements. Post-processing SSM locations nonetheless constitutes the best analytical technique for remotely sensed Argos tracking data and we therefore recommend using this approach to rework historical Argos datasets for better estimation of animal spatial utilisation for research and evidence-based conservation purposes. PMID:22808241
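
    A minimal sketch of the post-processing recipe the abstract recommends: keep the state-space-model locations, re-add the high-quality Argos fixes (location classes 1, 2 and 3), and drop positions that fall on land. The data structures and the is_on_land() placeholder are assumptions for illustration, not the authors' implementation.

```python
# Hedged sketch: merge SSM-predicted locations with good Argos fixes and filter
# terrestrial points. A real land test would query a coastline dataset.
GOOD_CLASSES = {"1", "2", "3"}

def is_on_land(lon, lat):
    # placeholder land mask for illustration only
    return False

def postprocess_track(ssm_locations, argos_fixes):
    track = list(ssm_locations)
    track += [f for f in argos_fixes if f["lc"] in GOOD_CLASSES]   # re-add LC 1-3
    track = [p for p in track if not is_on_land(p["lon"], p["lat"])]
    return sorted(track, key=lambda p: p["time"])

ssm = [{"time": 1, "lon": 135.2, "lat": -12.3}, {"time": 3, "lon": 135.4, "lat": -12.2}]
argos = [{"time": 2, "lon": 135.3, "lat": -12.25, "lc": "2"},
         {"time": 2.5, "lon": 135.9, "lat": -12.9, "lc": "B"}]
print(postprocess_track(ssm, argos))
```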

  16. Helical tomotherapy setup variations in canine nasal tumor patients immobilized with a bite block.

    PubMed

    Kubicek, Lyndsay N; Seo, Songwon; Chappell, Richard J; Jeraj, Robert; Forrest, Lisa J

    2012-01-01

    The purpose of our study was to compare setup variation in four degrees of freedom (vertical, longitudinal, lateral, and roll) between canine nasal tumor patients immobilized with a mattress and bite block, versus a mattress alone. Our secondary aim was to define a clinical target volume (CTV) to planning target volume (PTV) expansion margin based on our mean systematic error values associated with nasal tumor patients immobilized by a mattress and bite block. We evaluated six parameters for setup corrections: systematic error, random error, patient-to-patient variation in systematic errors, the magnitude of patient-specific random errors (root mean square [RMS]), distance error, and the variation of setup corrections from zero shift. The variations in all parameters were statistically smaller in the group immobilized by a mattress and bite block. The mean setup corrections in the mattress and bite block group ranged from 0.91 mm to 1.59 mm for the translational errors, and 0.5° for roll. Although most veterinary radiation facilities do not have access to image-guided radiotherapy (IGRT), we identified a need for more rigid fixation, established the value of adding IGRT to veterinary radiation therapy, and defined the CTV-PTV setup error margin for canine nasal tumor patients immobilized in a mattress and bite block. © 2012 Veterinary Radiology & Ultrasound.
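
    The systematic and random setup errors mentioned above are conventionally summarised as the standard deviation of per-patient mean shifts (Sigma) and the root mean square of per-patient standard deviations (sigma). The sketch below computes these and a CTV-PTV margin; the 2.5*Sigma + 0.7*sigma recipe (van Herk) is a commonly used formula and an assumption here, since the abstract does not state which margin recipe was applied.

```python
import numpy as np

# Hedged sketch: group systematic error, group random error, and a margin from
# per-fraction setup corrections (one axis). Shift values are made up.
def setup_error_summary(shifts_by_patient):
    patient_means = np.array([np.mean(s) for s in shifts_by_patient])
    patient_sds = np.array([np.std(s, ddof=1) for s in shifts_by_patient])
    Sigma = np.std(patient_means, ddof=1)        # SD of systematic errors
    sigma = np.sqrt(np.mean(patient_sds ** 2))   # RMS of random errors
    margin = 2.5 * Sigma + 0.7 * sigma           # assumed van Herk recipe
    return Sigma, sigma, margin

# vertical shifts (mm) for three hypothetical patients over several fractions
shifts = [[0.8, 1.2, 0.9, 1.5], [1.6, 2.0, 1.8], [0.4, 0.9, 0.7, 1.1]]
print(setup_error_summary(shifts))
```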

  17. A meta-analysis of inhibitory-control deficits in patients diagnosed with Alzheimer's dementia.

    PubMed

    Kaiser, Anna; Kuhlmann, Beatrice G; Bosnjak, Michael

    2018-05-10

    The authors conducted meta-analyses to determine the magnitude of performance impairments in patients diagnosed with Alzheimer's dementia (AD) compared with healthy aging (HA) controls on eight tasks commonly used to measure inhibitory control. Response time (RT) and error rates from a total of 64 studies were analyzed with random-effects models (overall effects) and mixed-effects models (moderator analyses). Large differences between AD patients and HA controls emerged in the basic inhibition conditions of many of the tasks with AD patients often performing slower, overall d = 1.17, 95% CI [0.88-1.45], and making more errors, d = 0.83 [0.63-1.03]. However, comparably large differences were also present in performance on many of the baseline control-conditions, d = 1.01 [0.83-1.19] for RTs and d = 0.44 [0.19-0.69] for error rates. A standardized derived inhibition score (i.e., control-condition score - inhibition-condition score) suggested no significant mean group difference for RTs, d = -0.07 [-0.22-0.08], and only a small difference for errors, d = 0.24 [-0.12-0.60]. Effects systematically varied across tasks and with AD severity. Although the error rate results suggest a specific deterioration of inhibitory-control abilities in AD, further processes beyond inhibitory control (e.g., a general reduction in processing speed and other, task-specific attentional processes) appear to contribute to AD patients' performance deficits observed on a variety of inhibitory-control tasks. Nonetheless, the inhibition conditions of many of these tasks well discriminate between AD patients and HA controls. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
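
    A minimal sketch of the random-effects pooling step used in meta-analyses of standardized mean differences, here the DerSimonian-Laird estimator. The input effect sizes and variances are invented for illustration and are not values from the paper.

```python
import numpy as np

# Hedged sketch: DerSimonian-Laird random-effects pooling of effect sizes d with
# sampling variances var; returns the pooled d and a 95% confidence interval.
def dersimonian_laird(d, var):
    d, var = np.asarray(d, float), np.asarray(var, float)
    w = 1.0 / var
    d_fixed = np.sum(w * d) / np.sum(w)
    Q = np.sum(w * (d - d_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(d) - 1)) / c)        # between-study variance
    w_star = 1.0 / (var + tau2)
    d_re = np.sum(w_star * d) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return d_re, (d_re - 1.96 * se, d_re + 1.96 * se)

print(dersimonian_laird([1.3, 0.9, 1.1, 1.4], [0.05, 0.08, 0.04, 0.10]))
```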

  18. Bilingual language intrusions and other speech errors in Alzheimer's disease.

    PubMed

    Gollan, Tamar H; Stasenko, Alena; Li, Chuchu; Salmon, David P

    2017-11-01

    The current study investigated how Alzheimer's disease (AD) affects production of speech errors in reading-aloud. Twelve Spanish-English bilinguals with AD and 19 matched controls read-aloud 8 paragraphs in four conditions (a) English-only, (b) Spanish-only, (c) English-mixed (mostly English with 6 Spanish words), and (d) Spanish-mixed (mostly Spanish with 6 English words). Reading elicited language intrusions (e.g., saying la instead of the), and several types of within-language errors (e.g., saying their instead of the). Patients produced more intrusions (and self-corrected less often) than controls, particularly when reading non-dominant language paragraphs with switches into the dominant language. Patients also produced more within-language errors than controls, but differences between groups for these were not consistently larger with dominant versus non-dominant language targets. These results illustrate the potential utility of speech errors for diagnosis of AD, suggest a variety of linguistic and executive control impairments in AD, and reveal multiple cognitive mechanisms needed to mix languages fluently. The observed pattern of deficits, and unique sensitivity of intrusions to AD in bilinguals, suggests intact ability to select a default language with contextual support, to rapidly translate and switch languages in production of connected speech, but impaired ability to monitor language membership while regulating inhibitory control. Copyright © 2017 Elsevier Inc. All rights reserved.

  19. Estimation of lower flammability limits of C-H compounds in air at atmospheric pressure, evaluation of temperature dependence and diluent effect.

    PubMed

    Mendiburu, Andrés Z; de Carvalho, João A; Coronado, Christian R

    2015-03-21

    The objective of this study was to estimate the lower flammability limits of C-H compounds at 25 °C and 1 atm, at moderately elevated temperatures, and in the presence of diluents. A set of 120 C-H compounds was divided into a correlation set and a prediction set of 60 compounds each. The absolute average relative error for the total set was 7.89%; for the correlation set it was 6.09%; and for the prediction set it was 9.68%. However, when different sources of experimental data were considered, these values fell to 6.5% for the prediction set and 6.29% for the total set. The method was consistent with Le Chatelier's law for binary mixtures of C-H compounds. When tested over a temperature range from 5 °C to 100 °C, the absolute average relative errors were 2.41% for methane, 4.78% for propane, 0.29% for iso-butane and 3.86% for propylene. When nitrogen was added, the absolute average relative errors were 2.48% for methane, 5.13% for propane, 0.11% for iso-butane and 0.15% for propylene. When carbon dioxide was added, the absolute average relative errors were 1.80% for methane, 5.38% for propane, 0.86% for iso-butane and 1.06% for propylene. Copyright © 2014 Elsevier B.V. All rights reserved.
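
    The error metric quoted throughout the abstract, the absolute average relative error (AARE), is straightforward to compute; the sketch below shows it with invented lower-flammability-limit values, not data from the paper.

```python
import numpy as np

# Hedged sketch: absolute average relative error between estimated and
# experimental lower flammability limits (values below are illustrative).
def aare_percent(estimated, experimental):
    estimated, experimental = np.asarray(estimated), np.asarray(experimental)
    return 100.0 * np.mean(np.abs(estimated - experimental) / experimental)

lfl_exp = np.array([5.0, 2.1, 1.8, 2.4])   # vol %, e.g. methane, propane, ...
lfl_est = np.array([4.8, 2.2, 1.7, 2.5])
print(f"AARE = {aare_percent(lfl_est, lfl_exp):.2f} %")
```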

  20. Vertical profile of tropospheric ozone derived from synergetic retrieval using three different wavelength ranges, UV, IR, and microwave: sensitivity study for satellite observation

    NASA Astrophysics Data System (ADS)

    Sato, Tomohiro O.; Sato, Takao M.; Sagawa, Hideo; Noguchi, Katsuyuki; Saitoh, Naoko; Irie, Hitoshi; Kita, Kazuyuki; Mahani, Mona E.; Zettsu, Koji; Imasu, Ryoichi; Hayashida, Sachiko; Kasai, Yasuko

    2018-03-01

    We performed a feasibility study of constraining the vertical profile of the tropospheric ozone by using a synergetic retrieval method on multiple spectra, i.e., ultraviolet (UV), thermal infrared (TIR), and microwave (MW) ranges, measured from space. This work provides, for the first time, a quantitative evaluation of the retrieval sensitivity of the tropospheric ozone by adding the MW measurement to the UV and TIR measurements. Two observation points in East Asia (one in an urban area and one in an ocean area) and two observation times (one during summer and one during winter) were assumed. Geometry of line of sight was nadir down-looking for the UV and TIR measurements, and limb sounding for the MW measurement. The retrieval sensitivities of the ozone profiles in the upper troposphere (UT), middle troposphere (MT), and lowermost troposphere (LMT) were estimated using the degree of freedom for signal (DFS), the pressure of maximum sensitivity, reduction rate of error from the a priori error, and the averaging kernel matrix, derived based on the optimal estimation method. The measurement noise levels were assumed to be the same as those for currently available instruments. The weighting functions for the UV, TIR, and MW ranges were calculated using the SCIATRAN radiative transfer model, the Line-By-Line Radiative Transfer Model (LBLRTM), and the Advanced Model for Atmospheric Terahertz Radiation Analysis and Simulation (AMATERASU), respectively. The DFS value was increased by approximately 96, 23, and 30 % by adding the MW measurements to the combination of UV and TIR measurements in the UT, MT, and LMT regions, respectively. The MW measurement increased the DFS value of the LMT ozone; nevertheless, the MW measurement alone has no sensitivity to the LMT ozone. The pressure of maximum sensitivity value for the LMT ozone was also increased by adding the MW measurement. These findings indicate that better information on LMT ozone can be obtained by adding constraints on the UT and MT ozone from the MW measurement. The results of this study are applicable to the upcoming air-quality monitoring missions, APOLLO, GMAP-Asia, and uvSCOPE.
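
    The degrees of freedom for signal (DFS) used above is, in optimal estimation, the trace of the averaging kernel matrix. The sketch below computes it from a weighting-function matrix and the a priori and measurement-noise covariances; all matrices are random stand-ins, not the study's actual UV/TIR/MW weighting functions.

```python
import numpy as np

# Hedged sketch: averaging kernel A = G K with gain matrix
# G = (K^T Se^-1 K + Sa^-1)^-1 K^T Se^-1; DFS = trace(A).
def averaging_kernel(K, Sa, Se):
    Se_inv = np.linalg.inv(Se)
    G = np.linalg.inv(K.T @ Se_inv @ K + np.linalg.inv(Sa)) @ K.T @ Se_inv
    return G @ K

rng = np.random.default_rng(0)
n_state, n_meas = 20, 60
K = rng.normal(size=(n_meas, n_state)) * 0.1   # stand-in weighting functions
Sa = np.eye(n_state) * 0.5                     # a priori covariance
Se = np.eye(n_meas) * 0.2                      # measurement noise covariance
A = averaging_kernel(K, Sa, Se)
print("DFS =", np.trace(A))
```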

  1. Counting-backward test for executive function in idiopathic normal pressure hydrocephalus.

    PubMed

    Kanno, S; Saito, M; Hayashi, A; Uchiyama, M; Hiraoka, K; Nishio, Y; Hisanaga, K; Mori, E

    2012-10-01

    The aim of this study was to develop and validate a bedside test for executive function in patients with idiopathic normal pressure hydrocephalus (INPH). Twenty consecutive patients with INPH and 20 patients with Alzheimer's disease (AD) were enrolled in this study. We developed the counting-backward test for evaluating executive function in patients with INPH. Two indices that are considered to be reflective of the attention deficits and response suppression underlying executive dysfunction in INPH were calculated: the first-error score and the reverse-effect index. Performance on both the counting-backward test and standard neuropsychological tests for executive function was assessed in INPH and AD patients. The first-error score, reverse-effect index and the scores from the standard neuropsychological tests for executive function were significantly lower for individuals in the INPH group than in the AD group. The two indices for the counting-backward test in the INPH group were strongly correlated with the total scores for Frontal Assessment Battery and Phonemic Verbal Fluency. The first-error score was also significantly correlated with the error rate of the Stroop colour-word test and the score of the go/no-go test. In addition, we found that the first-error score highly distinguished patients with INPH from those with AD using these tests. The counting-backward test is useful for evaluating executive dysfunction in INPH and for differentiating between INPH and AD patients. In particular, the first-error score may reflect deficits in the response suppression related to executive dysfunction in INPH. © 2012 John Wiley & Sons A/S.

  2. High precision thorium-230 ages of corals and the timing of sea level fluctuations in the late Quaternary

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Edwards, R.L.

    1988-01-01

    Mass spectrometric techniques for the measurement of ²³⁰Th and ²³⁴U have been developed. These techniques have made it possible to reduce the analytical errors in ²³⁰Th dating of corals using very small samples (10⁷ to 10¹⁰ atoms). Useful data on corals can now be obtained over a time range from 15 to 500,000 years. For young corals, this approach may be preferable to ¹⁴C dating. The precision with which the age of a coral can now be determined makes it possible to determine the timing of sea level fluctuations in the late Quaternary. Analyses of a number of corals that grew during the last interglacial period yield ages of 122 to 130 ky. The ages coincide with or slightly postdate the summer solar insolation high at 65°N latitude, which occurred 128 ky ago. This supports the idea that changes in Pleistocene climate can be the result of orbital forcing. Coral ages may allow us to resolve the ages of individual coseismic uplift events and thereby date prehistoric earthquakes. This possibility has been examined at two localities, northwest Santo Island and north Malekula Island, Vanuatu. The ²³⁰Th growth dates of the surfaces of adjacent emerged coral heads, collected from the same elevation on northwest Santo Island, were identical within analytical error (A.D. 1866 ± 4 and A.D. 1864 ± 4). This indicates that the corals died at the same time and is consistent with the idea that they were killed by coseismic uplift. Similar adjacent coral heads on north Malekula Island yielded ²³⁰Th growth dates of A.D. 1729 ± 3 and A.D. 1718 ± 5. The ages are similar but analytically distinguishable. The difference may be due to erosion of the outer, younger portion of the latter coral head.

  3. Encoder fault analysis system based on Moire fringe error signal

    NASA Astrophysics Data System (ADS)

    Gao, Xu; Chen, Wei; Wan, Qiu-hua; Lu, Xin-ran; Xie, Chun-yu

    2018-02-01

    Aiming at the problem of faults and erroneous codes in practical applications of photoelectric shaft encoders, a fast and accurate encoder fault analysis system is developed from the standpoint of Moire fringe photoelectric signal processing. A DSP28335 is selected as the core processor, together with a high-speed serial A/D converter acquisition card, and a temperature-measuring circuit based on the AD7420 is designed. Discrete data of the Moire fringe error signal are collected at different temperatures and sent to the host computer through wireless transmission. The error signal quality index and fault type are displayed on the host computer based on the error signal identification method. The error signal quality can be used to diagnose the state of the error code through the human-machine interface.

  4. Direct digital conversion detector technology

    NASA Astrophysics Data System (ADS)

    Mandl, William J.; Fedors, Richard

    1995-06-01

    Future imaging sensors for the aerospace and commercial video markets will depend on low cost, high speed analog-to-digital (A/D) conversion to efficiently process optical detector signals. Current A/D methods place a heavy burden on system resources, increase noise, and limit the throughput. This paper describes a unique method for incorporating A/D conversion right on the focal plane array. This concept is based on Sigma-Delta sampling, and makes optimum use of the active detector real estate. Combined with modern digital signal processors, such devices will significantly increase data rates off the focal plane. Early conversion to digital format will also decrease the signal susceptibility to noise, lowering the communications bit error rate. Computer modeling of this concept is described, along with results from several simulation runs. A potential application for direct digital conversion is also reviewed. Future uses for this technology could range from scientific instruments to remote sensors, telecommunications gear, medical diagnostic tools, and consumer products.
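
    A minimal sketch of the sampling principle the concept builds on, first-order sigma-delta modulation: the difference between the input and the fed-back 1-bit output is integrated, the integrator sign sets the next bit, and decimating (averaging) the bitstream recovers the signal level. This is a generic illustration, not the focal-plane circuit itself.

```python
import numpy as np

# Hedged sketch of a first-order sigma-delta modulator acting on a constant
# (normalised) detector level; averaging the bitstream recovers that level.
def sigma_delta(x):
    integrator, feedback = 0.0, 0.0
    bits = np.empty_like(x)
    for i, sample in enumerate(x):
        integrator += sample - feedback          # integrate the difference
        feedback = 1.0 if integrator >= 0.0 else -1.0
        bits[i] = feedback                       # 1-bit output stream
    return bits

x = 0.4 * np.ones(2000)                          # constant level of 0.4
bitstream = sigma_delta(x)
print("decimated estimate:", bitstream.mean())   # approaches 0.4
```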

  5. Nonlinear problems in data-assimilation : Can synchronization help?

    NASA Astrophysics Data System (ADS)

    Tribbia, J. J.; Duane, G. S.

    2009-12-01

    Over the past several years, operational weather centers have initiated ensemble prediction and assimilation techniques to estimate the error covariance of forecasts in the short and the medium range. The ensemble techniques used are based on linear methods, which have been shown to be useful indicators of skill in the linear range where forecast errors are small relative to climatological variance. While this advance has been impressive, there are still ad hoc aspects of its use in practice, such as the need for covariance inflation, which are troubling. Furthermore, to be of utility in the nonlinear range, an ensemble assimilation and prediction method must be capable of giving probabilistic information for the situation where a probability density forecast becomes multi-modal. A prototypical, simplest example of such a situation is the planetary-wave regime transition where the pdf is bimodal. Our recent research shows how the inconsistencies and extensions of linear methodology can be consistently treated using the paradigm of synchronization, which views the problems of assimilation and forecasting as that of optimizing the forecast model state with respect to the future evolution of the atmosphere.

  6. Gravity field error analysis for pendulum formations by a semi-analytical approach

    NASA Astrophysics Data System (ADS)

    Li, Huishu; Reubelt, Tilo; Antoni, Markus; Sneeuw, Nico

    2017-03-01

    Many geoscience disciplines push for ever higher requirements on accuracy, homogeneity and time- and space-resolution of the Earth's gravity field. Apart from better instruments or new observables, alternative satellite formations could improve the signal and error structure compared to Grace. One possibility to increase the sensitivity and isotropy by adding cross-track information is a pair of satellites flying in a pendulum formation. This formation contains two satellites which have different ascending nodes and arguments of latitude, but have the same orbital height and inclination. In this study, the semi-analytical approach for efficient pre-mission error assessment is presented, and the transfer coefficients of range, range-rate and range-acceleration gravitational perturbations are derived analytically for the pendulum formation considering a set of opening angles. The new challenge is the time variations of the opening angle and the range, leading to temporally variable transfer coefficients. This is solved by Fourier expansion of the sine/cosine of the opening angle and the central angle. The transfer coefficients are further applied to assess the error patterns which are caused by different orbital parameters. The simulation results indicate that a significant improvement in accuracy and isotropy is obtained for small and medium initial opening angles of single polar pendulums, compared to Grace. The optimal initial opening angles are 45° and 15° for accuracy and isotropy, respectively. For a Bender configuration, which is constituted by a polar Grace and an inclined pendulum in this paper, the behaviour of results is dependent on the inclination (prograde vs. retrograde) and on the relative baseline orientation (left or right leading). The simulation for a sun-synchronous orbit shows better results for the left leading case.

  7. snpAD: An ancient DNA genotype caller.

    PubMed

    Prüfer, Kay

    2018-06-21

    The study of ancient genomes can elucidate the evolutionary past. However, analyses are complicated by base-modifications in ancient DNA molecules that result in errors in DNA sequences. These errors are particularly common near the ends of sequences and pose a challenge for genotype calling. I describe an iterative method that estimates genotype frequencies and errors along sequences to allow for accurate genotype calling from ancient sequences. The implementation of this method, called snpAD, performs well on high-coverage ancient data, as shown by simulations and by subsampling the data of a high-coverage Neandertal genome. Although estimates for low-coverage genomes are less accurate, I am able to derive approximate estimates of heterozygosity from several low-coverage Neandertals. These estimates show that low heterozygosity, compared to modern humans, was common among Neandertals. The C++ code of snpAD is freely available at http://bioinf.eva.mpg.de/snpAD/. Supplementary data are available at Bioinformatics online.

  8. Stochastic Forcing for High-Resolution Regional and Global Ocean and Atmosphere-Ocean Coupled Ensemble Forecast System

    NASA Astrophysics Data System (ADS)

    Rowley, C. D.; Hogan, P. J.; Martin, P.; Thoppil, P.; Wei, M.

    2017-12-01

    An extended range ensemble forecast system is being developed in the US Navy Earth System Prediction Capability (ESPC), and a global ocean ensemble generation capability to represent uncertainty in the ocean initial conditions has been developed. At extended forecast times, the uncertainty due to the model error overtakes the initial condition as the primary source of forecast uncertainty. Recently, stochastic parameterization or stochastic forcing techniques have been applied to represent the model error in research and operational atmospheric, ocean, and coupled ensemble forecasts. A simple stochastic forcing technique has been developed for application to US Navy high resolution regional and global ocean models, for use in ocean-only and coupled atmosphere-ocean-ice-wave ensemble forecast systems. Perturbation forcing is added to the tendency equations for state variables, with the forcing defined by random 3- or 4-dimensional fields with horizontal, vertical, and temporal correlations specified to characterize different possible kinds of error. Here, we demonstrate the stochastic forcing in regional and global ensemble forecasts with varying perturbation amplitudes and length and time scales, and assess the change in ensemble skill measured by a range of deterministic and probabilistic metrics.
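
    A minimal sketch of one common way to build such perturbations: a temporally correlated (AR(1)) random field added to a model tendency. The amplitude and decorrelation time are free parameters, and the field here is 1-D for brevity, whereas the abstract describes 3- and 4-dimensional fields with horizontal and vertical correlations as well.

```python
import numpy as np

# Hedged sketch: AR(1) stochastic forcing with prescribed amplitude and
# decorrelation time (in timesteps); the variance is kept at amplitude^2.
def ar1_forcing(n_steps, n_points, amplitude, tau_steps, rng):
    alpha = np.exp(-1.0 / tau_steps)               # lag-1 autocorrelation
    scale = amplitude * np.sqrt(1.0 - alpha ** 2)  # keeps stationary variance
    f = np.zeros((n_steps, n_points))
    for t in range(1, n_steps):
        f[t] = alpha * f[t - 1] + scale * rng.normal(size=n_points)
    return f

forcing = ar1_forcing(n_steps=500, n_points=64, amplitude=0.05,
                      tau_steps=20, rng=np.random.default_rng(1))
# e.g. add forcing[t] to a state-variable tendency of each ensemble member at step t
print(forcing.std())
```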

  9. Dental age estimation in Japanese individuals combining permanent teeth and third molars.

    PubMed

    Ramanan, Namratha; Thevissen, Patrick; Fleuws, Steffen; Willems, G

    2012-12-01

    The study aim was, firstly, to verify the Willems et al. model on a Japanese reference sample; secondly, to develop a Japanese reference model based on the Willems et al. method and to verify it; and thirdly, to analyse the age prediction performance of adding third-molar development information to that of the permanent teeth. Retrospectively, 1877 panoramic radiographs were selected in the age range between 1 and 23 years (1248 children, 629 sub-adults). Dental development was registered applying Demirjian's stages to the mandibular left permanent teeth in children and Köhler's stages to the third molars. The children's data were, firstly, used to validate the Willems et al. model (developed on a Belgian reference sample) and, secondly, split into a training and a test sample. On the training sample a Japanese reference model was developed based on the Willems method. The developed model and the Willems et al. model were verified on the test sample. Regression analysis was used to assess the age prediction performance of adding third-molar scores to permanent-tooth scores. The validated Willems et al. model provided a mean absolute error of 0.85 and 0.75 years in females and males, respectively. The mean absolute errors in the verified Willems et al. model and the developed Japanese reference model were 0.85, 0.77 and 0.79, 0.75 years in females and males, respectively. On average, a negligible change in root mean square error values was detected when adding third-molar scores to permanent-teeth scores. The Belgian sample could be used as a reference model to estimate the age of Japanese individuals. Combining information from the third molars and permanent teeth did not provide a clinically significant improvement over age predictions based on permanent-teeth information alone.
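
    The accuracy measures used above (mean absolute error and root mean square error between estimated and chronological age) can be computed as in the sketch below; the numbers are illustrative, not values from the study.

```python
import numpy as np

# Hedged sketch: MAE and RMSE between dental-age estimates and chronological age.
def age_errors(estimated, chronological):
    e = np.asarray(estimated) - np.asarray(chronological)
    return np.mean(np.abs(e)), np.sqrt(np.mean(e ** 2))

mae, rmse = age_errors([8.2, 12.9, 15.4, 6.7], [7.5, 13.5, 16.0, 6.2])
print(f"MAE = {mae:.2f} y, RMSE = {rmse:.2f} y")
```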

  10. Design framework for spherical microphone and loudspeaker arrays in a multiple-input multiple-output system.

    PubMed

    Morgenstern, Hai; Rafaely, Boaz; Noisternig, Markus

    2017-03-01

    Spherical microphone arrays (SMAs) and spherical loudspeaker arrays (SLAs) facilitate the study of room acoustics due to the three-dimensional analysis they provide. More recently, systems that combine both arrays, referred to as multiple-input multiple-output (MIMO) systems, have been proposed due to the added spatial diversity they facilitate. The literature provides frameworks for designing SMAs and SLAs separately, including error analysis from which the operating frequency range (OFR) of an array is defined. However, such a framework does not exist for the joint design of a SMA and a SLA that comprise a MIMO system. This paper develops a design framework for MIMO systems based on a model that addresses errors and highlights the importance of a matched design. Expanding on a free-field assumption, errors are incorporated separately for each array and error bounds are defined, facilitating error analysis for the system. The dependency of the error bounds on the SLA and SMA parameters is studied and it is recommended that parameters should be chosen to assure matched OFRs of the arrays in MIMO system design. A design example is provided, demonstrating the superiority of a matched system over an unmatched system in the synthesis of directional room impulse responses.

  11. Measurement Error and Bias in Value-Added Models. Research Report. ETS RR-17-25

    ERIC Educational Resources Information Center

    Kane, Michael T.

    2017-01-01

    By aggregating residual gain scores (the differences between each student's current score and a predicted score based on prior performance) for a school or a teacher, value-added models (VAMs) can be used to generate estimates of school or teacher effects. It is known that random errors in the prior scores will introduce bias into predictions of…

  12. Low Temperature Testing of a Radiation Hardened CMOS 8-Bit Flash Analog-to-Digital (A/D) Converter

    NASA Technical Reports Server (NTRS)

    Gerber, Scott S.; Hammond, Ahmad; Elbuluk, Malik E.; Patterson, Richard L.; Overton, Eric; Ghaffarian, Reza; Ramesham, Rajeshuni; Agarwal, Shri G.

    2001-01-01

    Power processing electronic systems, data-acquiring probes, and signal conditioning circuits are required to operate reliably under harsh environments in many of NASA's missions. The environment of the space mission as well as the operational requirements of some of the electronic systems, such as infrared-based satellite or telescopic observation stations where cryogenics are involved, dictate the utilization of electronics that can operate efficiently and reliably at low temperatures. In this work, radiation-hard CMOS 8-bit flash A/D converters were characterized in terms of voltage conversion and offset in the temperature range of +25 to -190 C. Static and dynamic supply currents, ladder resistance, and gain and offset errors were also obtained in the temperature range of +125 to -190 C. The effect of thermal cycling on these properties for a total of ten cycles between +80 and -150 C was also determined. The experimental procedure along with the data obtained are reported and discussed in this paper.

  13. Characteristics of Single-Event Upsets in a Fabric Switch (AD8151)

    NASA Technical Reports Server (NTRS)

    Buchner, Stephen; Carts, Martin A.; McMorrow, Dale; Kim, Hak; Marshall, Paul W.; LaBel, Kenneth A.

    2003-01-01

    Two types of single event effects - bit errors and single event functional interrupts - were observed during heavy-ion testing of the AD8151 crosspoint switch. Bit errors occurred in bursts, with the average number of bits in a burst depending on both the ion LET and the data rate. A pulsed laser was used to identify the locations on the chip where the bit errors and single event functional interrupts occurred. Bit errors originated in the switches, drivers, and output buffers. Single event functional interrupts occurred when the laser was focused on the second-rank latch containing the data specifying the state of each switch in the 33x17 matrix.

  14. A new methodology for vibration error compensation of optical encoders.

    PubMed

    Lopez, Jesus; Artes, Mariano

    2012-01-01

    Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in position accuracy as the measurement signals depart from ideal conditions. When the encoder is working under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system and installation errors. Behaviour can be improved with different techniques that try to compensate the error by processing the measurement signals. In this work a new "ad hoc" methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure that yields higher sensor accuracy.
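
    In the spirit of the method described, the sketch below estimates offset and amplitude errors of the two quadrature signals from their Lissajous figure (ideally a centred circle), normalises them, and reads the position phase with atan2. The actual methodology additionally builds a look-up table for the vibrating case; that table and its indexing are not reproduced here, and the signal model is an assumption.

```python
import numpy as np

# Hedged sketch: correct offsets/amplitudes of quadrature encoder signals from
# their Lissajous figure, then recover the position phase within a grating period.
def correct_quadrature(a, b):
    a0, b0 = (a.max() + a.min()) / 2, (b.max() + b.min()) / 2   # offsets
    ga, gb = (a.max() - a.min()) / 2, (b.max() - b.min()) / 2   # amplitudes
    return (a - a0) / ga, (b - b0) / gb

def phase(a, b):
    ac, bc = correct_quadrature(a, b)
    return np.unwrap(np.arctan2(bc, ac))

theta = np.linspace(0, 6 * np.pi, 600)
a = 0.9 * np.cos(theta) + 0.05                 # distorted quadrature signals
b = 1.1 * np.sin(theta) - 0.03
print(phase(a, b)[:5])
```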

  15. Face-name association learning in early Alzheimer's disease: a comparison of learning methods and their underlying mechanisms.

    PubMed

    Bier, Nathalie; Van Der Linden, Martial; Gagnon, Lise; Desrosiers, Johanne; Adam, Stephane; Louveaux, Stephanie; Saint-Mleux, Julie

    2008-06-01

    This study compared the efficacy of five learning methods in the acquisition of face-name associations in early dementia of Alzheimer type (AD). The contribution of error production and implicit memory to the efficacy of each method was also examined. Fifteen participants with early AD and 15 matched controls were exposed to five learning methods: spaced retrieval, vanishing cues, errorless, and two trial-and-error methods, one with explicit and one with implicit memory task instructions. Under each method, participants had to learn a list of five face-name associations, followed by free recall, cued recall and recognition. Delayed recall was also assessed. For AD, results showed that all methods were efficient but there were no significant differences between them. The number of errors produced during the learning phases varied between the five methods but did not influence learning. There were no significant differences between implicit and explicit memory task instructions on test performances. For the control group, there were no differences between the five methods. Finally, no significant correlations were found between the performance of the AD participants in free recall and their cognitive profile, but generally, the best performers had better remaining episodic memory. Also, case study analyses showed that spaced retrieval was the method for which the greatest number of participants (four) obtained results as good as the controls. This study suggests that the five methods are effective for new learning of face-name associations in AD. It appears that early AD patients can learn, even in the context of error production and explicit memory conditions.

  16. Neuropathologic Associations of Learning and Memory in Primary Progressive Aphasia.

    PubMed

    Kielb, Stephanie; Cook, Amanda; Wieneke, Christina; Rademaker, Alfred; Bigio, Eileen H; Mesulam, Marek-Marsel; Rogalski, Emily; Weintraub, Sandra

    2016-07-01

    The dementia syndrome of primary progressive aphasia (PPA) can be caused by 1 of several neuropathologic entities, including forms of frontotemporal lobar degeneration (FTLD) or Alzheimer disease (AD). Although episodic memory is initially spared in this syndrome, the subtle learning and memory features of PPA and their neuropathologic associations have not been characterized. To detect subtle memory differences on the basis of autopsy-confirmed neuropathologic diagnoses in PPA. Retrospective analysis was conducted at the Northwestern Cognitive Neurology and Alzheimer's Disease Center in August 2015 using clinical and postmortem autopsy data that had been collected between August 1983 and June 2012. Thirteen patients who had the primary clinical diagnosis of PPA and an autopsy-confirmed diagnosis of either AD (PPA-AD) or a tau variant of FTLD (PPA-FTLD) and 6 patients who had the clinical diagnosis of amnestic dementia and autopsy-confirmed AD (AMN-AD) were included. Scores on the effortless learning, delayed retrieval, and retention conditions of the Three Words Three Shapes test, a specialized measure of verbal and nonverbal episodic memory. The PPA-FTLD (n = 6), PPA-AD (n = 7), and AMN-AD (n = 6) groups did not differ by demographic composition (all P > .05). The sample mean (SD) age was 64.1 (10.3) years at symptom onset and 67.9 (9.9) years at Three Words Three Shapes test administration. The PPA-FTLD group had normal (ie, near-ceiling) scores on all verbal and nonverbal test conditions. Both the PPA-AD and AMN-AD groups had deficits in verbal effortless learning (mean [SD] number of errors, 9.9 [4.6] and 14.2 [2.0], respectively) and verbal delayed retrieval (mean [SD] number of errors, 6.1 [5.9] and 12.0 [4.4], respectively). The AMN-AD group had additional deficits in nonverbal effortless learning (mean [SD] number of errors, 10.3 [4.0]) and verbal retention (mean [SD] number of errors, 8.33 [5.2]), which were not observed in the PPA-FTLD or PPA-AD groups (all P < .005). This study identified neuropathologic associations of learning and memory in autopsy-confirmed cases of PPA. Among patients with clinical PPA syndrome, AD neuropathology appeared to interfere with effortless learning and delayed retrieval of verbal information, whereas FTLD-tau pathology did not. The results provide directions for future research on the interactions between limbic and language networks.

  17. Theoretical uncertainties on the radius of low- and very-low-mass stars

    NASA Astrophysics Data System (ADS)

    Tognelli, E.; Prada Moroni, P. G.; Degl'Innocenti, S.

    2018-05-01

    We performed an analysis of the main theoretical uncertainties that affect the radius of low- and very-low-mass stars predicted by current stellar models. We focused on stars in the mass range 0.1-1 M⊙, on both the zero-age main sequence (ZAMS) and on 1, 2, and 5 Gyr isochrones. First, we quantified the impact on the radius of the uncertainty of several quantities, namely the equation of state, radiative opacity, atmospheric models, convection efficiency, and initial chemical composition. Then, we computed the cumulative radius error stripe obtained by adding the radius variation due to all the analysed quantities. As a general trend, the radius uncertainty increases with the stellar mass. For ZAMS structures the cumulative error stripe of very-low-mass stars is about ±2 and ±3 per cent, while at larger masses it increases up to ±4 and ±5 per cent. The radius uncertainty gets larger and age dependent if isochrones are considered, reaching for M ˜ 1 M⊙ about +12(-15) per cent at an age of 5 Gyr. We also investigated the radius uncertainty at a fixed luminosity. In this case, the cumulative error stripe is the same for both ZAMS and isochrone models and it ranges from about ±4 to +7 and +9(-5) per cent. We also showed that the sole uncertainty on the chemical composition plays an important role in determining the radius error stripe, producing a radius variation that ranges between about ±1 and ±2 per cent on ZAMS models with fixed mass and about ±3 and ±5 per cent at a fixed luminosity.

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wijesooriya, K; Seitter, K; Desai, V

    Purpose: To present our single-institution experience of catching errors with trajectory log file analysis. The reported causes of failures, probability of occurrence (O), severity of effects (S), and probability of the failures being undetected (D) could be added to guidelines for FMEA analysis. Methods: From March 2013 to March 2014, 19569 patient treatment fields/arcs were analyzed. This work includes checking all 131 treatment delivery parameters for all patients, all treatment sites and all treatment delivery fractions. TrueBeam trajectory log files for all treatment field types as well as all imaging types were accessed, read every 20 ms, and every control point (a total of 37 million parameters) was compared to the physician-approved plan in the planning system. Results: Couch angle outlier occurrence: N = 327, range = -1.7 to 1.2 deg; gantry angle outlier occurrence: N = 59, range = 0.09 to 5.61 deg; collimator angle outlier occurrence: N = 13, range = -0.2 to 0.2 deg. VMAT cases have slightly larger variations in mechanical parameters. MLC: 3D single-control-point fields have a maximum deviation of 0.04 mm, 39 step-and-shoot IMRT cases have MLC deviations of -0.3 to 0.5 mm, and all (1286) VMAT cases have deviations of -0.9 to 0.7 mm. Two possible serious errors were found: 1) a 4 cm isocenter shift for the PA beam of an AP-PA pair, under-dosing a portion of the PTV by 25%; 2) delivery with MLC leaves abutted behind the jaws, as opposed to at the midline as planned, under-dosing a small volume of the PTV by 25% in the boost plan alone. Because of their origin, neither of these errors could have been detected by pre-treatment verification. Conclusion: Performing trajectory log file analysis can catch typically undetected errors and avoid potentially adverse incidents.

  19. A New Methodology for Vibration Error Compensation of Optical Encoders

    PubMed Central

    Lopez, Jesus; Artes, Mariano

    2012-01-01

    Optical encoders are sensors based on grating interference patterns. Tolerances inherent to the manufacturing process can induce errors in position accuracy as the measurement signals depart from ideal conditions. When the encoder is working under vibration, the oscillating movement of the scanning head is registered by the encoder system as a displacement, introducing an error into the counter that adds to graduation, system and installation errors. Behaviour can be improved with different techniques that try to compensate the error by processing the measurement signals. In this work a new “ad hoc” methodology is presented to compensate the error of the encoder when it is working under the influence of vibration. The methodology is based on fitting techniques applied to the Lissajous figure of the deteriorated measurement signals and the use of a look-up table, resulting in a compensation procedure that yields higher sensor accuracy. PMID:22666067

  20. Synthesis, spectroscopic analysis and theoretical study of new pyrrole-isoxazoline derivatives

    NASA Astrophysics Data System (ADS)

    Rawat, Poonam; Singh, R. N.; Baboo, Vikas; Niranjan, Priydarshni; Rani, Himanshu; Saxena, Rajat; Ahmad, Sartaj

    2017-02-01

    In the present work, we have efficiently synthesized the pyrrole-isoxazoline derivatives (4a-d) by cyclization of substituted 4-chalconylpyrroles (3a-d) with hydroxylamine hydrochloride. The reactivity of the substituted 4-chalconylpyrroles (3a-d) towards the nucleophile hydroxylamine hydrochloride was evaluated on the basis of electrophilic reactivity descriptors (fk+, sk+, ωk+), which were found to be high at the unsaturated β carbon of the chalconylpyrrole, indicating that it is more prone to nucleophilic attack and thereby favouring the formation of the reported new pyrrole-isoxazoline compounds (4a-d). The structures of the newly synthesized pyrrole-isoxazoline derivatives were derived from IR, ¹H NMR, mass, UV-Vis and elemental analysis. All experimental spectral data agree well with the calculated spectral data. The FT-IR analysis shows red shifts in the νN-H and νC=O stretching bands due to dimer formation through intermolecular hydrogen bonding. On basis set superposition error correction, the intermolecular interaction energies for (4a-d) are found to be 10.10, 9.99, 10.18, 11.01 and 11.19 kcal/mol, respectively. The calculated first hyperpolarizability (β0) values of the (4a-d) molecules are in the range of 7.40-9.05 × 10⁻³⁰ esu, indicating their suitability for non-linear optical (NLO) applications. The experimental spectral results, theoretical data, and analysis of the chalcone intermediates and pyrrole-isoxazolines are useful for the advancement of pyrrole-azole chemistry.

  1. Accuracy Assessment of Digital Surface Models Based on WorldView-2 and ADS80 Stereo Remote Sensing Data

    PubMed Central

    Hobi, Martina L.; Ginzler, Christian

    2012-01-01

    Digital surface models (DSMs) are widely used in forest science to model the forest canopy. Stereo pairs of very high resolution satellite and digital aerial images are relatively new and their absolute accuracy for DSM generation is largely unknown. For an assessment of these input data, two DSMs based on a WorldView-2 stereo pair and an ADS80 DSM were generated with photogrammetric instruments. Rational polynomial coefficients (RPCs) define the orientation of the WorldView-2 satellite images and can be enhanced with ground control points (GCPs). Thus two WorldView-2 DSMs were distinguished: a WorldView-2 RPCs-only DSM and a WorldView-2 GCP-enhanced RPCs DSM. The accuracy of the three DSMs was estimated with GPS measurements, manual stereo-measurements, and airborne laser scanning data (ALS). With GCP-enhanced RPCs the WorldView-2 image orientation could be optimised to a root mean square error (RMSE) of 0.56 m in planimetry and 0.32 m in height. This improvement in orientation allowed for a vertical median error of −0.24 m for the WorldView-2 GCP-enhanced RPCs DSM in flat terrain. Overall, the DSM based on ADS80 images showed the highest accuracy of the three models with a median error of 0.08 m over bare ground. As the accuracy of a DSM varies with land cover, three classes were distinguished: herb and grass, forests, and artificial areas. The study suggested that the ADS80 DSM best models actual surface height in all three land cover classes, with median errors <1.1 m. The WorldView-2 GCP-enhanced RPCs model achieved good accuracy, too, with median errors of −0.43 m for the herb and grass vegetation and −0.26 m for artificial areas. Forested areas emerged as the most difficult land cover type for height modelling; still, with median errors of −1.85 m for the WorldView-2 GCP-enhanced RPCs model and −1.12 m for the ADS80 model, the input data sets evaluated here are quite promising for forest canopy modelling. PMID:22778645

  2. Accuracy assessment of digital surface models based on WorldView-2 and ADS80 stereo remote sensing data.

    PubMed

    Hobi, Martina L; Ginzler, Christian

    2012-01-01

    Digital surface models (DSMs) are widely used in forest science to model the forest canopy. Stereo pairs of very high resolution satellite and digital aerial images are relatively new and their absolute accuracy for DSM generation is largely unknown. For an assessment of these input data, two DSMs based on a WorldView-2 stereo pair and an ADS80 DSM were generated with photogrammetric instruments. Rational polynomial coefficients (RPCs) define the orientation of the WorldView-2 satellite images and can be enhanced with ground control points (GCPs). Thus two WorldView-2 DSMs were distinguished: a WorldView-2 RPCs-only DSM and a WorldView-2 GCP-enhanced RPCs DSM. The accuracy of the three DSMs was estimated with GPS measurements, manual stereo-measurements, and airborne laser scanning data (ALS). With GCP-enhanced RPCs the WorldView-2 image orientation could be optimised to a root mean square error (RMSE) of 0.56 m in planimetry and 0.32 m in height. This improvement in orientation allowed for a vertical median error of -0.24 m for the WorldView-2 GCP-enhanced RPCs DSM in flat terrain. Overall, the DSM based on ADS80 images showed the highest accuracy of the three models with a median error of 0.08 m over bare ground. As the accuracy of a DSM varies with land cover, three classes were distinguished: herb and grass, forests, and artificial areas. The study suggested that the ADS80 DSM best models actual surface height in all three land cover classes, with median errors <1.1 m. The WorldView-2 GCP-enhanced RPCs model achieved good accuracy, too, with median errors of -0.43 m for the herb and grass vegetation and -0.26 m for artificial areas. Forested areas emerged as the most difficult land cover type for height modelling; still, with median errors of -1.85 m for the WorldView-2 GCP-enhanced RPCs model and -1.12 m for the ADS80 model, the input data sets evaluated here are quite promising for forest canopy modelling.

  3. Modeling of the UAE Wind Turbine for Refinement of FAST_AD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jonkman, J. M.

    The Unsteady Aerodynamics Experiment (UAE) research wind turbine was modeled both aerodynamically and structurally in the FAST_AD wind turbine design code, and its response to wind inflows was simulated for a sample of test cases. A study was conducted to determine why wind turbine load magnitude discrepancies (inconsistencies in aerodynamic force coefficients, rotor shaft torque, and out-of-plane bending moments at the blade root across a range of operating conditions) exist between load predictions made by FAST_AD and other modeling tools and measured loads taken from the actual UAE wind turbine during the NASA-Ames wind tunnel tests. The acquired experimental test data represent the finest, most accurate set of wind turbine aerodynamic and induced flow field data available today. A sample of the FAST_AD model input parameters most critical to the aerodynamics computations was also systematically perturbed to determine their effect on load and performance predictions. Attention was focused on the simpler upwind rotor configuration, zero yaw error test cases. Inconsistencies in input file parameters, such as aerodynamic performance characteristics, explain a noteworthy fraction of the load prediction discrepancies of the various modeling tools.

  4. Generalized weighted ratio method for accurate turbidity measurement over a wide range.

    PubMed

    Liu, Hongbo; Yang, Ping; Song, Hong; Guo, Yilu; Zhan, Shuyue; Huang, Hui; Wang, Hangzhou; Tao, Bangyi; Mu, Quanquan; Xu, Jing; Li, Dejun; Chen, Ying

    2015-12-14

    Turbidity measurement is important for water quality assessment, food safety, medicine, ocean monitoring, etc. In this paper, a method that accurately estimates the turbidity over a wide range is proposed, where the turbidity of the sample is represented as a weighted ratio of the scattered light intensities at a series of angles. An improvement in the accuracy is achieved by expanding the structure of the ratio function, thus adding more flexibility to the turbidity-intensity fitting. Experiments have been carried out with an 850 nm laser and a power meter fixed on a turntable to measure the light intensity at different angles. The results show that the relative estimation error of the proposed method is 0.58% on average for a four-angle intensity combination for all test samples with a turbidity ranging from 160 NTU to 4000 NTU.
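
    A strongly simplified, linear variant of the weighted-ratio idea is sketched below: the turbidity estimate is a weighted combination of scattered-intensity ratios at several angles, with weights fitted to calibration samples by least squares. The paper's generalized ratio structure is richer than this, and the data are synthetic.

```python
import numpy as np

# Hedged sketch: fit weights so that turbidity ~ weighted combination of
# intensity ratios at several scattering angles (calibration data are synthetic).
def fit_weights(intensity_ratios, turbidity_ntu):
    # intensity_ratios: (n_samples, n_angles); turbidity_ntu: (n_samples,)
    w, *_ = np.linalg.lstsq(intensity_ratios, turbidity_ntu, rcond=None)
    return w

def estimate_turbidity(intensity_ratios, w):
    return intensity_ratios @ w

rng = np.random.default_rng(2)
true_w = np.array([1800.0, 900.0, 300.0, 120.0])
R = rng.uniform(0.1, 2.0, size=(40, 4))            # ratios at four angles
T = R @ true_w + rng.normal(0, 20, size=40)        # calibration turbidities (NTU)
w = fit_weights(R, T)
print("mean relative error:", np.mean(np.abs(estimate_turbidity(R, w) - T) / T))
```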

  5. Operational hydrological forecasting in Bavaria. Part II: Ensemble forecasting

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.

    2009-04-01

    In part I of this study, the operational flood forecasting system in Bavaria and an approach to identify and quantify forecast uncertainty was introduced. The approach is split into the calculation of an empirical 'overall error' from archived forecasts and the calculation of an empirical 'model error' based on hydrometeorological forecast tests, where rainfall observations were used instead of forecasts. The 'model error' can, especially in upstream catchments where forecast uncertainty depends strongly on the current predictability of the atmosphere, be superimposed on the spread of a hydrometeorological ensemble forecast. In Bavaria, two meteorological ensemble prediction systems are currently tested for operational use: the 16-member COSMO-LEPS forecast and a poor man's ensemble composed of DWD GME, DWD Cosmo-EU, NCEP GFS, Aladin-Austria and MeteoSwiss Cosmo-7. The determination of the overall forecast uncertainty depends on the catchment characteristics. 1. Upstream catchments with high influence of the weather forecast: a) a hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing; b) corresponding to the characteristics of the meteorological ensemble forecast, each resulting forecast hydrograph can be regarded as equally likely; c) the 'model error' distribution, with parameters dependent on hydrological case and lead time, is added to each forecast timestep of each ensemble member; d) for each forecast timestep, the overall error distribution (i.e. over the 'model error' distributions of all ensemble members) is calculated; e) from this distribution, the uncertainty range at a desired level (here: the 10% and 90% percentiles) is extracted and drawn as the forecast envelope; f) as the mean or median of an ensemble forecast does not necessarily exhibit a meteorologically sound temporal evolution, a single hydrological forecast termed the 'lead forecast' is chosen and shown in addition to the uncertainty bounds; this can be either an intermediate forecast between the extremes of the ensemble spread or a manually selected forecast based on a meteorologist's advice. 2. Downstream catchments with low influence of the weather forecast: in downstream catchments with strong human impact on discharge (e.g. by reservoir operation) and large influence of upstream gauge observation quality on forecast quality, the 'overall error' may in most cases be larger than the combination of the 'model error' and the ensemble spread. Therefore, the overall forecast uncertainty bounds are calculated differently: a) a hydrological ensemble forecast is calculated using each of the meteorological forecast members as forcing; here, additionally, the corresponding inflow hydrographs from all upstream catchments must be used; b) as for an upstream catchment, the uncertainty range is determined by combining the 'model error' and the ensemble member forecasts; c) in addition, the 'overall error' is superimposed on the 'lead forecast'; for reasons of consistency, the lead forecast must be based on the same meteorological forecast in the downstream and all upstream catchments; d) from the resulting two uncertainty ranges (one from the ensemble forecast and 'model error', one from the 'lead forecast' and 'overall error'), the envelope is taken as the most prudent uncertainty range. In sum, the uncertainty associated with each forecast run is calculated and communicated to the public in the form of 10% and 90% percentiles. As in part I of this study, the methodology as well as the usefulness (or otherwise) of the resulting uncertainty ranges will be presented and discussed using typical examples.
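
    A minimal sketch of the combination step described for upstream catchments: a model-error sample is added to every member of the hydrological ensemble at each lead time, and the 10% and 90% percentiles of the pooled values give the forecast envelope. The zero-mean normal error model with lead-time-dependent spread is an illustrative assumption.

```python
import numpy as np

# Hedged sketch: superimpose an empirical model-error distribution on each
# ensemble member and extract the 10%/90% forecast envelope per lead time.
def forecast_envelope(ensemble_q, model_error_sd, n_error_samples=200, rng=None):
    # ensemble_q: (n_members, n_leadtimes) discharge forecasts
    # model_error_sd: (n_leadtimes,) spread of the empirical model error
    rng = rng or np.random.default_rng(0)
    n_members, n_lead = ensemble_q.shape
    errors = rng.normal(0.0, model_error_sd, size=(n_error_samples, n_members, n_lead))
    pooled = (ensemble_q[None, :, :] + errors).reshape(-1, n_lead)
    return np.percentile(pooled, [10, 90], axis=0)

members = np.array([[50, 80, 120, 150], [55, 95, 140, 180], [48, 70, 100, 130.]])
sd = np.array([2.0, 5.0, 10.0, 15.0])
low, high = forecast_envelope(members, sd)
print(low, high, sep="\n")
```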

  6. Structural MRI-based detection of Alzheimer's disease using feature ranking and classification error.

    PubMed

    Beheshti, Iman; Demirel, Hasan; Farokhian, Farnaz; Yang, Chunlan; Matsuda, Hiroshi

    2016-12-01

    This paper presents an automatic computer-aided diagnosis (CAD) system based on feature ranking for detection of Alzheimer's disease (AD) using structural magnetic resonance imaging (sMRI) data. The proposed CAD system is composed of four systematic stages. First, global and local differences in the gray matter (GM) of AD patients compared to the GM of healthy controls (HCs) are analyzed using a voxel-based morphometry technique. The aim is to identify significant local differences in the volume of GM as volumes of interest (VOIs). Second, the voxel intensity values of the VOIs are extracted as raw features. Third, the raw features are ranked using seven feature-ranking methods, namely statistical dependency (SD), mutual information (MI), information gain (IG), Pearson's correlation coefficient (PCC), t-test score (TS), Fisher's criterion (FC), and the Gini index (GI). Features with higher scores are more discriminative. To determine the number of top features, the classification error is estimated on a training set made up of the AD and HC groups, and the feature-vector size that minimizes this error is selected as the number of top discriminative features. Fourth, the classification is performed using a support vector machine (SVM). In addition, a data fusion approach among the feature-ranking methods is introduced to improve the classification performance. The proposed method is evaluated using a data-set from ADNI (130 AD and 130 HC) with 10-fold cross-validation. The classification accuracy of the proposed automatic system for the diagnosis of AD is up to 92.48% using the sMRI data. An automatic CAD system for the classification of AD based on feature ranking and classification error is proposed, and the seven feature-ranking methods (SD, MI, IG, PCC, TS, FC, and GI) are evaluated. The optimal number of top discriminative features is determined by the classification error estimate in the training phase. The experimental results indicate that the performance of the proposed system is comparable to that of state-of-the-art classification models. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
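
    A minimal sketch of the pipeline described above: rank features by a t-test score between AD and HC training samples, pick the number of top features that minimises an estimated classification error, and classify with a linear SVM. The data are random stand-ins for VOI voxel intensities, and only one of the seven ranking criteria is shown.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

# Hedged sketch: t-test feature ranking + SVM with cross-validated error used to
# pick the number of top features. Synthetic data stand in for voxel intensities.
def t_scores(X, y):
    x1, x0 = X[y == 1], X[y == 0]
    se = np.sqrt(x1.var(axis=0, ddof=1) / len(x1) + x0.var(axis=0, ddof=1) / len(x0))
    return np.abs(x1.mean(axis=0) - x0.mean(axis=0)) / (se + 1e-12)

rng = np.random.default_rng(3)
X = rng.normal(size=(260, 500))
y = np.repeat([1, 0], 130)            # 130 "AD", 130 "HC"
X[y == 1, :30] += 0.8                 # 30 informative "voxels"

order = np.argsort(t_scores(X, y))[::-1]
best_k, best_err = None, 1.0
for k in (10, 30, 50, 100, 200):
    err = 1.0 - cross_val_score(SVC(kernel="linear"), X[:, order[:k]], y, cv=10).mean()
    if err < best_err:
        best_k, best_err = k, err
print(f"best k = {best_k}, CV error = {best_err:.3f}")
```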

  7. Medical Errors Reduction Initiative

    DTIC Science & Technology

    2005-05-01

    AD Award Number: W81XWH-04-1-0536. Title: Medical Errors Reduction Initiative. Principal Investigator: Michael L. Mutter. Contracting Organization: The Valley Hospital, Ridgewood, NJ 07450. Report Date: May 2005. Type of Report: Annual. Prepared for: U.S. Army Medical Research and Materiel Command. Subject terms: Medical Error, Patient Safety, Personal Data Terminal, Barcodes. Abstract (fragment): "... working with great success to minimize error."

  8. Design and tolerance analysis of a transmission sphere by interferometer model

    NASA Astrophysics Data System (ADS)

    Peng, Wei-Jei; Ho, Cheng-Fong; Lin, Wen-Lung; Yu, Zong-Ru; Huang, Chien-Yao; Hsu, Wei-Yao

    2015-09-01

    The design of a 6-in, f/2.2 transmission sphere for Fizeau interferometry is presented in this paper. To predict the actual performance during the design phase, we build an interferometer model combined with tolerance analysis in Zemax. Evaluating focus imaging is not enough for a double-pass optical system. Thus, we study an interferometer model that includes the system error and the wavefronts reflected from the reference and tested surfaces. Firstly, we generate a deformation map of the tested surface. Using multiple configurations in Zemax, we obtain the test wavefront reflected from the tested surface and the reference wavefront reflected from the reference surface of the transmission sphere. According to the theory of interferometry, we subtract both wavefronts to acquire the phase of the tested surface. Zernike polynomials are applied to convert the map from phase to sag and to remove piston, tilt, and power. The restored map is the same as the original map because no system error exists. Secondly, perturbed tolerances, including lens fabrication and assembly, are considered. The system error arises because the test and reference beams are no longer perfectly common-path. The restored map is inaccurate when the system error is added. Although the system error can be subtracted by calibration, it should still be controlled within a small range to avoid calibration error. Generally, the reference wavefront error, including the system error and the irregularity of the reference surface of a 6-in transmission sphere, must be within peak-to-valley (PV) 0.1 λ (λ = 0.6328 μm), which is not easy to achieve. Consequently, it is necessary to predict the value of the system error before manufacture. Finally, a prototype is developed and tested against a reference surface with PV 0.1 λ irregularity.

  9. An Empirical State Error Covariance Matrix for Batch State Estimation

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. Consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully account for the error in the state estimate. By way of a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm, it is possible to arrive at an appropriate, and formally correct, empirical state error covariance matrix. The first specific step of the method is to use the average form of the weighted measurement residual variance performance index rather than its usual total weighted residual form. Next, it is helpful to interpret the solution to the normal equations as the average of a collection of sample vectors drawn from a hypothetical parent population. From here, using a standard statistical analysis approach, it follows directly how to determine the standard empirical state error covariance matrix. This matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty. Also, in its most straightforward form, the technique only requires supplemental calculations to be added to existing batch algorithms. The generation of this direct, empirical form of the state error covariance matrix is independent of the dimensionality of the observations. Mixed degrees of freedom for an observation set are allowed. As is the case with any simple, empirical sample variance problem, the presented approach offers an opportunity (at least in the case of weighted least squares) to investigate confidence interval estimates for the error covariance matrix elements. The diagonal or variance terms of the error covariance matrix have a particularly simple form to associate with either a multiple degree of freedom chi-square distribution (more approximate) or with a gamma distribution (less approximate). The off-diagonal or covariance terms of the matrix are less clear in their statistical behavior. However, the off-diagonal covariance matrix elements still lend themselves to standard confidence interval error analysis. The distributional forms associated with the off-diagonal terms are more varied and, perhaps, more approximate than those associated with the diagonal terms. Using a simple weighted least squares sample problem, results obtained through use of the proposed technique are presented. The example consists of a simple, two-observer triangulation problem with range-only measurements.
Variations of this problem reflect an ideal case (perfect knowledge of the range errors) and a mismodeled case (incorrect knowledge of the range errors).
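    A minimal numerical sketch of a residual-scaled ('empirical') covariance for weighted least squares is given below. It follows the spirit of the abstract (the average weighted residual variance rescales the formal covariance) but is not necessarily the author's exact formulation; the observation model, the weights, and scaling by the residual degrees of freedom are assumptions of this sketch.

        import numpy as np

        def wls_with_empirical_covariance(H, y, W):
            # H: (m, n) design matrix, y: (m,) observations, W: (m, m) weight matrix
            N = H.T @ W @ H                        # normal matrix
            x_hat = np.linalg.solve(N, H.T @ W @ y)
            r = y - H @ x_hat                      # measurement residuals (all error sources)
            m, n = H.shape
            s2 = (r @ W @ r) / (m - n)             # average weighted residual variance
            P_formal = np.linalg.inv(N)            # maps only the assumed observation errors
            P_empirical = s2 * P_formal            # rescaled by the actual residual scatter
            return x_hat, P_formal, P_empirical

        # toy case with overly optimistic assumed measurement noise (0.1 vs. true 0.5)
        rng = np.random.default_rng(0)
        H = rng.normal(size=(50, 3))
        y = H @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.5, 50)
        x_hat, P_formal, P_emp = wls_with_empirical_covariance(H, y, np.eye(50) / 0.1**2)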

  10. Reducing waste and errors: piloting lean principles at Intermountain Healthcare.

    PubMed

    Jimmerson, Cindy; Weber, Dorothy; Sobek, Durward K

    2005-05-01

    The Toyota Production System (TPS), based on industrial engineering principles and operational innovations, is used to achieve waste reduction and efficiency while increasing product quality. Several key tools and principles, adapted to health care, have proved effective in improving hospital operations. Value Stream Maps (VSMs), which represent the key people, material, and information flows required to deliver a product or service, distinguish between value-adding and non-value-adding steps. The one-page Problem-Solving A3 Report guides staff through a rigorous and systematic problem-solving process. In a pilot project at Intermountain Healthcare, participants made many improvements, ranging from simple changes implemented immediately (for example, addressing heart monitor paper not being available when a patient presented with a dysrhythmia) to larger projects involving patient or information flow issues across multiple departments. Most of the improvements required little or no investment and reduced significant amounts of wasted time for front-line workers. In one unit, turnaround time for pathologist reports from an anatomical pathology lab was reduced from five to two days. TPS principles and tools are applicable to an endless variety of processes and work settings in health care and can be used to address critical challenges such as medical errors, escalating costs, and staffing shortages.

  11. Earth horizon modeling and application to static Earth sensors on TRMM spacecraft

    NASA Technical Reports Server (NTRS)

    Keat, J.; Challa, M.; Tracewell, D.; Galal, K.

    1995-01-01

    Data from Earth sensor assemblies (ESA's) often are used in the attitude determination (AD) for both spinning and Earth-pointing spacecraft. The ESA's on previous such spacecraft for which the ground-based AD operation was performed by the Flight Dynamics Division (FDD) used the Earth scanning method. AD on such spacecraft requires a model of the shape of the Earth disk as seen from the spacecraft. AD accuracy requirements often are too severe to permit Earth oblateness to be ignored when modeling disk shape. Section 2 of this paper reexamines and extends the methods for Earth disk shape modeling employed in AD work at FDD for the past decade. A new formulation, based on a more convenient Earth flatness parameter, is introduced, and the geometric concepts are examined in detail. It is shown that the Earth disk can be approximated as an ellipse in AD computations. Algorithms for introducing Earth oblateness into the AD process for spacecraft carrying scanning ESA's have been developed at FDD and implemented into the support systems. The Tropical Rainfall Measurement Mission (TRMM) will be the first spacecraft with AD operation performed at FDD that uses a different type of ESA - namely, a static one - containing four fixed detectors D_i (i = 1 to 4). Section 3 of this paper considers the effect of Earth oblateness on AD accuracy for TRMM. This effect ideally will not induce AD errors on TRMM when data from all four D_i are present. When data from only two or three D_i are available, however, a spherical Earth approximation can introduce errors of 0.05 to 0.30 deg on TRMM. These oblateness-induced errors are eliminated by a new algorithm that uses the results of Section 2 to model the Earth disk as an ellipse.

  12. An accurate system for onsite calibration of electronic transformers with digital output.

    PubMed

    Zhi, Zhang; Li, Hong-Bin

    2012-06-01

    Calibration systems with digital output are used to replace conventional calibration systems because of the principle diversity and the digital-output characteristics of electronic transformers. However, limited precision and unpredictable stability restrict their onsite application and even their development. Therefore, fully considering the factors that influence the accuracy of the calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is limited to the nanosecond level by a novel synchronization method. In addition, an error-correction algorithm based on the differential method using a second-order Hanning convolution window provides good suppression of frequency fluctuation and inter-harmonic interference. To verify the effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach a precision class of 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate, with satisfactory stability.

  13. An accurate system for onsite calibration of electronic transformers with digital output

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhi Zhang; Li Hongbin; State Key Laboratory of Advanced Electromagnetic Engineering and Technology, Wuhan 430074

    Calibration systems with digital output are used to replace conventional calibration systems because of the principle diversity and the digital-output characteristics of electronic transformers. However, limited precision and unpredictable stability restrict their onsite application and even their development. Therefore, fully considering the factors that influence the accuracy of the calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is limited to the nanosecond level by a novel synchronization method. In addition, an error-correction algorithm based on the differential method using a second-order Hanning convolution window provides good suppression of frequency fluctuation and inter-harmonic interference. To verify the effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach a precision class of 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate, with satisfactory stability.

  14. An accurate system for onsite calibration of electronic transformers with digital output

    NASA Astrophysics Data System (ADS)

    Zhi, Zhang; Li, Hong-Bin

    2012-06-01

    Calibration systems with digital output are used to replace conventional calibration systems because of the principle diversity and the digital-output characteristics of electronic transformers. However, limited precision and unpredictable stability restrict their onsite application and even their development. Therefore, fully considering the factors that influence the accuracy of the calibration system and employing a simple but reliable structure, an all-digital calibration system with digital output is proposed in this paper. In complicated calibration environments, precision and dynamic range are guaranteed by an A/D converter with 24-bit resolution, and the synchronization error is limited to the nanosecond level by a novel synchronization method. In addition, an error-correction algorithm based on the differential method using a second-order Hanning convolution window provides good suppression of frequency fluctuation and inter-harmonic interference. To verify the effectiveness, error calibration was carried out at the State Grid Electric Power Research Institute of China, and the results show that the proposed system can reach a precision class of 0.05. Actual onsite calibration shows that the system has high accuracy and is easy to operate, with satisfactory stability.

  15. Mean annual runoff and peak flow estimates based on channel geometry of streams in northeastern and western Montana

    USGS Publications Warehouse

    Parrett, Charles; Omang, R.J.; Hull, J.A.

    1983-01-01

    Equations for estimating mean annual runoff and peak discharge from measurements of channel geometry were developed for western and northeastern Montana. The study area was divided into two regions for the mean annual runoff analysis, and separate multiple-regression equations were developed for each region. The active-channel width was determined to be the most important independent variable in each region. The standard error of estimate for the estimating equation using active-channel width was 61 percent in the Northeast Region and 38 percent in the West Region. The study area was divided into six regions for the peak discharge analysis, and multiple-regression equations relating channel geometry and basin characteristics to peak discharges having recurrence intervals of 2, 5, 10, 25, 50, and 100 years were developed for each region. The standard errors of estimate for the regression equations using only channel width as an independent variable ranged from 35 to 105 percent. The standard errors improved in four regions as basin characteristics were added to the estimating equations. (USGS)
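    A hedged sketch of this kind of channel-geometry relation is shown below: a power-law regression of runoff on active-channel width fitted in log space, with the residual scatter converted to a percent standard error of estimate. The numbers and the exact percent conversion are illustrative assumptions and are not the published Montana equations.

        import numpy as np

        # hypothetical active-channel widths and mean annual runoff values (not the Montana data)
        width = np.array([12., 20., 35., 50., 80., 120.])                # ft
        runoff = np.array([900., 2100., 4800., 8200., 17000., 31000.])   # acre-ft

        # fit log10(Q) = log10(a) + b*log10(W), a power-law channel-geometry relation
        b, log_a = np.polyfit(np.log10(width), np.log10(runoff), 1)
        resid = np.log10(runoff) - (log_a + b * np.log10(width))

        # standard error of estimate in log units, expressed approximately in percent
        se_log = np.sqrt(np.sum(resid**2) / (len(width) - 2))
        se_percent = 100 * (10**se_log - 1)
        print(f"Q = {10**log_a:.1f} * W^{b:.2f}, standard error of estimate = {se_percent:.0f}%")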

  16. Optimizing X-ray mirror thermal performance using matched profile cooling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Lin; Cocco, Daniele; Kelez, Nicholas

    2015-08-07

    To cover a large photon energy range, the length of an X-ray mirror is often longer than the beam footprint length for much of the applicable energy range. To limit thermal deformation of such a water-cooled X-ray mirror, a technique using side cooling with a cooled length shorter than the beam footprint length is proposed. This cooling length can be optimized by using finite-element analysis. For the Kirkpatrick–Baez (KB) mirrors at LCLS-II, the thermal deformation can be reduced by a factor of up to 30, compared with full-length cooling. Furthermore, a second, alternative technique, based on a similar principle, is presented: using a long, single-length cooling block on each side of the mirror and adding electric heaters between the cooling blocks and the mirror substrate. The electric heaters consist of a number of cells, located along the mirror length. The total effective length of the electric heater can then be adjusted by choosing which cells to energize, using electric power supplies. The residual height error can be minimized to 0.02 nm RMS by using optimal heater parameters (length and power density). Compared with a case without heaters, this residual height error is reduced by a factor of up to 45. The residual height error in the LCLS-II KB mirrors, due to free-electron laser beam heat load, can be reduced by a factor of ~11, below the requirement. The proposed techniques are also effective in reducing thermal slope errors and are, therefore, applicable to white beam mirrors in synchrotron radiation beamlines.

  17. Real-Time Phase Correction Based on FPGA in the Beam Position and Phase Measurement System

    NASA Astrophysics Data System (ADS)

    Gao, Xingshun; Zhao, Lei; Liu, Jinxin; Jiang, Zouyi; Hu, Xiaofang; Liu, Shubin; An, Qi

    2016-12-01

    A fully digital beam position and phase measurement (BPPM) system was designed for the linear accelerator (LINAC) in Accelerator Driven Sub-critical System (ADS) in China. Phase information is obtained from the summed signals from four pick-ups of the Beam Position Monitor (BPM). Considering that the delay variations of different analog circuit channels would introduce phase measurement errors, we propose a new method to tune the digital waveforms of four channels before summation and achieve real-time error correction. The process is based on the vector rotation method and implemented within one single Field Programmable Gate Array (FPGA) device. Tests were conducted to evaluate this correction method and the results indicate that a phase correction precision better than ± 0.3° over the dynamic range from -60 dBm to 0 dBm is achieved.
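    The vector-rotation idea can be illustrated with a short floating-point sketch (the real system runs in fixed point on the FPGA). The sampling rate, signal frequency, and per-channel phase errors below are invented for illustration; only the principle of rotating each channel's digital phasor by its calibration phase before summation follows the abstract.

        import numpy as np

        fs, f_rf, n = 100e6, 20e6, 1024                  # sampling rate, pick-up frequency, samples (assumed)
        t = np.arange(n) / fs
        chan_err = np.deg2rad([3.0, -5.0, 1.5, 4.0])     # per-channel phase errors from a calibration run

        # four pick-up channels carrying the same beam phase (0.4 rad) plus channel-dependent errors
        signals = [np.cos(2 * np.pi * f_rf * t + 0.4 + e) for e in chan_err]

        def iq_phasor(x):
            # digital down-conversion of a real signal to a complex (I/Q) phasor
            return 2 * np.mean(x * np.exp(-1j * 2 * np.pi * f_rf * t))

        # vector rotation: undo each channel's phase error before summing the phasors
        phasors = np.array([iq_phasor(s) for s in signals])
        corrected = phasors * np.exp(-1j * chan_err)
        print(np.rad2deg(np.angle(np.sum(corrected))))   # close to the common 0.4 rad (about 23 deg)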

  18. A sensitive LC-MS/MS method for simultaneous determination of amygdalin and paeoniflorin in human plasma and its application.

    PubMed

    Li, Xiaobing; Shi, Fuguo; Gu, Pan; Liu, Lingye; He, Hua; Ding, Li

    2014-04-01

    A simple and sensitive HPLC-MS/MS method was developed and fully validated for the simultaneous determination of amygdalin (AD) and paeoniflorin (PF) in human plasma. For both analytes, the method exhibited high sensitivity (LLOQs of 0.6 ng/mL) by selecting the ammonium adduct ions ([M+NH4]+) as the precursor ions and good linearity over the concentration range of 0.6-2000 ng/mL with correlation coefficients > 0.9972. The intra- and inter-day precision was lower than 10% in terms of relative standard deviation, while accuracy was within ±2.3% in terms of relative error for both analytes. The developed method was successfully applied to a pilot pharmacokinetic study of AD and PF in healthy volunteers after intravenous infusion administration of Huoxue-Tongluo lyophilized powder for injection. Copyright © 2014 Elsevier B.V. All rights reserved.

  19. COMPLEX VARIABLE BOUNDARY ELEMENT METHOD: APPLICATIONS.

    USGS Publications Warehouse

    Hromadka, T.V.; Yen, C.C.; Guymon, G.L.

    1985-01-01

    The complex variable boundary element method (CVBEM) is used to approximate several potential problems where analytical solutions are known. A modeling result produced from the CVBEM is a measure of relative error in matching the known boundary condition values of the problem. A CVBEM error-reduction algorithm is used to reduce the relative error of the approximation by adding nodal points in boundary regions where error is large. From the test problems, overall error is reduced significantly by utilizing the adaptive integration algorithm.

  20. The added mass forces in insect flapping wings.

    PubMed

    Liu, Longgui; Sun, Mao

    2018-01-21

    The added mass forces of three-dimensional (3D) flapping wings of some representative insects, and the accuracy of the often-used simple two-dimensional (2D) method, are studied. The added mass force of a flapping wing is calculated by both 3D and 2D methods, and the total aerodynamic force of the wing is calculated by the CFD method. Our findings are as follows. The added mass force makes a significant contribution to the total aerodynamic force of the flapping wings during and near the stroke reversals, and the smaller the stroke amplitude, the larger the added mass force becomes. Thus, the added mass force cannot be neglected when using simple models to estimate the aerodynamic force, especially for insects with relatively small stroke amplitudes. The accuracy of the often-used simple 2D method is reasonably good: when the aspect ratio of the wing is greater than about 3.3, the error in the added mass force calculation due to the 2D assumption is less than 9%; even when the aspect ratio is 2.8 (approximately the smallest for an insect), the error is no more than 13%. Copyright © 2017 Elsevier Ltd. All rights reserved.
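    For a feel of the magnitude of the effect, the following sketch applies the standard 2D strip (blade-element) approximation, in which the added mass per unit span of a thin flat plate is rho*pi*c^2/4. The wing geometry and flapping kinematics are invented and the treatment is deliberately simplified; it is not the authors' 3D or CFD calculation.

        import numpy as np

        rho, R, n = 1.225, 0.05, 200                             # air density, wing length (m), strips (assumed)
        r = np.linspace(0.0, R, n)
        dr = r[1] - r[0]
        c = 0.015 * np.sqrt(np.clip(1 - (r / R)**2, 0, None))    # assumed chord distribution (m)

        # 2D added mass per unit span of a flat plate accelerating normal to its chord
        ma_per_span = rho * np.pi * c**2 / 4.0

        # sinusoidal flapping: phi = Phi*sin(2*pi*f*t); strip normal acceleration ~ r * phi_ddot
        f, Phi = 30.0, np.deg2rad(60.0)                          # wingbeat frequency (Hz), stroke half-angle
        t = np.linspace(0.0, 1.0 / f, 500)
        phi_ddot = -Phi * (2 * np.pi * f)**2 * np.sin(2 * np.pi * f * t)

        # total added-mass force normal to the wing at each instant (strip integration over the span)
        F_added = np.sum(ma_per_span * r * phi_ddot[:, None], axis=1) * dr
        print(F_added.max())                                     # peaks near stroke reversal, as noted above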

  1. Bilateral deep brain stimulation of the fornix for Alzheimer's disease: surgical safety in the ADvance trial.

    PubMed

    Ponce, Francisco A; Asaad, Wael F; Foote, Kelly D; Anderson, William S; Rees Cosgrove, G; Baltuch, Gordon H; Beasley, Kara; Reymers, Donald E; Oh, Esther S; Targum, Steven D; Smith, Gwenn S; Lyketsos, Constantine G; Lozano, Andres M

    2016-07-01

    OBJECT This report describes the stereotactic technique, hospitalization, and 90-day perioperative safety of bilateral deep brain stimulation (DBS) of the fornix in patients who underwent DBS for the treatment of mild, probable Alzheimer's disease (AD). METHODS The ADvance Trial is a multicenter, 12-month, double-blind, randomized, controlled feasibility study being conducted to evaluate the safety, efficacy, and tolerability of DBS of the fornix in patients with mild, probable AD. Intraoperative and perioperative data were collected prospectively. All patients underwent postoperative MRI. Stereotactic analyses were performed in a blinded fashion by a single surgeon. Adverse events (AEs) were reported to an independent clinical events committee and adjudicated to determine the relationship between the AE and the study procedure. RESULTS Between June 6, 2012, and April 28, 2014, a total of 42 patients with mild, probable AD were treated with bilateral fornix DBS (mean age 68.2 ± 7.8 years; range 48.0-79.7 years; 23 men and 19 women). The mean planned target coordinates were x = 5.2 ± 1.0 mm (range 3.0-7.9 mm), y = 9.6 ± 0.9 mm (range 8.0-11.6 mm), z = -7.5 ± 1.2 mm (range -5.4 to -10.0 mm), and the mean postoperative stereotactic radial error on MRI was 1.5 ± 1.0 mm (range 0.2-4.0 mm). The mean length of hospitalization was 1.4 ± 0.8 days. Twenty-six (61.9%) patients experienced 64 AEs related to the study procedure, of which 7 were serious AEs experienced by 5 patients (11.9%). Four (9.5%) patients required return to surgery: 2 patients for explantation due to infection, 1 patient for lead repositioning, and 1 patient for chronic subdural hematoma. No patients experienced neurological deficits as a result of the study, and no deaths were reported. CONCLUSIONS Accurate targeting of DBS to the fornix without direct injury to it is feasible across surgeons and treatment centers. At 90 days after surgery, bilateral fornix DBS was well tolerated by patients with mild, probable AD. Clinical trial registration no.: NCT01608061 ( clinicaltrials.gov ).

  2. Bilateral deep brain stimulation of the fornix for Alzheimer’s disease: surgical safety in the ADvance trial

    PubMed Central

    Ponce, Francisco A.; Asaad, Wael F.; Foote, Kelly D.; Anderson, William S.; Cosgrove, G. Rees; Baltuch, Gordon H.; Beasley, Kara; Reymers, Donald E.; Oh, Esther S.; Targum, Steven D.; Smith, Gwenn S.; Lyketsos, Constantine G.; Lozano, Andres M.

    2016-01-01

    OBJECT This report describes the stereotactic technique, hospitalization, and 90-day perioperative safety of bilateral deep brain stimulation (DBS) of the fornix in patients who underwent DBS for the treatment of mild, probable Alzheimer’s disease (AD). METHODS The ADvance Trial is a multicenter, 12-month, double-blind, randomized, controlled feasibility study being conducted to evaluate the safety, efficacy, and tolerability of DBS of the fornix in patients with mild, probable AD. Intra-operative and perioperative data were collected prospectively. All patients underwent postoperative MRI. Stereotactic analyses were performed in a blinded fashion by a single surgeon. Adverse events (AEs) were reported to an independent clinical events committee and adjudicated to determine the relationship between the AE and the study procedure. RESULTS Between June 6, 2012, and April 28, 2014, a total of 42 patients with mild, probable AD were treated with bilateral fornix DBS (mean age 68.2 ± 7.8 years; range 48.0–79.7 years; 23 men and 19 women). The mean planned target coordinates were x = 5.2 ± 1.0 mm (range 3.0–7.9 mm), y = 9.6 ± 0.9 mm (range 8.0–11.6 mm), z = −7.5 ± 1.2 mm (range −5.4 to −10.0 mm), and the mean postoperative stereotactic radial error on MRI was 1.5 ± 1.0 mm (range 0.2–4.0 mm). The mean length of hospitalization was 1.4 ± 0.8 days. Twenty-six (61.9%) patients experienced 64 AEs related to the study procedure, of which 7 were serious AEs experienced by 5 patients (11.9%). Four (9.5%) patients required return to surgery: 2 patients for explantation due to infection, 1 patient for lead repositioning, and 1 patient for chronic subdural hematoma. No patients experienced neurological deficits as a result of the study, and no deaths were reported. CONCLUSIONS Accurate targeting of DBS to the fornix without direct injury to it is feasible across surgeons and treatment centers. At 90 days after surgery, bilateral fornix DBS was well tolerated by patients with mild, probable AD. PMID:26684775

  3. False Recognition in Behavioral Variant Frontotemporal Dementia and Alzheimer's Disease-Disinhibition or Amnesia?

    PubMed

    Flanagan, Emma C; Wong, Stephanie; Dutt, Aparna; Tu, Sicong; Bertoux, Maxime; Irish, Muireann; Piguet, Olivier; Rao, Sulakshana; Hodges, John R; Ghosh, Amitabha; Hornberger, Michael

    2016-01-01

    Episodic memory recall processes in Alzheimer's disease (AD) and behavioral variant frontotemporal dementia (bvFTD) can be similarly impaired, whereas recognition performance is more variable. A potential reason for this variability could be false-positive errors made on recognition trials and whether these errors are due to amnesia per se or a general over-endorsement of recognition items regardless of memory. The current study addressed this issue by analysing recognition performance on the Rey Auditory Verbal Learning Test (RAVLT) in 39 bvFTD, 77 AD and 61 control participants from two centers (India, Australia), as well as disinhibition assessed using the Hayling test. Whereas both AD and bvFTD patients were comparably impaired on delayed recall, bvFTD patients showed intact recognition performance in terms of the number of correct hits. However, both patient groups endorsed significantly more false-positives than controls, and bvFTD and AD patients scored equally poorly on a sensitivity index (correct hits-false-positives). Furthermore, measures of disinhibition were significantly associated with false positives in both groups, with a stronger relationship with false-positives in bvFTD. Voxel-based morphometry analyses revealed similar neural correlates of false positive endorsement across bvFTD and AD, with both patient groups showing involvement of prefrontal and Papez circuitry regions, such as medial temporal and thalamic regions, and a DTI analysis detected an emerging but non-significant trend between false positives and decreased fornix integrity in bvFTD only. These findings suggest that false-positive errors on recognition tests relate to similar mechanisms in bvFTD and AD, reflecting deficits in episodic memory processes and disinhibition. These findings highlight that current memory tests are not sufficient to accurately distinguish between bvFTD and AD patients.

  4. [Research on the measurement range of particle size with total light scattering method in vis-IR region].

    PubMed

    Sun, Xiao-gang; Tang, Hong; Dai, Jing-min

    2008-12-01

    The problem of determining the particle size range in the visible-infrared region was studied using the independent model algorithm in the total scattering technique. By analysis and comparison of the accuracy of the inversion results for different R-R distributions, the measurement range of particle size was determined. Meanwhile, the corrected extinction coefficient was used instead of the original extinction coefficient, which allows the measurement range of particle size to be determined with higher accuracy. Simulation experiments illustrate that the particle size distribution can be retrieved very well in the range from 0.05 to 18 μm at a relative refractive index m = 1.235 in the visible-infrared spectral region, and the measurement range of particle size will vary with the wavelength range and relative refractive index. It is feasible to use the constrained least squares inversion method in the independent model to overcome the influence of measurement error, and the inversion results are still satisfactory when 1% stochastic noise is added to the light extinction values.

  5. Ad hoc versus standardized admixtures for continuous infusion drugs in neonatal intensive care: cognitive task analysis of safety at the bedside.

    PubMed

    Brannon, Timothy S

    2006-01-01

    Continuous infusion intravenous (IV) drugs in neonatal intensive care are usually prepared based on patient weight so that the dose is readable as a simple multiple of the infusion pump rate. New safety guidelines propose that hospitals switch to using standardized admixtures of these drugs to prevent calculation errors during ad hoc preparation. Extended hierarchical task analysis suggests that switching to standardized admixtures may lead to more errors in programming the pump at the bedside.

  6. Ad Hoc versus Standardized Admixtures for Continuous Infusion Drugs in Neonatal Intensive Care: Cognitive Task Analysis of Safety at the Bedside

    PubMed Central

    Brannon, Timothy S.

    2006-01-01

    Continuous infusion intravenous (IV) drugs in neonatal intensive care are usually prepared based on patient weight so that the dose is readable as a simple multiple of the infusion pump rate. New safety guidelines propose that hospitals switch to using standardized admixtures of these drugs to prevent calculation errors during ad hoc preparation. Extended hierarchical task analysis suggests that switching to standardized admixtures may lead to more errors in programming the pump at the bedside. PMID:17238482

  7. Accuracy of iodine quantification using dual energy CT in latest generation dual source and dual layer CT.

    PubMed

    Pelgrim, Gert Jan; van Hamersvelt, Robbert W; Willemink, Martin J; Schmidt, Bernhard T; Flohr, Thomas; Schilham, Arnold; Milles, Julien; Oudkerk, Matthijs; Leiner, Tim; Vliegenthart, Rozemarijn

    2017-09-01

    To determine the accuracy of iodine quantification with dual energy computed tomography (DECT) in two high-end CT systems with different spectral imaging techniques. Five tubes with different iodine concentrations (0, 5, 10, 15, 20 mg/ml) were analysed in an anthropomorphic thoracic phantom. Adding two phantom rings simulated increased patient size. For third-generation dual source CT (DSCT), tube voltage combinations of 150Sn and 70, 80, 90, 100 kVp were analysed. For dual layer CT (DLCT), 120 and 140 kVp were used. Scans were repeated three times. Median normalized values and interquartile ranges (IQRs) were calculated for all kVp settings and phantom sizes. Correlation between measured and known iodine concentrations was excellent for both systems (R = 0.999-1.000, p < 0.0001). For DSCT, median measurement errors ranged from -0.5% (IQR -2.0, 2.0%) at 150Sn/70 kVp and -2.3% (IQR -4.0, -0.1%) at 150Sn/80 kVp to -4.0% (IQR -6.0, -2.8%) at 150Sn/90 kVp. For DLCT, median measurement errors ranged from -3.3% (IQR -4.9, -1.5%) at 140 kVp to -4.6% (IQR -6.0, -3.6%) at 120 kVp. Larger phantom sizes increased variability of iodine measurements (p < 0.05). Iodine concentration can be accurately quantified with state-of-the-art DECT systems from two vendors. The lowest absolute errors were found for DSCT using the 150Sn/70 kVp or 150Sn/80 kVp combinations, which was slightly more accurate than 140 kVp in DLCT. • High-end CT scanners allow accurate iodine quantification using different DECT techniques. • Lowest measurement error was found in scans with largest photon energy separation. • Dual-source CT quantified iodine slightly more accurately than dual layer CT.

  8. Searching for Flickering Giants in the Ursa Minor Dwarf Spheroidal Galaxy

    NASA Astrophysics Data System (ADS)

    Montiel, Edward J.; Mighell, K. J.

    2010-01-01

    We present a preliminary analysis of three epochs of archival Hubble Space Telescope (HST) Wide Field Planetary Camera 2 (WFPC2) observations of a single field in the Ursa Minor (UMi) dwarf spheroidal (dSph) galaxy. These observations were obtained in 2000, 2002, and 2004 (GO-7341, GO-8776, GO-2004; PI: Olszewski). We expand upon the work of Mighell and Roederer (2004), who reported the existence of low-amplitude variability in red giant stars in the UMi dSph. We report the 16 brightest point sources (F606W <= 21.5 mag) that we are able to match between all 3 epochs. The 112 observations were analyzed with HSTphot. We tested for variability with a chi-squared statistic that had a softened photometric error, where 0.01 mag was added in quadrature to the reported HSTphot photometric error. We find that all 13 stars and 3 probable galaxies exhibit the same phenomenon as described in Mighell and Roederer, with peak-to-peak amplitudes ranging from 54 to 125 mmag on 10 minute timescales. If these objects were not varying, the deviates should be normally distributed. However, we find that the deviates have a standard deviation of 1.4. This leads to three possible conclusions: (1) the observed phenomenon is real, (2) an additional systematic error of 7 mmag needs to be added to account for additional photometric errors (possibly due to dithering), or (3) there was a small instrumental instability with the WFPC2 instrument from 2000 to 2004. E.J.M. was supported by the NOAO/KPNO Research Experience for Undergraduates (REU) Program, which is funded by the National Science Foundation Research Experiences for Undergraduates Program and the Department of Defense ASSURE program through Scientific Program Order No. 13 (AST-0754223) of Cooperative Agreement No. AST-0132798 between the Association of Universities for Research in Astronomy (AURA) and the NSF.
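    The variability test described above amounts to a reduced chi-squared of each light curve against a constant-brightness model, with a small softening term added in quadrature to the reported errors. The following sketch is illustrative only; the toy light curve and the use of a weighted mean as the constant model are assumptions, not the authors' exact procedure.

        import numpy as np
        from scipy.stats import chi2

        def variability_chi2(mag, mag_err, soften=0.01):
            # reduced chi-squared against a constant-brightness (weighted-mean) model,
            # with `soften` magnitudes added in quadrature to the reported photometric errors
            err = np.sqrt(mag_err**2 + soften**2)
            w = 1.0 / err**2
            mean_mag = np.sum(w * mag) / np.sum(w)
            chi2_val = np.sum(((mag - mean_mag) / err)**2)
            dof = mag.size - 1
            return chi2_val / dof, chi2.sf(chi2_val, dof)    # reduced chi2, P(non-variable)

        # toy light curve: 112 observations with ~60 mmag peak-to-peak variation and 20 mmag errors
        rng = np.random.default_rng(2)
        mag = 20.5 + 0.03 * np.sin(np.linspace(0, 20, 112)) + rng.normal(0, 0.02, 112)
        print(variability_chi2(mag, np.full(112, 0.02)))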

  9. Detecting Signatures of GRACE Sensor Errors in Range-Rate Residuals

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.

    2016-12-01

    In order to reach the accuracy of the GRACE baseline predicted earlier from design simulations, efforts have been ongoing for a decade. The GRACE error budget is dominated by noise from the sensors, dealiasing models, and modeling errors. GRACE range-rate residuals contain these errors; thus, their analysis provides insight into the individual contributions to the error budget. Hence, we analyze the range-rate residuals with a focus on the contribution of sensor errors due to mis-pointing and poor ranging performance in GRACE solutions. For the analysis of pointing errors, we consider two different reprocessed attitude datasets with differing pointing performance. Range-rate residuals are then computed from these two datasets and analysed. We further compare the system noise of the four K- and Ka-band frequencies of the two spacecraft with the range-rate residuals. Strong signatures of mis-pointing errors can be seen in the range-rate residuals. A correlation between range frequency noise and range-rate residuals is also seen.

  10. A glacier runoff extension to the Precipitation Runoff Modeling System

    USGS Publications Warehouse

    Van Beusekom, Ashley E.; Viger, Roland

    2016-01-01

    A module to simulate glacier runoff, PRMSglacier, was added to PRMS (Precipitation Runoff Modeling System), a distributed-parameter, physical-process hydrological simulation code. The extension does not require extensive on-glacier measurements or computational expense but still relies on physical principles over empirical relations as much as is feasible while maintaining model usability. PRMSglacier is validated on two basins in Alaska, the Wolverine and Gulkana Glacier basins, which have been studied since 1966 and have a substantial amount of data with which to test model performance over a long period of time covering a wide range of climatic and hydrologic conditions. When error in field measurements is considered, the Nash-Sutcliffe efficiencies of streamflow are 0.87 and 0.86, the absolute bias fractions of the winter mass balance simulations are 0.10 and 0.08, and the absolute bias fractions of the summer mass balances are 0.01 and 0.03, all computed over 42 years for the Wolverine and Gulkana Glacier basins, respectively. Without taking into account measurement error, the values are still within the range achieved by more computationally expensive codes tested over shorter time periods.
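    The two skill measures quoted above can be computed with a few lines of Python. The definitions below are the commonly used ones (Nash-Sutcliffe efficiency and an absolute bias fraction of totals) and the toy streamflow values are invented; the PRMSglacier validation may define the mass-balance bias slightly differently.

        import numpy as np

        def nash_sutcliffe(obs, sim):
            # NSE = 1 - sum((obs - sim)^2) / sum((obs - mean(obs))^2)
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return 1.0 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

        def abs_bias_fraction(obs, sim):
            # |total simulated - total observed| / |total observed|
            obs, sim = np.asarray(obs, float), np.asarray(sim, float)
            return abs(sim.sum() - obs.sum()) / abs(obs.sum())

        obs = [3.1, 4.0, 9.5, 15.2, 8.8, 4.3]      # observed streamflow (toy values)
        sim = [2.8, 4.4, 8.9, 14.1, 9.6, 4.0]      # simulated streamflow (toy values)
        print(nash_sutcliffe(obs, sim), abs_bias_fraction(obs, sim))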

  11. Error Rates in Measuring Teacher and School Performance Based on Student Test Score Gains. NCEE 2010-4004

    ERIC Educational Resources Information Center

    Schochet, Peter Z.; Chiang, Hanley S.

    2010-01-01

    This paper addresses likely error rates for measuring teacher and school performance in the upper elementary grades using value-added models applied to student test score gain data. Using realistic performance measurement system schemes based on hypothesis testing, we develop error rate formulas based on OLS and Empirical Bayes estimators.…

  12. RD Optimized, Adaptive, Error-Resilient Transmission of MJPEG2000-Coded Video over Multiple Time-Varying Channels

    NASA Astrophysics Data System (ADS)

    Bezan, Scott; Shirani, Shahram

    2006-12-01

    To reliably transmit video over error-prone channels, the data should be both source and channel coded. When multiple channels are available for transmission, the problem extends to that of partitioning the data across these channels. The condition of transmission channels, however, varies with time. Therefore, the error protection added to the data at one instant of time may not be optimal at the next. In this paper, we propose a method for adaptively adding error correction code in a rate-distortion (RD) optimized manner using rate-compatible punctured convolutional codes to an MJPEG2000 constant rate-coded frame of video. We perform an analysis on the rate-distortion tradeoff of each of the coding units (tiles and packets) in each frame and adapt the error correction code assigned to the unit taking into account the bandwidth and error characteristics of the channels. This method is applied to both single and multiple time-varying channel environments. We compare our method with a basic protection method in which data is either not transmitted, transmitted with no protection, or transmitted with a fixed amount of protection. Simulation results show promising performance for our proposed method.

  13. Statistical approaches to account for false-positive errors in environmental DNA samples.

    PubMed

    Lahoz-Monfort, José J; Guillera-Arroita, Gurutzeta; Tingley, Reid

    2016-05-01

    Environmental DNA (eDNA) sampling is prone to both false-positive and false-negative errors. We review statistical methods to account for such errors in the analysis of eDNA data and use simulations to compare the performance of different modelling approaches. Our simulations illustrate that even low false-positive rates can produce biased estimates of occupancy and detectability. We further show that removing or classifying single PCR detections in an ad hoc manner under the suspicion that such records represent false positives, as sometimes advocated in the eDNA literature, also results in biased estimation of occupancy, detectability and false-positive rates. We advocate alternative approaches to account for false-positive errors that rely on prior information, or the collection of ancillary detection data at a subset of sites using a sampling method that is not prone to false-positive errors. We illustrate the advantages of these approaches over ad hoc classifications of detections and provide practical advice and code for fitting these models in maximum likelihood and Bayesian frameworks. Given the severe bias induced by false-negative and false-positive errors, the methods presented here should be more routinely adopted in eDNA studies. © 2015 John Wiley & Sons Ltd.
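    The bias that even a low false-positive rate induces in naive occupancy estimates can be demonstrated with a small simulation. The sketch below is illustrative only: the occupancy, detection, and false-positive probabilities are arbitrary, and the 'naive estimate' is simply the fraction of sites with at least one detection rather than any of the model-based estimators discussed in the paper.

        import numpy as np

        rng = np.random.default_rng(0)
        n_sites, n_visits = 200, 5
        psi, p_detect, p_false = 0.4, 0.6, 0.05    # occupancy, detection, false-positive probability

        occupied = rng.random(n_sites) < psi
        detections = np.zeros((n_sites, n_visits), dtype=bool)
        for j in range(n_visits):
            true_pos = occupied & (rng.random(n_sites) < p_detect)
            false_pos = ~occupied & (rng.random(n_sites) < p_false)
            detections[:, j] = true_pos | false_pos

        naive = np.mean(detections.any(axis=1))    # fraction of sites with at least one detection
        print(f"true psi = {psi}, naive occupancy estimate = {naive:.2f}")   # biased upward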

  14. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

    Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) added noise into a 60 nT error (rmss); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Geomagnetic measurements remain unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).

  15. A predictive model of nuclear power plant crew decision-making and performance in a dynamic simulation environment

    NASA Astrophysics Data System (ADS)

    Coyne, Kevin Anthony

    The safe operation of complex systems such as nuclear power plants requires close coordination between the human operators and plant systems. In order to maintain an adequate level of safety following an accident or other off-normal event, the operators often are called upon to perform complex tasks during dynamic situations with incomplete information. The safety of such complex systems can be greatly improved if the conditions that could lead operators to make poor decisions and commit erroneous actions during these situations can be predicted and mitigated. The primary goal of this research project was the development and validation of a cognitive model capable of simulating nuclear plant operator decision-making during accident conditions. Dynamic probabilistic risk assessment methods can improve the prediction of human error events by providing rich contextual information and an explicit consideration of feedback arising from man-machine interactions. The Accident Dynamics Simulator paired with the Information, Decision, and Action in a Crew context cognitive model (ADS-IDAC) shows promise for predicting situational contexts that might lead to human error events, particularly knowledge driven errors of commission. ADS-IDAC generates a discrete dynamic event tree (DDET) by applying simple branching rules that reflect variations in crew responses to plant events and system status changes. Branches can be generated to simulate slow or fast procedure execution speed, skipping of procedure steps, reliance on memorized information, activation of mental beliefs, variations in control inputs, and equipment failures. Complex operator mental models of plant behavior that guide crew actions can be represented within the ADS-IDAC mental belief framework and used to identify situational contexts that may lead to human error events. This research increased the capabilities of ADS-IDAC in several key areas. The ADS-IDAC computer code was improved to support additional branching events and provide a better representation of the IDAC cognitive model. An operator decision-making engine capable of responding to dynamic changes in situational context was implemented. The IDAC human performance model was fully integrated with a detailed nuclear plant model in order to realistically simulate plant accident scenarios. Finally, the improved ADS-IDAC model was calibrated, validated, and updated using actual nuclear plant crew performance data. This research led to the following general conclusions: (1) A relatively small number of branching rules are capable of efficiently capturing a wide spectrum of crew-to-crew variabilities. (2) Compared to traditional static risk assessment methods, ADS-IDAC can provide a more realistic and integrated assessment of human error events by directly determining the effect of operator behaviors on plant thermal hydraulic parameters. (3) The ADS-IDAC approach provides an efficient framework for capturing actual operator performance data such as timing of operator actions, mental models, and decision-making activities.

  16. Addressing the Heterogeneity of Subject Indexing in the ADS Databases

    NASA Astrophysics Data System (ADS)

    Dubin, David S.

    A drawback of the current document representation scheme in the ADS abstract service is its heterogeneous subject indexing. Several related but inconsistent indexing languages are represented in ADS. A method of reconciling some indexing inconsistencies is described. Using lexical similarity alone, one out of six ADS descriptors can be automatically mapped to some other descriptor. Analysis of postings data can direct administrators to those mergings it is most important to check for errors.
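    A lexical-similarity mapping of this kind can be sketched in a few lines of Python. The descriptors below are invented, and difflib's sequence-matcher ratio over normalised word lists is used as a stand-in similarity measure; the actual ADS procedure and its thresholds are not described here.

        import difflib

        # toy descriptor lists from two inconsistent indexing languages (invented examples)
        vocab_a = ["STELLAR ATMOSPHERES", "GALAXIES: SPIRAL", "SUPERNOVA REMNANTS"]
        vocab_b = ["Atmospheres, Stellar", "Spiral galaxies", "Supernovae remnants", "Dark matter"]

        def best_match(term, candidates, cutoff=0.6):
            # map a descriptor to its most lexically similar counterpart, if similar enough
            norm = lambda s: sorted(s.lower().replace(":", " ").replace(",", " ").split())
            scored = [(difflib.SequenceMatcher(None, norm(term), norm(c)).ratio(), c)
                      for c in candidates]
            score, match = max(scored)
            return match if score >= cutoff else None

        for term in vocab_a:
            print(term, "->", best_match(term, vocab_b))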

  17. A method to improve the accuracy of pair-wise combinations of anthropometric elements when only limited data are available.

    PubMed

    Albin, Thomas J

    2013-01-01

    Designers and ergonomists occasionally must produce anthropometric models of workstations with only summary percentile data available regarding the intended users. Until now, the only option available was adding or subtracting percentiles of the anthropometric elements, e.g. heights and widths, used in the model, despite the known resultant errors in the estimate of the percentage of users accommodated. This paper introduces a new method, the Median Correlation Method (MCM), that reduces this error. The objectives are to compare the relative accuracy of MCM with that of combining percentiles for anthropometric models composed of all possible pairs of five anthropometric elements, and to describe the mathematical basis of the greater accuracy of MCM. MCM is described. 95th-percentile accommodation values are calculated for the sums and differences of all combinations of five anthropometric elements by combining percentiles and by using MCM. The resulting estimates are compared with empirical values of the 95th percentiles, and the relative errors are reported. MCM is shown to be significantly more accurate than adding percentiles and is demonstrated to have a mathematical advantage in estimating accommodation relative to adding or subtracting percentiles. MCM should be used in preference to adding or subtracting percentiles when limited data prevent the use of more sophisticated anthropometric models.
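    The error that MCM is designed to reduce can be seen in a short simulation: for correlated elements, the sum of two 95th percentiles is not the 95th percentile of the sum, so an added-percentile design value accommodates a different share of users than intended. The means, standard deviations, and correlation below are invented; the sketch demonstrates the baseline problem only and does not implement MCM itself.

        import numpy as np

        rng = np.random.default_rng(0)
        n, rho = 100_000, 0.4
        # two correlated anthropometric elements (toy values, mm)
        cov = [[40.0**2, rho * 40 * 30], [rho * 40 * 30, 30.0**2]]
        a, b = rng.multivariate_normal([910.0, 750.0], cov, size=n).T

        combined = a + b
        added_p95 = np.percentile(a, 95) + np.percentile(b, 95)   # "adding percentiles"
        true_p95 = np.percentile(combined, 95)
        covered = np.mean(combined <= added_p95)                  # share actually accommodated
        print(f"added-percentile value {added_p95:.0f} mm accommodates {100*covered:.1f}% "
              f"(true 95th percentile: {true_p95:.0f} mm)")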

  18. Pan-Antarctic analysis aggregating spatial estimates of Adélie penguin abundance reveals robust dynamics despite stochastic noise.

    PubMed

    Che-Castaldo, Christian; Jenouvrier, Stephanie; Youngflesh, Casey; Shoemaker, Kevin T; Humphries, Grant; McDowall, Philip; Landrum, Laura; Holland, Marika M; Li, Yun; Ji, Rubao; Lynch, Heather J

    2017-10-10

    Colonially-breeding seabirds have long served as indicator species for the health of the oceans on which they depend. Abundance and breeding data are repeatedly collected at fixed study sites in the hopes that changes in abundance and productivity may be useful for adaptive management of marine resources, but their suitability for this purpose is often unknown. To address this, we fit a Bayesian population dynamics model that includes process and observation error to all known Adélie penguin abundance data (1982-2015) in the Antarctic, covering >95% of their population globally. We find that process error exceeds observation error in this system, and that continent-wide "year effects" strongly influence population growth rates. Our findings have important implications for the use of Adélie penguins in Southern Ocean feedback management, and suggest that aggregating abundance across space provides the fastest reliable signal of true population change for species whose dynamics are driven by stochastic processes. Adélie penguins are a key Antarctic indicator species, but data patchiness has challenged efforts to link population dynamics to key drivers. Che-Castaldo et al. resolve this issue using a pan-Antarctic Bayesian model to infer missing data, and show that spatial aggregation leads to more robust inference regarding dynamics.

  19. TH-AB-202-02: Real-Time Verification and Error Detection for MLC Tracking Deliveries Using An Electronic Portal Imaging Device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    J Zwan, B; Central Coast Cancer Centre, Gosford, NSW; Colvill, E

    2016-06-15

    Purpose: The added complexity of real-time adaptive multi-leaf collimator (MLC) tracking increases the likelihood of undetected MLC delivery errors. In this work we develop and test a system for real-time delivery verification and error detection for MLC tracking radiotherapy using an electronic portal imaging device (EPID). Methods: The delivery verification system relies on acquisition and real-time analysis of transit EPID image frames acquired at 8.41 fps. In-house software was developed to extract the MLC positions from each image frame. Three comparison metrics were used to verify the MLC positions in real time: (1) field size, (2) field location, and (3) field shape. The delivery verification system was tested for 8 VMAT MLC tracking deliveries (4 prostate and 4 lung) where real patient target motion was reproduced using a Hexamotion motion stage and a Calypso system. Sensitivity and detection delay were quantified for various types of MLC and system errors. Results: For both the prostate and lung test deliveries the MLC-defined field size was measured with an accuracy of 1.25 cm² (1 SD). The field location was measured with an accuracy of 0.6 mm and 0.8 mm (1 SD) for lung and prostate respectively. Field location errors (i.e. tracking in the wrong direction) with a magnitude of 3 mm were detected within 0.4 s of occurrence in the X direction and 0.8 s in the Y direction. Systematic MLC gap errors were detected as small as 3 mm. The method was not found to be sensitive to random MLC errors and individual MLC calibration errors up to 5 mm. Conclusion: EPID imaging may be used for independent real-time verification of MLC trajectories during MLC tracking deliveries. Thresholds have been determined for error detection and the system has been shown to be sensitive to a range of delivery errors.

  20. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices.

    PubMed

    Marathe, A R; Taylor, D M

    2015-08-01

    Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.

  1. The impact of command signal power distribution, processing delays, and speed scaling on neurally-controlled devices

    NASA Astrophysics Data System (ADS)

    Marathe, A. R.; Taylor, D. M.

    2015-08-01

    Objective. Decoding algorithms for brain-machine interfacing (BMI) are typically only optimized to reduce the magnitude of decoding errors. Our goal was to systematically quantify how four characteristics of BMI command signals impact closed-loop performance: (1) error magnitude, (2) distribution of different frequency components in the decoding errors, (3) processing delays, and (4) command gain. Approach. To systematically evaluate these different command features and their interactions, we used a closed-loop BMI simulator where human subjects used their own wrist movements to command the motion of a cursor to targets on a computer screen. Random noise with three different power distributions and four different relative magnitudes was added to the ongoing cursor motion in real time to simulate imperfect decoding. These error characteristics were tested with four different visual feedback delays and two velocity gains. Main results. Participants had significantly more trouble correcting for errors with a larger proportion of low-frequency, slow-time-varying components than they did with jittery, higher-frequency errors, even when the error magnitudes were equivalent. When errors were present, a movement delay often increased the time needed to complete the movement by an order of magnitude more than the delay itself. Scaling down the overall speed of the velocity command can actually speed up target acquisition time when low-frequency errors and delays are present. Significance. This study is the first to systematically evaluate how the combination of these four key command signal features (including the relatively-unexplored error power distribution) and their interactions impact closed-loop performance independent of any specific decoding method. The equations we derive relating closed-loop movement performance to these command characteristics can provide guidance on how best to balance these different factors when designing BMI systems. The equations reported here also provide an efficient way to compare a diverse range of decoding options offline.

  2. The Use of Analog Track Angle Error Display for Improving Simulated GPS Approach Performance

    DOT National Transportation Integrated Search

    1995-08-01

    The effect of adding track angle error (TAE) information to general aviation aircraft cockpit displays used for GPS : nonprecision instrument approaches was studied experimentally. Six pilots flew 120 approaches in a Frasca 242 light : twin aircraft ...

  3. An examination of the operational error database for air route traffic control centers.

    DOT National Transportation Integrated Search

    1993-12-01

    Monitoring the frequency and determining the causes of operational errors - defined as the loss of prescribed separation between aircraft - is one approach to assessing the operational safety of the air traffic control system. The Federal Aviation Ad...

  4. Evaluation of causes and frequency of medication errors during information technology downtime.

    PubMed

    Hanuscak, Tara L; Szeinbach, Sheryl L; Seoane-Vazquez, Enrique; Reichert, Brendan J; McCluskey, Charles F

    2009-06-15

    The causes and frequency of medication errors occurring during information technology downtime were evaluated. Individuals from a convenience sample of 78 hospitals who were directly responsible for supporting and maintaining clinical information systems (CISs) and automated dispensing systems (ADSs) were surveyed using an online tool between February 2007 and May 2007 to determine if medication errors were reported during periods of system downtime. The errors were classified using the National Coordinating Council for Medication Error Reporting and Prevention severity scoring index. The percentage of respondents reporting downtime was estimated. Of the 78 eligible hospitals, 32 respondents with CIS and ADS responsibilities completed the online survey for a response rate of 41%. For computerized prescriber order entry, patch installations and system upgrades caused an average downtime response of 57% over a 12-month period. Lost interface and interface malfunction were reported for centralized and decentralized ADSs, with average downtime responses of 34% and 29%, respectively. The average downtime response was 31% for software malfunctions linked to clinical decision-support systems. Although patient harm did not result from 30 (54%) medication errors, the potential for harm was present for 9 (16%) of these errors. Medication errors occurred during CIS and ADS downtime despite the availability of backup systems and standard protocols to handle periods of system downtime. Efforts should be directed toward reducing the frequency and length of downtime in order to minimize medication errors during such periods.

  5. The high accuracy data processing system of laser interferometry signals based on MSP430

    NASA Astrophysics Data System (ADS)

    Qi, Yong-yue; Lin, Yu-chi; Zhao, Mei-rong

    2009-07-01

    A single-frequency laser interferometer generally uses two orthogonal signals for direction discrimination and electronic subdivision. However, the interference signals usually contain three errors: zero-offset error, unequal-amplitude error, and quadrature phase-shift error. These three errors have a serious impact on subdivision precision. Compensation of all three errors is achieved using the Heydemann error compensation algorithm. Because the full Heydemann model is computationally demanding, an improved algorithm is proposed that exploits the fact that only one data item changes in each fitting iteration, effectively reducing the calculation time. A real-time, dynamic compensation circuit is then designed. With an MSP430 microcontroller as the core of the hardware system, the two input signals containing the three errors are digitized by the AD7862. After processing with the improved algorithm, two ideal, error-free signals are output by the AD7225. At the same time, the two original signals are converted into square waves and fed to the direction-discrimination circuit. The pulses output by the direction-discrimination circuit are counted by the microcontroller's timer. From the pulse count and the software subdivision, the final result is shown on an LED display. The algorithm and circuit were used to test a laser interferometer with 8-times optical path difference, and a measurement accuracy of 12-14 nm was achieved.
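
    As an illustration of the Heydemann-style compensation described above, the sketch below applies the common textbook correction for zero offsets, amplitude mismatch and quadrature phase error to a pair of simulated fringe signals, assuming the error parameters have already been estimated by the ellipse fit. The parameter names (p, q, r, alpha) follow the textbook form, not necessarily the authors' notation, and the MSP430/ADC hardware path is not modeled.

        import numpy as np

        def heydemann_correct(u1, u2, p, q, r, alpha):
            """Remove zero offsets (p, q), amplitude mismatch (r) and quadrature
            phase error (alpha, radians) from two nominally orthogonal signals."""
            x = u1 - p
            y = (r * (u2 - q) - x * np.sin(alpha)) / np.cos(alpha)
            return x, y

        # Simulated interferometer fringes containing all three error types.
        phi = np.linspace(0, 4 * np.pi, 1000)            # true interferometric phase
        p, q, r, alpha = 0.05, -0.03, 1.2, np.deg2rad(5)
        u1 = np.cos(phi) + p
        u2 = np.sin(phi + alpha) / r + q

        x, y = heydemann_correct(u1, u2, p, q, r, alpha)
        phase = np.unwrap(np.arctan2(y, x))              # corrected phase for subdivision
        print(np.max(np.abs(phase - phi)))               # residual error is ~0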

  6. Beam-specific planning volumes for scattered-proton lung radiotherapy

    NASA Astrophysics Data System (ADS)

    Flampouri, S.; Hoppe, B. S.; Slopsema, R. L.; Li, Z.

    2014-08-01

    This work describes the clinical implementation of a beam-specific planning treatment volume (bsPTV) calculation for lung cancer proton therapy and its integration into the treatment planning process. Uncertainties incorporated in the calculation of the bsPTV included setup errors, machine delivery variability, breathing effects, inherent proton range uncertainties and combinations of the above. Margins were added for translational and rotational setup errors and breathing motion variability during the course of treatment, as well as for their effect on the proton range of each treatment field. The effect of breathing motion and deformation on the proton range was calculated from 4D computed tomography data. Range uncertainties were considered taking into account the individual voxel HU uncertainty along each proton beamlet. Beam-specific treatment volumes generated for 12 patients were used: a) as planning targets, b) for routine plan evaluation, c) to aid beam angle selection and d) to create beam-specific margins for organs at risk to ensure sparing. The alternative planning technique based on the bsPTVs produced target coverage similar to that of the conventional proton plans while better sparing the surrounding tissues. Conventional proton plans were evaluated by comparing the dose distributions per beam with the corresponding bsPTV. The bsPTV volume as a function of beam angle revealed some unexpected sources of uncertainty and could help the planner choose more robust beams. Beam-specific planning volume for the spinal cord was used for dose distribution shaping to ensure organ sparing laterally and distally to the beam.

  7. Quantum chemical modeling of zeolite-catalyzed methylation reactions: toward chemical accuracy for barriers.

    PubMed

    Svelle, Stian; Tuma, Christian; Rozanska, Xavier; Kerber, Torsten; Sauer, Joachim

    2009-01-21

    The methylation of ethene, propene, and t-2-butene by methanol over the acidic microporous H-ZSM-5 catalyst has been investigated by a range of computational methods. Density functional theory (DFT) with periodic boundary conditions (PBE functional) fails to describe the experimentally determined decrease of apparent energy barriers with the alkene size due to inadequate description of dispersion forces. Adding a damped dispersion term expressed as a parametrized sum over atom pair C(6) contributions leads to uniformly underestimated barriers due to self-interaction errors. A hybrid MP2:DFT scheme is presented that combines MP2 energy calculations on a series of cluster models of increasing size with periodic DFT calculations, which allows extrapolation to the periodic MP2 limit. Additionally, errors caused by the use of finite basis sets, contributions of higher order correlation effects, zero-point vibrational energy, and thermal contributions to the enthalpy were evaluated and added to the "periodic" MP2 estimate. This multistep approach leads to enthalpy barriers at 623 K of 104, 77, and 48 kJ/mol for ethene, propene, and t-2-butene, respectively, which deviate from the experimentally measured values by 0, +13, and +8 kJ/mol. Hence, enthalpy barriers can be calculated with near chemical accuracy, which constitutes significant progress in the quantum chemical modeling of reactions in heterogeneous catalysis in general and microporous zeolites in particular.
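
    The barrier estimate described above is assembled additively from a periodic DFT value plus a series of corrections. The snippet below only illustrates that bookkeeping; every number in it is a placeholder, not a value from the paper.

        # Additive assembly of the barrier estimate; all numbers are placeholders (kJ/mol).
        corrections = {
            "periodic DFT (PBE) barrier":               75.0,
            "MP2 - DFT correction, extrapolated":       20.0,
            "basis-set incompleteness":                  3.0,
            "higher-order correlation beyond MP2":       2.0,
            "zero-point vibrational energy":            -4.0,
            "thermal enthalpy contribution (623 K)":     8.0,
        }
        enthalpy_barrier = sum(corrections.values())
        print(f"illustrative enthalpy barrier at 623 K: {enthalpy_barrier:.1f} kJ/mol")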

  8. Modeling and analysis of pinhole occulter experiment

    NASA Technical Reports Server (NTRS)

    Ring, J. R.

    1986-01-01

    The objectives were to improve pointing control system implementation by converting the dynamic compensator from a continuous domain representation to a discrete one; to determine pointing stability sensitivities to sensor and actuator errors by adding sensor and actuator error models to TREETOPS and by developing an error budget for meeting pointing stability requirements; and to determine pointing performance for alternate mounting bases (space station for example).

  9. An Adaptive Method of Lines with Error Control for Parabolic Equations of the Reaction-Diffusion Type.

    DTIC Science & Technology

    1984-06-01

    Reaction-diffusion processes occur in many branches of biology and physical chemistry. Examples ... to model reaction-diffusion phenomena. The primary goal of this adaptive method is to keep a particular norm of the space discretization error less ... (Report AD-A142 253, An Adaptive Method of Lines with Error Control, Institute for Physical Science and Technology, I. Babuska ...)

  10. Free-Inertial and Damped-Inertial Navigation Mechanization and Error Equations

    DTIC Science & Technology

    1975-04-18

    AD-A014 356, Free-Inertial and Damped-Inertial Navigation Mechanization and Error Equations, Warren G. Heller, The Analytic Sciences Corporation, Technical Report TR-312-1-1, April 18, 1975. Period covered: 8/20/73 - 8/20/74.

  11. Evaluation of voice codecs for the Australian mobile satellite system

    NASA Technical Reports Server (NTRS)

    Bundrock, Tony; Wilkinson, Mal

    1990-01-01

    The evaluation procedure to choose a low bit rate voice coding algorithm is described for the Australian land mobile satellite system. The procedure is designed to assess both the inherent quality of the codec under 'normal' conditions and its robustness under 'severe' conditions. For the assessment, normal conditions were chosen to be a random bit error rate with added background acoustic noise, and the severe condition is designed to represent burst error conditions when the mobile satellite channel suffers from signal fading due to roadside vegetation. The assessment is divided into two phases. First, a reduced set of conditions is used to determine a short list of candidate codecs for more extensive testing in the second phase. The first-phase conditions include quality and robustness, and codecs are ranked with a 60:40 weighting on the two. Second, the short-listed codecs are assessed over a range of input voice levels, BERs, background noise conditions, and burst error distributions. Assessment is by subjective rating on a five-level opinion scale, and all results are then used to derive a weighted Mean Opinion Score using appropriate weights for each of the test conditions.

  12. Understanding the complete pathophysiology of chronic mild to moderate neck pain: Implications for the inclusion of a comprehensive sensorimotor evaluation.

    PubMed

    Cheever, Kelly M; Myrer, J William; Johnson, A Wayne; Fellingham, Gilbert W

    2017-09-22

    Inconsistencies in the literature concerning the effect of neck pain have led to a lack of understanding of its complete pathophysiology. While the effect of neck pain on motor function, as measured by active range of motion and isometric neck strength, is well documented, its effect on sensory measures such as tactile acuity and neck reposition error (NRE) remains poorly understood. The purpose of this study was to evaluate a combined sensorimotor evaluation and explore the potential benefits of incorporating both sensory and motor tasks into the physical evaluation of neck pain sufferers, to gain added knowledge of the complete pathophysiology of their health status. A cross-sectional study measured neck joint reposition error, tactile acuity, neck isometric strength and range of motion in 40 volunteer participants (22 pain, 18 control). A statistically significant increase in NRE was observed in participants suffering from neck pain, both in flexion (2.75° ± 1.52° vs. 4.53° ± 1.74°) and in extension (3.78° ± 1.95° vs. 5.77° ± 2.73°). Additionally, the C5 dermatome was found to be the most affected. No differences were found in neck strength or neck range of motion between healthy controls and patients with chronic moderate neck pain.

  13. Progress in The Semantic Analysis of Scientific Code

    NASA Technical Reports Server (NTRS)

    Stewart, Mark

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.

  14. Multisource Localization from Delay and Doppler Measurements.

    DTIC Science & Technology

    1986-07-01

    The required partial derivatives are evaluated using the chain rule and the product rule ... The mean square error is shown to be in the range of two to ten times the corresponding Cramer-Rao bounds when these techniques are applied to 2-sensor ...

  15. Evidence from the Lamarck granodiorite for rapid Late Cretaceous crust formation in California

    USGS Publications Warehouse

    Coleman, D.S.; Frost, T.P.; Glazner, A.F.

    1992-01-01

    Strontium and neodymium isotopic data for rocks from the voluminous 90-million-year-old Lamarck intrusive suite in the Sierra Nevada batholith, California, show little variation across a compositional range from gabbro to granite. Data for three different gabbro intrusions within the suite are identical within analytical error and are consistent with derivation from an enriched mantle source. Recognition of local involvement of enriched mantle during generation of the Sierran batholith modifies estimates of crustal growth rates in the United States. These data indicate that parts of the Sierra Nevada batholith may consist almost entirely of juvenile crust added during Cretaceous magmatism.

  16. Fault-tolerant quantum error detection.

    PubMed

    Linke, Norbert M; Gutierrez, Mauricio; Landsman, Kevin A; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R; Monroe, Christopher

    2017-10-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors.

  17. Error response test system and method using test mask variable

    NASA Technical Reports Server (NTRS)

    Gender, Thomas K. (Inventor)

    2006-01-01

    An error response test system and method with increased functionality and improved performance is provided. The error response test system provides the ability to inject errors into the application under test to test the error response of the application under test in an automated and efficient manner. The error response system injects errors into the application through a test mask variable. The test mask variable is added to the application under test. During normal operation, the test mask variable is set to allow the application under test to operate normally. During testing, the error response test system can change the test mask variable to introduce an error into the application under test. The error response system can then monitor the application under test to determine whether the application has the correct response to the error.
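
    A generic, hedged sketch of the idea described in the abstract is shown below: a test-mask variable is left at a neutral value during normal operation and set by the test harness to inject specific error conditions. The variable names and error conditions are illustrative only and do not reproduce the patented implementation.

        # Error injection through a test-mask variable (names and faults are illustrative).
        TEST_MASK_NONE = 0x0          # normal operation: no errors injected
        TEST_MASK_BAD_CHECKSUM = 0x1
        TEST_MASK_TIMEOUT = 0x2

        test_mask = TEST_MASK_NONE    # the test harness changes this during testing

        def read_sensor():
            """Application code under test, instrumented with the test mask."""
            if test_mask & TEST_MASK_TIMEOUT:
                raise TimeoutError("injected timeout")   # simulated fault
            value, checksum = 42, 42
            if test_mask & TEST_MASK_BAD_CHECKSUM:
                checksum += 1                            # simulated data corruption
            if value != checksum:
                return None                              # the error response being tested
            return value

        # Test harness: inject an error and verify the application's response to it.
        test_mask = TEST_MASK_BAD_CHECKSUM
        assert read_sensor() is None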

  18. Decreasing range resolution of a SAR image to permit correction of motion measurement errors beyond the SAR range resolution

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-07-20

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  19. Comparing range data across the slow-time dimension to correct motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W.; Heard, Freddie E.; Cordaro, J. Thomas

    2010-08-17

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  20. 78 FR 9455 - Agency Information Collection (eBenefits Portal) Activity Under OMB Review; Correction

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-02-08

    ... DEPARTMENT OF VETERANS AFFAIRS [OMB Control No. 2900-0737] Agency Information Collection (e... error. The notice incorrectly identified the responsible VA organization. This document corrects that error by removing ``Office of Information and Technology'' and adding, in its place, ``Veterans Benefits...

  1. The use of genetic programming to develop a predictor of swash excursion on sandy beaches

    NASA Astrophysics Data System (ADS)

    Passarella, Marinella; Goldstein, Evan B.; De Muro, Sandro; Coco, Giovanni

    2018-02-01

    We use genetic programming (GP), a type of machine learning (ML) approach, to predict the total and infragravity swash excursion using previously published data sets that have been used extensively in swash prediction studies. Three previously published works with a range of new conditions are added to this data set to extend the range of measured swash conditions. Using this newly compiled data set we demonstrate that a ML approach can reduce the prediction errors compared to well-established parameterizations and therefore it may improve coastal hazards assessment (e.g. coastal inundation). Predictors obtained using GP can also be physically sound and replicate the functionality and dependencies of previous published formulas. Overall, we show that ML techniques are capable of both improving predictability (compared to classical regression approaches) and providing physical insight into coastal processes.

  2. JPL-ANTOPT antenna structure optimization program

    NASA Technical Reports Server (NTRS)

    Strain, D. M.

    1994-01-01

    New antenna path-length error and pointing-error structure optimization codes were recently added to the MSC/NASTRAN structural analysis computer program. Path-length and pointing errors are important measures of structure-related antenna performance. The path-length and pointing errors are treated as scalar displacements for static loading cases. These scalar displacements can be subject to constraint during the optimization process. Path-length and pointing-error calculations supplement the other optimization and sensitivity capabilities of NASTRAN. The analysis and design functions were implemented as 'DMAP ALTERs' to the Design Optimization (SOL 200) Solution Sequence of MSC-NASTRAN, Version 67.5.

  3. Correction of motion measurement errors beyond the range resolution of a synthetic aperture radar

    DOEpatents

    Doerry, Armin W [Albuquerque, NM; Heard, Freddie E [Albuquerque, NM; Cordaro, J Thomas [Albuquerque, NM

    2008-06-24

    Motion measurement errors that extend beyond the range resolution of a synthetic aperture radar (SAR) can be corrected by effectively decreasing the range resolution of the SAR in order to permit measurement of the error. Range profiles can be compared across the slow-time dimension of the input data in order to estimate the error. Once the error has been determined, appropriate frequency and phase correction can be applied to the uncompressed input data, after which range and azimuth compression can be performed to produce a desired SAR image.

  4. Gravity and Nonconservative Force Model Tuning for the GEOSAT Follow-On Spacecraft

    NASA Technical Reports Server (NTRS)

    Lemoine, Frank G.; Zelensky, Nikita P.; Rowlands, David D.; Luthcke, Scott B.; Chinn, Douglas S.; Marr, Gregory C.; Smith, David E. (Technical Monitor)

    2000-01-01

    The US Navy's GEOSAT Follow-On spacecraft was launched on February 10, 1998 and the primary objective of the mission was to map the oceans using a radar altimeter. Three radar altimeter calibration campaigns have been conducted in 1999 and 2000. The spacecraft is tracked by satellite laser ranging (SLR) and Doppler beacons and a limited amount of data have been obtained from the Global Positioning Receiver (GPS) on board the satellite. Even with EGM96, the predicted radial orbit error due to gravity field mismodelling (to 70x70) remains high at 2.61 cm (compared to 0.88 cm for TOPEX). We report on the preliminary gravity model tuning for GFO using SLR, and altimeter crossover data. Preliminary solutions using SLR and GFO/GFO crossover data from CalVal campaigns I and II in June-August 1999, and January-February 2000 have reduced the predicted radial orbit error to 1.9 cm and further reduction will be possible when additional data are added to the solutions. The gravity model tuning has improved principally the low order m-daily terms and has reduced significantly the geographically correlated error present in this satellite orbit. In addition to gravity field mismodelling, the largest contributor to the orbit error is the non-conservative force mismodelling. We report on further nonconservative force model tuning results using available data from over one cycle in beta prime.

  5. The impact of satellite temperature soundings on the forecasts of a small national meteorological service

    NASA Technical Reports Server (NTRS)

    Wolfson, N.; Thomasell, A.; Alperson, Z.; Brodrick, H.; Chang, J. T.; Gruber, A.; Ohring, G.

    1984-01-01

    The impact of introducing satellite temperature sounding data on a numerical weather prediction model of a national weather service is evaluated. A dry, five-level primitive equation model, which covers most of the Northern Hemisphere, is used for these experiments. A series of parallel forecast runs out to 48 hours is made with three different sets of initial conditions: (1) NOSAT runs, in which only conventional surface and upper air observations are used; (2) SAT runs, in which satellite soundings are added to the conventional data over oceanic regions and North Africa; and (3) ALLSAT runs, in which the conventional upper air observations are replaced by satellite soundings over the entire model domain. The impact on the forecasts is evaluated by three verification methods: the RMS errors in sea level pressure forecasts, systematic errors in sea level pressure forecasts, and errors in subjective forecasts of significant weather elements for a selected portion of the model domain. For the relatively short range of the present forecasts, the major beneficial impacts on the sea level pressure forecasts are found precisely in those areas where the satellite soundings are inserted and where conventional upper air observations are sparse. The RMS and systematic errors are reduced in these regions. The subjective forecasts of significant weather elements are improved with the use of the satellite data. It is found that the ALLSAT forecasts are of a quality comparable to the SAT forecasts.

  6. Accuracy Enhancement of Inertial Sensors Utilizing High Resolution Spectral Analysis

    PubMed Central

    Noureldin, Aboelmagd; Armstrong, Justin; El-Shafie, Ahmed; Karamat, Tashfeen; McGaughey, Don; Korenberg, Michael; Hussain, Aini

    2012-01-01

    In both military and civilian applications, the inertial navigation system (INS) and the global positioning system (GPS) are two complementary technologies that can be integrated to provide reliable positioning and navigation information for land vehicles. The accuracy enhancement of INS sensors and the integration of INS with GPS are the subjects of widespread research. Wavelet de-noising of INS sensors has had limited success in removing the long-term (low-frequency) inertial sensor errors. The primary objective of this research is to develop a novel inertial sensor accuracy enhancement technique that can remove both short-term and long-term error components from inertial sensor measurements prior to INS mechanization and INS/GPS integration. A high resolution spectral analysis technique called the fast orthogonal search (FOS) algorithm is used to accurately model the low frequency range of the spectrum, which includes the vehicle motion dynamics and inertial sensor errors. FOS models the spectral components with the most energy first and uses an adaptive threshold to stop adding frequency terms when fitting a term does not reduce the mean squared error more than fitting white noise. The proposed method was developed, tested and validated through road test experiments involving both low-end tactical grade and low cost MEMS-based inertial systems. The results demonstrate that in most cases the position accuracy during GPS outages using FOS de-noised data is superior to the position accuracy using wavelet de-noising.
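
    The following is a much-simplified, greedy stand-in for the behavior described above (highest-energy spectral terms fitted first, with an adaptive stopping threshold); it is not the actual fast orthogonal search implementation, and the candidate frequency grid, threshold and test signal are assumptions.

        import numpy as np

        def greedy_sinusoid_fit(t, y, freqs, noise_mse_gain):
            """Add sine/cosine pairs highest-energy-first; stop when the MSE reduction
            from the best remaining term falls below the white-noise level."""
            residual = y - y.mean()
            terms = []
            while True:
                best = None
                for f in freqs:
                    X = np.column_stack([np.cos(2 * np.pi * f * t),
                                         np.sin(2 * np.pi * f * t)])
                    coef, *_ = np.linalg.lstsq(X, residual, rcond=None)
                    new_resid = residual - X @ coef
                    drop = np.mean(residual ** 2) - np.mean(new_resid ** 2)
                    if best is None or drop > best[0]:
                        best = (drop, f, coef, new_resid)
                drop, f, coef, new_resid = best
                if drop < noise_mse_gain:               # adaptive stopping threshold
                    return terms, residual
                terms.append((f, coef))
                residual = new_resid

        # Example: two tones buried in noise; the fit should pick out 0.4 Hz and 2.0 Hz.
        rng = np.random.default_rng(1)
        t = np.arange(0, 10, 0.01)
        y = (np.sin(2 * np.pi * 0.4 * t) + 0.5 * np.sin(2 * np.pi * 2.0 * t)
             + 0.3 * rng.normal(size=t.size))
        terms, _ = greedy_sinusoid_fit(t, y, freqs=np.arange(0.1, 5.0, 0.1),
                                       noise_mse_gain=0.01)
        print([round(f, 1) for f, _ in terms])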

  7. Acute Diarrheal Syndromic Surveillance

    PubMed Central

    Kam, H.J.; Choi, S.; Cho, J.P.; Min, Y.G.; Park, R.W.

    2010-01-01

    Objective In an effort to identify and characterize the environmental factors that affect the number of patients with acute diarrheal (AD) syndrome, we developed and tested two regional surveillance models including holiday and weather information in addition to visitor records, at emergency medical facilities in the Seoul metropolitan area of Korea. Methods With 1,328,686 emergency department visitor records from the National Emergency Department Information system (NEDIS) and the holiday and weather information, two seasonal ARIMA models were constructed: (1) The simple model (only with total patient number), (2) the environmental factor-added model. The stationary R-squared was utilized as an in-sample model goodness-of-fit statistic for the constructed models, and the cumulative mean of the Mean Absolute Percentage Error (MAPE) was used to measure post-sample forecast accuracy over the next 1 month. Results The (1,0,1)(0,1,1)7 ARIMA model resulted in an adequate model fit for the daily number of AD patient visits over 12 months for both cases. Among various features, the total number of patient visits was selected as a commonly influential independent variable. Additionally, for the environmental factor-added model, holidays and daily precipitation were selected as features that statistically significantly affected model fitting. Stationary R-squared values were changed in a range of 0.651-0.828 (simple), and 0.805-0.844 (environmental factor-added) with p<0.05. In terms of prediction, the MAPE values changed within 0.090-0.120 and 0.089-0.114, respectively. Conclusion The environmental factor-added model yielded better MAPE values. Holiday and weather information appear to be crucial for the construction of an accurate syndromic surveillance model for AD, in addition to the visitor and assessment records. PMID:23616829
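
    For readers who want to reproduce the model structure, the sketch below fits a (1,0,1)(0,1,1)7 seasonal ARIMA model with exogenous regressors using statsmodels and scores a one-month forecast with MAPE. The synthetic data, column names and coefficient values are assumptions; only the model order comes from the abstract.

        import numpy as np
        import pandas as pd
        from statsmodels.tsa.statespace.sarimax import SARIMAX

        # Synthetic daily data standing in for the NEDIS records (assumed names).
        rng = np.random.default_rng(0)
        n = 365
        dates = pd.date_range("2009-01-01", periods=n, freq="D")
        exog = pd.DataFrame({
            "total_visits": rng.poisson(3000, n),
            "holiday": (dates.dayofweek >= 5).astype(int),  # crude stand-in for holidays
            "precip_mm": rng.gamma(1.0, 3.0, n),
        }, index=dates)
        ad_visits = pd.Series(50 + 0.01 * exog["total_visits"] + 5 * exog["holiday"]
                              + rng.normal(0, 5, n), index=dates)

        # (1,0,1)(0,1,1)7 seasonal ARIMA with environmental regressors, as in the abstract.
        train, test = ad_visits[:-30], ad_visits[-30:]
        model = SARIMAX(train, exog=exog[:-30], order=(1, 0, 1),
                        seasonal_order=(0, 1, 1, 7))
        fit = model.fit(disp=False)

        # One-month-ahead forecast and its Mean Absolute Percentage Error.
        forecast = fit.forecast(steps=30, exog=exog[-30:])
        mape = float(np.mean(np.abs((test - forecast) / test)))
        print(f"MAPE over the held-out month: {mape:.3f}")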

  8. 76 FR 72082 - Miscellaneous Administrative Changes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-11-22

    ... the 2008 administrative rule. Revise Table Formatting Error in 10 CFR Part 171 The table in paragraph (c) of Sec. 171.16 is missing a colon and a hard return that would separate the heading... subsequent list item, ``35 to 500 employees.'' The formatting errors are corrected, adding a colon after the...

  9. Color discrimination performance in patients with Alzheimer's disease.

    PubMed

    Salamone, Giovanna; Di Lorenzo, Concetta; Mosti, Serena; Lupo, Federica; Cravello, Luca; Palmer, Katie; Musicco, Massimo; Caltagirone, Carlo

    2009-01-01

    Visual deficits are frequent in Alzheimer's disease (AD), yet little is known about the nature of these disturbances. The aim of the present study was to investigate color discrimination in patients with AD to determine whether impairment of this visual function is a cognitive or perceptive/sensory disturbance. A cross-sectional clinical study was conducted in a specialized dementia unit on 20 patients with mild/moderate AD and 21 age-matched normal controls. Color discrimination was measured by the Farnsworth-Munsell 100 hue test. Cognitive functioning was measured with the Mini-Mental State Examination (MMSE) and a comprehensive battery of neuropsychological tests. The scores obtained on the color discrimination test were compared between AD patients and controls adjusting for global and domain-specific cognitive performance. Color discrimination performance was inversely related to MMSE score. AD patients had a higher number of errors in color discrimination than controls (mean +/- SD total error score: 442.4 +/- 84.5 vs. 304.1 +/- 45.9). This trend persisted even after adjustment for MMSE score and cognitive performance on specific cognitive domains. A specific reduction of color discrimination capacity is present in AD patients. This deficit does not solely depend upon cognitive impairment, and involvement of the primary visual cortex and/or retinal ganglionar cells may be contributory.

  10. Fault-tolerant quantum error detection

    PubMed Central

    Linke, Norbert M.; Gutierrez, Mauricio; Landsman, Kevin A.; Figgatt, Caroline; Debnath, Shantanu; Brown, Kenneth R.; Monroe, Christopher

    2017-01-01

    Quantum computers will eventually reach a size at which quantum error correction becomes imperative. Quantum information can be protected from qubit imperfections and flawed control operations by encoding a single logical qubit in multiple physical qubits. This redundancy allows the extraction of error syndromes and the subsequent detection or correction of errors without destroying the logical state itself through direct measurement. We show the encoding and syndrome measurement of a fault-tolerantly prepared logical qubit via an error detection protocol on four physical qubits, represented by trapped atomic ions. This demonstrates the robustness of a logical qubit to imperfections in the very operations used to encode it. The advantage persists in the face of large added error rates and experimental calibration errors. PMID:29062889

  11. Error Discounting in Probabilistic Category Learning

    PubMed Central

    Craig, Stewart; Lewandowsky, Stephan; Little, Daniel R.

    2011-01-01

    Some current theories of probabilistic categorization assume that people gradually attenuate their learning in response to unavoidable error. However, existing evidence for this error discounting is sparse and open to alternative interpretations. We report two probabilistic-categorization experiments that investigated error discounting by shifting feedback probabilities to new values after different amounts of training. In both experiments, responding gradually became less responsive to errors, and learning was slowed for some time after the feedback shift. Both results are indicative of error discounting. Quantitative modeling of the data revealed that adding a mechanism for error discounting significantly improved the fits of an exemplar-based and a rule-based associative learning model, as well as of a recency-based model of categorization. We conclude that error discounting is an important component of probabilistic learning. PMID:21355666

  12. 78 FR 68360 - Airworthiness Directives; Rolls-Royce plc Turbofan Engines

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-11-14

    ... Airworthiness Directives; Rolls-Royce plc Turbofan Engines AGENCY: Federal Aviation Administration (FAA), DOT... turbofan engines. The AD number is incorrect in the Regulatory text. This document corrects that error. In... turbofan engines. As published, the AD number 2013-19-17 under Sec. 39.13 [Amended], is incorrect. No other...

  13. A Preliminary ZEUS Lightning Location Error Analysis Using a Modified Retrieval Theory

    NASA Technical Reports Server (NTRS)

    Elander, Valjean; Koshak, William; Phanord, Dieudonne

    2004-01-01

    The ZEUS long-range VLF arrival time difference lightning detection network now covers both Europe and Africa, and there are plans for further expansion into the western hemisphere. In order to fully optimize and assess ZEUS lightning location retrieval errors and to determine the best placement of future receivers expected to be added to the network, a software package is being developed jointly between the NASA Marshall Space Flight Center (MSFC) and the University of Nevada Las Vegas (UNLV). The software package, called the ZEUS Error Analysis for Lightning (ZEAL), will be used to obtain global scale lightning location retrieval error maps using both a Monte Carlo approach and chi-squared curvature matrix theory. At the core of ZEAL will be an implementation of an Iterative Oblate (IO) lightning location retrieval method recently developed at MSFC. The IO method will be appropriately modified to account for variable wave propagation speed, and the new retrieval results will be compared with the current ZEUS retrieval algorithm to assess potential improvements. In this preliminary ZEAL work effort, we defined 5000 source locations evenly distributed across the Earth. We then used the existing (as well as potential future) ZEUS sites to simulate arrival time data between each source and each ZEUS site. A total of 100 sources were considered at each of the 5000 locations, and timing errors were selected from a normal distribution having a mean of 0 seconds and a standard deviation of 20 microseconds. This simulated "noisy" dataset was analyzed using the IO algorithm to estimate source locations. The exact locations were compared with the retrieved locations, and the results are summarized via several color-coded "error maps."
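
    A hedged Monte Carlo sketch in the spirit of the simulation described above: arrival times are perturbed with 20-microsecond Gaussian timing errors and a source location is retrieved by a coarse grid search over arrival-time differences. The receiver coordinates, grid, constant propagation speed and grid-search retrieval are illustrative assumptions, not the ZEAL/IO algorithm; at this grid spacing the retrieval error is dominated by grid quantization rather than timing noise.

        import numpy as np

        C = 3.0e8            # assumed constant VLF propagation speed, m/s
        R_EARTH = 6.371e6

        def great_circle(lat1, lon1, lat2, lon2):
            """Great-circle distance in metres (haversine formula)."""
            lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
            a = (np.sin((lat2 - lat1) / 2) ** 2
                 + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
            return 2 * R_EARTH * np.arcsin(np.sqrt(a))

        sites = np.array([[51.5, -0.1], [38.0, 23.7], [40.4, -3.7], [-1.3, 36.8]])  # hypothetical
        true_lat, true_lon = 10.0, 5.0
        rng = np.random.default_rng(0)

        # Candidate grid and its arrival-time differences relative to the first site.
        glat, glon = np.meshgrid(np.arange(-30.0, 40.5, 0.5), np.arange(-30.0, 40.5, 0.5))
        glat, glon = glat.ravel(), glon.ravel()
        t_cand = np.stack([great_circle(glat, glon, s[0], s[1]) / C for s in sites], axis=1)
        d_cand = t_cand - t_cand[:, :1]

        t_true = great_circle(true_lat, true_lon, sites[:, 0], sites[:, 1]) / C
        errors_km = []
        for _ in range(100):                                  # 100 noisy trials for one source
            t_obs = t_true + rng.normal(0.0, 20e-6, size=len(sites))
            d_obs = t_obs - t_obs[0]
            cost = np.sum((d_obs[1:] - d_cand[:, 1:]) ** 2, axis=1)
            best = np.argmin(cost)
            errors_km.append(great_circle(glat[best], glon[best], true_lat, true_lon) / 1e3)

        print(f"median retrieval error: {np.median(errors_km):.1f} km")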

  14. Calibration of Contactless Pulse Oximetry

    PubMed Central

    Bartula, Marek; Bresch, Erik; Rocque, Mukul; Meftah, Mohammed; Kirenko, Ihor

    2017-01-01

    BACKGROUND: Contactless, camera-based photoplethysmography (PPG) interrogates shallower skin layers than conventional contact probes, either transmissive or reflective. This raises questions on the calibratability of camera-based pulse oximetry. METHODS: We made video recordings of the foreheads of 41 healthy adults at 660 and 840 nm, and remote PPG signals were extracted. Subjects were in normoxic, hypoxic, and low temperature conditions. Ratio-of-ratios were compared to reference Spo2 from 4 contact probes. RESULTS: A calibration curve based on artifact-free data was determined for a population of 26 individuals. For an Spo2 range of approximately 83% to 100% and discarding short-term errors, a root mean square error of 1.15% was found with an upper 99% one-sided confidence limit of 1.65%. Under normoxic conditions, a decrease in ambient temperature from 23 to 7°C resulted in a calibration error of 0.1% (±1.3%, 99% confidence interval) based on measurements for 3 subjects. PPG signal strengths varied strongly among individuals from about 0.9 × 10−3 to 4.6 × 10−3 for the infrared wavelength. CONCLUSIONS: For healthy adults, the results present strong evidence that camera-based contactless pulse oximetry is fundamentally feasible because long-term (eg, 10 minutes) error stemming from variation among individuals expressed as A*rms is significantly lower (<1.65%) than that required by the International Organization for Standardization standard (<4%) with the notion that short-term errors should be added. A first illustration of such errors has been provided with A**rms = 2.54% for 40 individuals, including 6 with dark skin. Low signal strength and subject motion present critical challenges that will have to be addressed to make camera-based pulse oximetry practically feasible. PMID:27258081
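
    For orientation, the sketch below computes the ratio-of-ratios from two PPG channels and maps it to SpO2 with a linear empirical curve. The AC/DC estimates, the synthetic signals and the calibration coefficients are placeholders, not the calibration derived in the study.

        import numpy as np

        def ratio_of_ratios(ppg_660, ppg_840):
            """AC/DC ratio at 660 nm divided by AC/DC ratio at 840 nm."""
            def ac_dc(sig):
                return np.std(sig) / np.mean(sig)    # simple AC and DC estimates
            return ac_dc(ppg_660) / ac_dc(ppg_840)

        def spo2_from_ratio(rr, a=110.0, b=25.0):
            """Linear empirical calibration SpO2 = a - b * R (placeholder coefficients)."""
            return a - b * rr

        # Synthetic PPG-like signals: a small pulsatile component on a large baseline.
        t = np.linspace(0, 10, 1000)
        ppg_660 = 1.0 + 0.010 * np.sin(2 * np.pi * 1.2 * t)    # red channel
        ppg_840 = 1.0 + 0.015 * np.sin(2 * np.pi * 1.2 * t)    # infrared channel

        rr = ratio_of_ratios(ppg_660, ppg_840)
        print(f"R = {rr:.2f}, estimated SpO2 = {spo2_from_ratio(rr):.1f}%")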

  15. Hydrologic Record Extension of Water-Level Data in the Everglades Depth Estimation Network (EDEN) Using Artificial Neural Network Models, 2000-2006

    USGS Publications Warehouse

    Conrads, Paul; Roehl, Edwin A.

    2007-01-01

    The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level gaging stations, ground-elevation models, and water-surface models designed to provide scientists, engineers, and water-resource managers with current (2000-present) water-depth information for the entire freshwater portion of the greater Everglades. The U.S. Geological Survey Greater Everglades Priority Ecosystem Science provides support for EDEN and the goal of providing quality assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. To increase the accuracy of the water-surface models, 25 real-time water-level gaging stations were added to the network of 253 established water-level gaging stations. To incorporate the data from the newly added stations to the 7-year EDEN database in the greater Everglades, the short-term water-level records (generally less than 1 year) needed to be simulated back in time (hindcasted) to be concurrent with data from the established gaging stations in the database. A three-step modeling approach using artificial neural network models was used to estimate the water levels at the new stations. The artificial neural network models used static variables that represent the gaging station location and percent vegetation in addition to dynamic variables that represent water-level data from the established EDEN gaging stations. The final step of the modeling approach was to simulate the computed error of the initial estimate to increase the accuracy of the final water-level estimate. The three-step modeling approach for estimating water levels at the new EDEN gaging stations produced satisfactory results. The coefficients of determination (R2) for 21 of the 25 estimates were greater than 0.95, and all of the estimates (25 of 25) were greater than 0.82. The model estimates showed good agreement with the measured data. For some new EDEN stations with limited measured data, the record extension (hindcasts) included periods beyond the range of the data used to train the artificial neural network models. The comparison of the hindcasts with long-term water-level data proximal to the new EDEN gaging stations indicated that the water-level estimates were reasonable. The percent model error (root mean square error divided by the range of the measured data) was less than 6 percent, and for the majority of stations (20 of 25), the percent model error was less than 1 percent.
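
    The percent model error used above is defined as the RMSE divided by the range of the measured data; a direct implementation of that definition (with hypothetical water levels) is:

        import numpy as np

        def percent_model_error(measured, simulated):
            """RMSE divided by the range of the measured data, in percent."""
            measured, simulated = np.asarray(measured), np.asarray(simulated)
            rmse = np.sqrt(np.mean((measured - simulated) ** 2))
            return 100.0 * rmse / (measured.max() - measured.min())

        measured = [1.20, 1.50, 1.90, 2.30, 2.00, 1.60]    # hypothetical water levels (m)
        simulated = [1.25, 1.45, 1.85, 2.35, 2.05, 1.55]
        print(f"percent model error: {percent_model_error(measured, simulated):.2f}%")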

  16. Optimal JPWL Forward Error Correction Rate Allocation for Robust JPEG 2000 Images and Video Streaming over Mobile Ad Hoc Networks

    NASA Astrophysics Data System (ADS)

    Agueh, Max; Diouris, Jean-François; Diop, Magaye; Devaux, François-Olivier; De Vleeschouwer, Christophe; Macq, Benoit

    2008-12-01

    Based on the analysis of real mobile ad hoc network (MANET) traces, we derive in this paper an optimal wireless JPEG 2000 compliant forward error correction (FEC) rate allocation scheme for a robust streaming of images and videos over MANET. The packet-based proposed scheme has a low complexity and is compliant to JPWL, the 11th part of the JPEG 2000 standard. The effectiveness of the proposed method is evaluated using a wireless Motion JPEG 2000 client/server application; and the ability of the optimal scheme to guarantee quality of service (QoS) to wireless clients is demonstrated.

  17. Modeling Aboveground Biomass in Hulunber Grassland Ecosystem by Using Unmanned Aerial Vehicle Discrete Lidar

    PubMed Central

    Wang, Dongliang; Xin, Xiaoping; Shao, Quanqin; Brolly, Matthew; Zhu, Zhiliang; Chen, Jin

    2017-01-01

    Accurate canopy structure datasets, including canopy height and fractional cover, are required to monitor aboveground biomass as well as to provide validation data for satellite remote sensing products. In this study, the ability of an unmanned aerial vehicle (UAV) discrete light detection and ranging (lidar) was investigated for modeling both the canopy height and fractional cover in Hulunber grassland ecosystem. The extracted mean canopy height, maximum canopy height, and fractional cover were used to estimate the aboveground biomass. The influences of flight height on lidar estimates were also analyzed. The main findings are: (1) the lidar-derived mean canopy height is the most reasonable predictor of aboveground biomass (R2 = 0.340, root-mean-square error (RMSE) = 81.89 g·m−2, and relative error of 14.1%). The improvement of multiple regressions to the R2 and RMSE values is unobvious when adding fractional cover in the regression since the correlation between mean canopy height and fractional cover is high; (2) Flight height has a pronounced effect on the derived fractional cover and details of the lidar data, but the effect is insignificant on the derived canopy height when the flight height is within the range (<100 m). These findings are helpful for modeling stable regressions to estimate grassland biomass using lidar returns. PMID:28106819

  18. Modeling Aboveground Biomass in Hulunber Grassland Ecosystem by Using Unmanned Aerial Vehicle Discrete Lidar.

    PubMed

    Wang, Dongliang; Xin, Xiaoping; Shao, Quanqin; Brolly, Matthew; Zhu, Zhiliang; Chen, Jin

    2017-01-19

    Accurate canopy structure datasets, including canopy height and fractional cover, are required to monitor aboveground biomass as well as to provide validation data for satellite remote sensing products. In this study, the ability of an unmanned aerial vehicle (UAV) discrete light detection and ranging (lidar) was investigated for modeling both the canopy height and fractional cover in Hulunber grassland ecosystem. The extracted mean canopy height, maximum canopy height, and fractional cover were used to estimate the aboveground biomass. The influences of flight height on lidar estimates were also analyzed. The main findings are: (1) the lidar-derived mean canopy height is the most reasonable predictor of aboveground biomass (R² = 0.340, root-mean-square error (RMSE) = 81.89 g·m−2, and relative error of 14.1%). The improvement of multiple regressions to the R² and RMSE values is unobvious when adding fractional cover in the regression since the correlation between mean canopy height and fractional cover is high; (2) Flight height has a pronounced effect on the derived fractional cover and details of the lidar data, but the effect is insignificant on the derived canopy height when the flight height is within the range (<100 m). These findings are helpful for modeling stable regressions to estimate grassland biomass using lidar returns.

  19. Dating young geomorphic surfaces using age of colonizing Douglas fir in southwestern Washington and northwestern Oregon, USA

    USGS Publications Warehouse

    Pierson, T.C.

    2007-01-01

    Dating of dynamic, young (<500 years) geomorphic landforms, particularly volcanofluvial features, requires higher precision than is possible with radiocarbon dating. Minimum ages of recently created landforms have long been obtained from tree-ring ages of the oldest trees growing on new surfaces. But to estimate the year of landform creation requires that two time corrections be added to tree ages obtained from increment cores: (1) the time interval between stabilization of the new landform surface and germination of the sampled trees (germination lag time or GLT); and (2) the interval between seedling germination and growth to sampling height, if the trees are not cored at ground level. The sum of these two time intervals is the colonization time gap (CTG). Such time corrections have been needed for more precise dating of terraces and floodplains in lowland river valleys in the Cascade Range, where significant eruption-induced lateral shifting and vertical aggradation of channels can occur over years to decades, and where timing of such geomorphic changes can be critical to emergency planning. Earliest colonizing Douglas fir (Pseudotsuga menziesii) were sampled for tree-ring dating at eight sites on lowland (<750 m a.s.l.), recently formed surfaces of known age near three Cascade volcanoes - Mount Rainier, Mount St. Helens and Mount Hood - in southwestern Washington and northwestern Oregon. Increment cores or stem sections were taken at breast height and, where possible, at ground level from the largest, oldest-looking trees at each study site. At least ten trees were sampled at each site unless the total of early colonizers was less. Results indicate that a correction of four years should be used for GLT and 10 years for CTG if the single largest (and presumed oldest) Douglas fir growing on a surface of unknown age is sampled. This approach would have a potential error of up to 20 years. Error can be reduced by sampling the five largest Douglas fir instead of the single largest. A GLT correction of 5 years should be added to the mean ring-count age of the five largest trees growing on the surface being dated, if the trees are cored at ground level. This correction would have an approximate error of ±5 years. If the trees are cored at about 1.4 m above the ground surface (breast height), a CTG correction of 11 years should be added to the mean age of the five sampled trees (with an error of about ±7 years).
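
    A worked example of the correction rules stated above, for the case of five trees cored at breast height (ring counts and sampling year are hypothetical):

        # Ring counts of the five largest Douglas fir, cored at breast height (hypothetical).
        ring_counts = [132, 128, 127, 125, 121]
        sampling_year = 2005
        CTG_BREAST_HEIGHT = 11      # colonization time gap correction for breast-height cores

        mean_age = sum(ring_counts) / len(ring_counts)
        surface_year = sampling_year - (mean_age + CTG_BREAST_HEIGHT)
        print(f"estimated year of surface formation: {surface_year:.0f} +/- 7 years")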

  20. Comparing source-based and gist-based false recognition in aging and Alzheimer's disease.

    PubMed

    Pierce, Benton H; Sullivan, Alison L; Schacter, Daniel L; Budson, Andrew E

    2005-07-01

    This study examined 2 factors contributing to false recognition of semantic associates: errors based on confusion of source and errors based on general similarity information or gist. The authors investigated these errors in patients with Alzheimer's disease (AD), age-matched control participants, and younger adults, focusing on each group's ability to use recollection of source information to suppress false recognition. The authors used a paradigm consisting of both deep and shallow incidental encoding tasks, followed by study of a series of categorized lists in which several typical exemplars were omitted. Results showed that healthy older adults were able to use recollection from the deep processing task to some extent but less than that used by younger adults. In contrast, false recognition in AD patients actually increased following the deep processing task, suggesting that they were unable to use recollection to oppose familiarity arising from incidental presentation. (c) 2005 APA, all rights reserved.

  1. ALGORITHM TO REDUCE APPROXIMATION ERROR FROM THE COMPLEX-VARIABLE BOUNDARY-ELEMENT METHOD APPLIED TO SOIL FREEZING.

    USGS Publications Warehouse

    Hromadka, T.V.; Guymon, G.L.

    1985-01-01

    An algorithm is presented for the numerical solution of the Laplace equation boundary-value problem, which is assumed to apply to soil freezing or thawing. The Laplace equation is numerically approximated by the complex-variable boundary-element method. The algorithm aids in reducing integrated relative error by providing a true measure of modeling error along the solution domain boundary. This measure of error can be used to select locations for adding, removing, or relocating nodal points on the boundary or to provide bounds for the integrated relative error of unknown nodal variable values along the boundary.

  2. 3D measurement using combined Gray code and dual-frequency phase-shifting approach

    NASA Astrophysics Data System (ADS)

    Yu, Shuang; Zhang, Jing; Yu, Xiaoyang; Sun, Xiaoming; Wu, Haibin; Liu, Xin

    2018-04-01

    The combined Gray code and phase-shifting approach is a commonly used 3D measurement technique. In this technique, an error that equals integer multiples of the phase-shifted fringe period, i.e. period jump error, often exists in the absolute analog code, which can lead to gross measurement errors. To overcome this problem, the present paper proposes 3D measurement using a combined Gray code and dual-frequency phase-shifting approach. Based on 3D measurement using the combined Gray code and phase-shifting approach, one set of low-frequency phase-shifted fringe patterns with an odd-numbered multiple of the original phase-shifted fringe period is added. Thus, the absolute analog code measured value can be obtained by the combined Gray code and phase-shifting approach, and the low-frequency absolute analog code measured value can also be obtained by adding low-frequency phase-shifted fringe patterns. Then, the corrected absolute analog code measured value can be obtained by correcting the former by the latter, and the period jump errors can be eliminated, resulting in reliable analog code unwrapping. For the proposed approach, we established its measurement model, analyzed its measurement principle, expounded the mechanism of eliminating period jump errors by error analysis, and determined its applicable conditions. Theoretical analysis and experimental results show that the proposed approach can effectively eliminate period jump errors, reliably perform analog code unwrapping, and improve the measurement accuracy.
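
    One generic way to express the period-jump correction described above is to let the coarse, jump-free low-frequency code select the correct period for the high-frequency code; the sketch below uses that formulation, which may differ in detail from the paper's method.

        import numpy as np

        def correct_period_jumps(code_high, code_low, period):
            """Use a coarse, jump-free code to remove integer-period errors from a fine code."""
            jumps = np.round((code_low - code_high) / period)
            return code_high + jumps * period

        P = 16.0                                    # fringe period of the original pattern
        rng = np.random.default_rng(0)
        true_code = np.linspace(0, 200, 500)
        code_high = true_code + P * rng.integers(-1, 2, 500)     # period jump errors
        code_low = true_code + rng.normal(0.0, 2.0, 500)         # coarse but jump-free

        corrected = correct_period_jumps(code_high, code_low, P)
        print(np.max(np.abs(corrected - true_code)))             # jumps removed (prints 0.0)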

  3. Operational hydrological forecasting in Bavaria. Part I: Forecast uncertainty

    NASA Astrophysics Data System (ADS)

    Ehret, U.; Vogelbacher, A.; Moritz, K.; Laurent, S.; Meyer, I.; Haag, I.

    2009-04-01

    In Bavaria, operational flood forecasting has been established since the disastrous flood of 1999. Nowadays, forecasts based on rainfall information from about 700 raingauges and 600 rivergauges are calculated and issued for nearly 100 rivergauges. With the added experience of the 2002 and 2005 floods, awareness grew that the standard deterministic forecast, neglecting the uncertainty associated with each forecast, is misleading, creating a false feeling of unambiguousness. As a consequence, a system to identify, quantify and communicate the sources and magnitude of forecast uncertainty has been developed, which will be presented in part I of this study. In this system, the use of ensemble meteorological forecasts plays a key role, which will be presented in part II. Developing the system, several constraints stemming from the range of hydrological regimes and operational requirements had to be met: Firstly, operational time constraints obviate the variation of all components of the modeling chain as would be done in a full Monte Carlo simulation. Therefore, an approach was chosen where only the most relevant sources of uncertainty were dynamically considered while the others were jointly accounted for by static error distributions from offline analysis. Secondly, the dominant sources of uncertainty vary over the wide range of forecasted catchments: In alpine headwater catchments, typically of a few hundred square kilometers in size, rainfall forecast uncertainty is the key factor for forecast uncertainty, with a magnitude dynamically changing with the prevailing predictability of the atmosphere. In lowland catchments encompassing several thousands of square kilometers, forecast uncertainty in the desired range (usually up to two days) is mainly dependent on upstream gauge observation quality, routing and unpredictable human impact such as reservoir operation. The determination of forecast uncertainty comprised the following steps: a) From comparison of gauge observations and several years of archived forecasts, overall empirical error distributions, termed 'overall error', were derived for each gauge for a range of relevant forecast lead times. b) The error distributions vary strongly with the hydrometeorological situation, therefore a subdivision into the hydrological cases 'low flow', 'rising flood', 'flood' and 'flood recession' was introduced. c) For the sake of numerical compression, theoretical distributions were fitted to the empirical distributions using the method of moments. Here, the normal distribution was generally best suited. d) Further data compression was achieved by representing the distribution parameters as a function (second-order polynomial) of lead time. In general, the 'overall error' obtained from the above procedure is most useful in regions where large human impact occurs and where the influence of the meteorological forecast is limited. In upstream regions, however, forecast uncertainty is strongly dependent on the current predictability of the atmosphere, which is contained in the spread of an ensemble forecast. Including this dynamically in the hydrological forecast uncertainty estimation requires prior elimination of the contribution of the weather forecast to the 'overall error'. This was achieved by calculating long series of hydrometeorological forecast tests, where rainfall observations were used instead of forecasts.
The resulting error distribution is termed 'model error' and can be applied on hydrological ensemble forecasts, where ensemble rainfall forecasts are used as forcing. The concept will be illustrated by examples (good and bad ones) covering a wide range of catchment sizes, hydrometeorological regimes and quality of hydrological model calibration. The methodology to combine the static and dynamic shares of uncertainty will be presented in part II of this study.
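
    A compact sketch of the compression steps a)-d) described above, using synthetic forecast errors: a normal distribution is fitted to the errors at each lead time by the method of moments, and the resulting parameters are then represented as second-order polynomials of lead time. All numerical values are placeholders.

        import numpy as np

        # Synthetic archived forecast errors per lead time (placeholder values).
        rng = np.random.default_rng(0)
        lead_times = np.arange(1, 49)                  # lead time in hours
        errors = {lt: rng.normal(0.02 * lt, 0.5 + 0.05 * lt, size=500)
                  for lt in lead_times}                # forecast minus observation

        # Method of moments: fit a normal distribution at each lead time.
        mu = np.array([errors[lt].mean() for lt in lead_times])
        sigma = np.array([errors[lt].std(ddof=1) for lt in lead_times])

        # Further compression: parameters as second-order polynomials of lead time.
        mu_poly = np.polyfit(lead_times, mu, deg=2)
        sigma_poly = np.polyfit(lead_times, sigma, deg=2)

        def overall_error(lead_time):
            """Return (mean, standard deviation) of the 'overall error' at a lead time."""
            return np.polyval(mu_poly, lead_time), np.polyval(sigma_poly, lead_time)

        print(overall_error(24))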

  4. Statistically Self-Consistent and Accurate Errors for SuperDARN Data

    NASA Astrophysics Data System (ADS)

    Reimer, A. S.; Hussey, G. C.; McWilliams, K. A.

    2018-01-01

    The Super Dual Auroral Radar Network (SuperDARN)-fitted data products (e.g., spectral width and velocity) are produced using weighted least squares fitting. We present a new First-Principles Fitting Methodology (FPFM) that utilizes the first-principles approach of Reimer et al. (2016) to estimate the variance of the real and imaginary components of the mean autocorrelation function (ACF) lags. SuperDARN ACFs fitted by the FPFM do not use ad hoc or empirical criteria. Currently, the weighting used to fit the ACF lags is derived from ad hoc estimates of the ACF lag variance. Additionally, an overcautious lag filtering criterion is used that sometimes discards data that contains useful information. In low signal-to-noise (SNR) and/or low signal-to-clutter regimes, the ad hoc variance and empirical criterion lead to underestimated errors for the fitted parameters because the relative contributions of signal, noise, and clutter to the ACF variance are not taken into consideration. The FPFM variance expressions include contributions of signal, noise, and clutter. The clutter is estimated using the maximal power-based self-clutter estimator derived by Reimer and Hussey (2015). The FPFM was successfully implemented and tested using synthetic ACFs generated with the radar data simulator of Ribeiro, Ponomarenko, et al. (2013). The fitted parameters and the fitted-parameter errors produced by the FPFM are compared with the current SuperDARN fitting software, FITACF. Using self-consistent statistical analysis, the FPFM produces reliable or trustworthy quantitative measures of the errors of the fitted parameters. For an SNR in excess of 3 dB and velocity error below 100 m/s, the FPFM produces 52% more data points than FITACF.

  5. Error Analysis and Validation for Insar Height Measurement Induced by Slant Range

    NASA Astrophysics Data System (ADS)

    Zhang, X.; Li, T.; Fan, W.; Geng, X.

    2018-04-01

    The InSAR technique is an important method for large-area DEM extraction. Several factors have a significant influence on the accuracy of height measurement. In this research, the effect of slant range measurement error on InSAR height measurement was analyzed and discussed. Based on the theory of InSAR height measurement, an error propagation model was derived assuming no coupling among the different factors, which directly characterises the relationship between slant range error and height measurement error. A theoretical analysis in combination with TanDEM-X parameters was then carried out to quantitatively evaluate the influence of slant range error on height measurement. In addition, a simulation validation of the InSAR error model induced by slant range was performed on the basis of the SRTM DEM and TanDEM-X parameters. The spatial distribution characteristics and error propagation rule of InSAR height measurement were further discussed and evaluated.
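    A minimal first-order error-propagation sketch, assuming a simplified flat-earth height equation h = H - r*cos(theta) rather than the full InSAR geometry; all numbers are placeholders, not TanDEM-X values.

        import numpy as np

        H = 514e3                  # platform altitude [m], placeholder
        r = 620e3                  # slant range [m], placeholder
        theta = np.radians(35.0)   # look angle, placeholder
        sigma_r = 1.0              # slant-range error [m], placeholder

        h = H - r * np.cos(theta)                  # simplified height equation (assumption)
        sigma_h = abs(-np.cos(theta)) * sigma_r    # first-order propagation: |dh/dr| * sigma_r
        print(f"height error ~ {sigma_h:.2f} m per {sigma_r:.1f} m slant-range error")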

  6. Performance Characteristics of Plasma Amyloid β 40 and 42 Assays

    PubMed Central

    Okereke, Olivia I.; Xia, Weiming; Irizarry, Michael C.; Sun, Xiaoyan; Qiu, Wei Q.; Fagan, Anne M.; Mehta, Pankaj D.; Hyman, Bradley T.; Selkoe, Dennis J.; Grodstein, Francine

    2009-01-01

    Background Identifying biomarkers of Alzheimer disease (AD) risk will be critical to effective AD prevention. Levels of circulating amyloid β (Aβ) 40 and 42 may be candidate biomarkers. However, properties of plasma Aβ assays must be established. Methods Using five different protocols, blinded samples were used to assess: intra-assay reproducibility; impact of EDTA vs. heparin anticoagulant tubes; and effect of time-to-blood processing. In addition, percent recovery of known Aβ concentrations in spiked samples was assessed. Results Median intra-assay coefficients of variation (CVs) for the assay protocols ranged from 6–24% for Aβ-40, and 8–14% for Aβ-42. There were no systematic differences in reproducibility by collection method. Plasma concentrations of Aβ (particularly Aβ-42) appeared stable in whole blood kept in ice packs and processed as long as 24 hours after collection. Recovery of expected concentrations was modest, ranging from -24% to 44% recovery of Aβ-40, and 17% to 61% of Aβ-42. Conclusions Across five protocols, plasma Aβ-40 and Aβ-42 levels were measured with generally low error, and measurements appeared similar in blood collected in EDTA vs. heparin. While these preliminary findings suggest that measuring plasma Aβ-40 and Aβ-42 may be feasible in varied research settings, additional work in this area is necessary. PMID:19221417

  7. Assessing the Impact of Analytical Error on Perceived Disease Severity.

    PubMed

    Kroll, Martin H; Garber, Carl C; Bi, Caixia; Suffin, Stephen C

    2015-10-01

    The perception of the severity of disease from laboratory results assumes that the results are free of analytical error; however, analytical error creates a spread of results into a band and thus a range of perceived disease severity. To assess the impact of analytical errors by calculating the change in perceived disease severity, represented by the hazard ratio, using non-high-density lipoprotein (nonHDL) cholesterol as an example. We transformed nonHDL values into ranges using the assumed total allowable errors for total cholesterol (9%) and high-density lipoprotein cholesterol (13%). Using a previously determined relationship between the hazard ratio and nonHDL, we calculated a range of hazard ratios for specified nonHDL concentrations affected by analytical error. Analytical error, within allowable limits, created a band of values of nonHDL, with a width spanning 30 to 70 mg/dL (0.78-1.81 mmol/L), depending on the cholesterol and high-density lipoprotein cholesterol concentrations. Hazard ratios ranged from 1.0 to 2.9, a 16% to 50% error. Increased bias widens this range and decreased bias narrows it. Error-transformed results produce a spread of values that straddle the various cutoffs for nonHDL. The range of the hazard ratio obscures the meaning of results, because the spread of ratios at different cutoffs overlap. The magnitude of the perceived hazard ratio error exceeds that for the allowable analytical error, and significantly impacts the perceived cardiovascular disease risk. Evaluating the error in the perceived severity (eg, hazard ratio) provides a new way to assess the impact of analytical error.
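    A hedged worked example of the error-band idea: measured total cholesterol and HDL values are spread by the allowable errors (9% and 13%), the resulting nonHDL band is computed, and a hypothetical log-linear hazard relation (the coefficient below is illustrative, not the relationship fitted in the study) is evaluated at the band edges.

        import numpy as np

        tc, hdl = 200.0, 50.0            # measured values [mg/dL], illustrative
        tea_tc, tea_hdl = 0.09, 0.13     # total allowable errors for TC and HDL

        # nonHDL band produced by allowable analytical error in both assays
        nonhdl_low  = tc * (1 - tea_tc) - hdl * (1 + tea_hdl)
        nonhdl_high = tc * (1 + tea_tc) - hdl * (1 - tea_hdl)

        # Hypothetical log-linear hazard relation HR = exp(beta * (nonHDL - reference))
        beta, reference = 0.01, 130.0
        hr = lambda x: np.exp(beta * (x - reference))
        print(f"nonHDL band: {nonhdl_low:.0f}-{nonhdl_high:.0f} mg/dL, "
              f"HR band: {hr(nonhdl_low):.2f}-{hr(nonhdl_high):.2f}")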

  8. Psycho-Motor and Error Enabled Simulations: Modeling Vulnerable Skills in the Pre-Mastery Phase Medical Practice Initiative Procedural Skill Decay and Maintenance (MPI-PSD)

    DTIC Science & Technology

    2014-04-01

    laparoscopic ventral hernia repair. Additional simulation stations were added to the standards and purchases (including a motion tracking system) were...framework for laparoscopic ventral hernia; Incorporation of error-based simulators into an exit assessment of chief surgical residents; Development of...simulating a laparoscopic ventral hernia (LVH) repair. Based on collected data, the lab worked to finalize the incorporation of error-based simulators

  9. Learning from Error

    DTIC Science & Technology

    1988-01-01

    AD-A199 117. Learning from Error. Colleen M. Seifert (UCSD and NPRDC) and Edwin L. Hutchins (UCSD). Introduction: Most... always rely on learning on the job, and where there is the need for learning, there is potential for error. A naturally situated system of cooperative work... reorganized, change the things they do, and change the technology they utilize to do the job. Even if tasks and tools could be somehow frozen, changes in

  10. Multiplicative effects model with internal standard in mobile phase for quantitative liquid chromatography-mass spectrometry.

    PubMed

    Song, Mi; Chen, Zeng-Ping; Chen, Yao; Jin, Jing-Wen

    2014-07-01

    Liquid chromatography-mass spectrometry assays suffer from signal instability caused by the gradual fouling of the ion source, vacuum instability, aging of the ion multiplier, etc. To address this issue, in this contribution, an internal standard was added into the mobile phase. The internal standard was therefore ionized and detected together with the analytes of interest by the mass spectrometer to ensure that variations in measurement conditions and/or instrument have similar effects on the signal contributions of both the analytes of interest and the internal standard. Subsequently, based on the unique strategy of adding internal standard in mobile phase, a multiplicative effects model was developed for quantitative LC-MS assays and tested on a proof of concept model system: the determination of amino acids in water by LC-MS. The experimental results demonstrated that the proposed method could efficiently mitigate the detrimental effects of continuous signal variation, and achieved quantitative results with average relative predictive error values in the range of 8.0-15.0%, which were much more accurate than the corresponding results of conventional internal standard method based on the peak height ratio and partial least squares method (their average relative predictive error values were as high as 66.3% and 64.8%, respectively). Therefore, it is expected that the proposed method can be developed and extended in quantitative LC-MS analysis of more complex systems. Copyright © 2014 Elsevier B.V. All rights reserved.
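    A small illustrative calculation of the average relative predictive error metric quoted above, applied to hypothetical predicted and reference concentrations; the multiplicative effects calibration itself is not reproduced here.

        import numpy as np

        # Hypothetical reference and predicted amino-acid concentrations (same units)
        reference = np.array([1.0, 2.5, 5.0, 10.0, 20.0])
        predicted = np.array([1.1, 2.3, 5.4, 9.2, 21.5])

        # Average relative predictive error in percent
        rel_err = np.abs(predicted - reference) / reference * 100.0
        print(f"average relative predictive error: {rel_err.mean():.1f}%")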

  11. Linear and Order Statistics Combiners for Pattern Classification

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Ghosh, Joydeep; Lau, Sonie (Technical Monitor)

    2001-01-01

    Several researchers have experimentally shown that substantial improvements can be obtained in difficult pattern recognition problems by combining or integrating the outputs of multiple classifiers. This chapter provides an analytical framework to quantify the improvements in classification results due to combining. The results apply to both linear combiners and order statistics combiners. We first show that, to a first-order approximation, the error rate obtained over and above the Bayes error rate is directly proportional to the variance of the actual decision boundaries around the Bayes optimum boundary. Combining classifiers in output space reduces this variance, and hence reduces the 'added' error. If N unbiased classifiers are combined by simple averaging, the added error rate can be reduced by a factor of N if the individual errors in approximating the decision boundaries are uncorrelated. Expressions are then derived for linear combiners which are biased or correlated, and the effect of output correlations on ensemble performance is quantified. For order statistics based non-linear combiners, we derive expressions that indicate how much the median, the maximum and, in general, the i-th order statistic can improve classifier performance. The analysis presented here facilitates the understanding of the relationships among error rates, classifier boundary distributions, and combining in output space. Experimental results on several public domain data sets are provided to illustrate the benefits of combining and to support the analytical results.
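    A quick Monte Carlo check of the averaging result stated above: when N unbiased, uncorrelated boundary offsets are averaged, the variance of the combined offset (to which the 'added' error is proportional) drops by roughly a factor of N. Everything below is synthetic.

        import numpy as np

        rng = np.random.default_rng(0)
        N, trials = 5, 100_000
        sigma = 1.0

        # Boundary offsets of N unbiased, uncorrelated classifiers around the Bayes boundary
        offsets = rng.normal(0.0, sigma, size=(trials, N))
        combined = offsets.mean(axis=1)          # simple averaging combiner

        print("single-classifier offset variance:", offsets[:, 0].var())
        print("combined offset variance:        ", combined.var())   # roughly sigma**2 / N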

  12. Derivative spectrophotometric method for simultaneous determination of clindamycin phosphate and tretinoin in pharmaceutical dosage forms

    PubMed Central

    2013-01-01

    A derivative spectrophotometric method was proposed for the simultaneous determination of clindamycin and tretinoin in pharmaceutical dosage forms. The measurement was achieved using the first and second derivative signals of clindamycin at (1D) 251 nm and (2D) 239 nm and of tretinoin at (1D) 364 nm and (2D) 387 nm. The proposed method showed excellent linearity at both the first and second derivative order in the ranges of 60–1200 and 1.25–25 μg/ml for clindamycin phosphate and tretinoin, respectively. The within-day and between-day precision and accuracy were within acceptable ranges (CV<3.81%, error<3.20%). Good agreement between the found and added concentrations indicates successful application of the proposed method for the simultaneous determination of clindamycin and tretinoin in synthetic mixtures and pharmaceutical dosage forms. PMID:23575006

  13. Cardiac sodium channel Markov model with temperature dependence and recovery from inactivation.

    PubMed Central

    Irvine, L A; Jafri, M S; Winslow, R L

    1999-01-01

    A Markov model of the cardiac sodium channel is presented. The model is similar to the CA1 hippocampal neuron sodium channel model developed by Kuo and Bean (1994. Neuron. 12:819-829) with the following modifications: 1) an additional open state is added; 2) open-inactivated transitions are made voltage-dependent; and 3) channel rate constants are exponential functions of enthalpy, entropy, and voltage and have explicit temperature dependence. Model parameters are determined using a simulated annealing algorithm to minimize the error between model responses and various experimental data sets. The model reproduces a wide range of experimental data including ionic currents, gating currents, tail currents, steady-state inactivation, recovery from inactivation, and open time distributions over a temperature range of 10 degrees C to 25 degrees C. The model also predicts measures of single channel activity such as first latency, probability of a null sweep, and probability of reopening. PMID:10096885

  14. Driving range estimation for electric vehicles based on driving condition identification and forecast

    NASA Astrophysics Data System (ADS)

    Pan, Chaofeng; Dai, Wei; Chen, Liao; Chen, Long; Wang, Limei

    2017-10-01

    With the impact of serious environmental pollution in our cities combined with the ongoing depletion of oil resources, electric vehicles are becoming highly favored as a means of transport, not only for their low noise but also for their high energy efficiency and zero pollution. The power battery is used as the energy source of electric vehicles; however, it still has a few shortcomings, notably low energy density, high cost and short cycle life, which result in limited mileage compared with conventional passenger vehicles. Vehicle energy consumption rates differ greatly under different environmental and driving conditions. The estimation error of the current driving range is relatively large because the effects of environmental temperature and driving conditions are not considered. The development of a driving range estimation method will therefore have a great impact on electric vehicles. A new driving range estimation model based on the combination of driving cycle identification and prediction is proposed and investigated. This model can effectively eliminate mileage errors and has good convergence with added robustness. Initially, the identification of the driving cycle is based on kernel principal component feature parameters and a fuzzy C-means clustering algorithm. Secondly, a fuzzy rule between the characteristic parameters and energy consumption is established in the MATLAB/Simulink environment. Furthermore, the Markov algorithm and a BP (back propagation) neural network method are utilized to predict future driving conditions to improve the accuracy of the remaining range estimation. Finally, the driving range estimation method is evaluated under the ECE 15 condition using a rotary drum test bench, and the experimental results are compared with the estimation results. The results show that the proposed driving range estimation method can not only estimate the remaining mileage, but also eliminate the fluctuation of the residual range under different driving conditions.

  15. A SEMantic and EPisodic Memory Test (SEMEP) Developed within the Embodied Cognition Framework: Application to Normal Aging, Alzheimer's Disease and Semantic Dementia.

    PubMed

    Vallet, Guillaume T; Hudon, Carol; Bier, Nathalie; Macoir, Joël; Versace, Rémy; Simard, Martine

    2017-01-01

    Embodiment has highlighted the importance of sensory-motor components in cognition. Perception and memory are thus very tightly bound together, and episodic and semantic memories should rely on the same grounded memory traces. Reduced perception should then directly reduce the ability to encode and retrieve an episodic memory, as in normal aging. Multimodal integration deficits, as in Alzheimer's disease, should lead to more severe episodic memory impairment. The present study introduces a new memory test developed to take these assumptions into account. The SEMEP (SEMantic-EPisodic) memory test assesses semantic and episodic knowledge conjointly across multiple tasks: semantic matching, naming, free recall, and recognition. The performance of young adults is compared to that of healthy elderly adults (HE), patients with Alzheimer's disease (AD), and patients with semantic dementia (SD). The results show specific patterns of performance between the groups. HE participants commit memory errors only for items that were presented but were not to be remembered. AD patients present the worst episodic memory performance, associated with intrusion errors (recall or recognition of items never presented). They were the only group not to benefit from visual isolation (the addition of a yellow background), a method known to increase the distinctiveness of memory traces. Finally, SD patients suffer from the most severe semantic impairment. To conclude, confusion errors are common across all the elderly groups, whereas AD patients were the only group to exhibit regular intrusion errors and SD patients the only group to show severe semantic impairment.

  16. Association between presenilin-1 polymorphism and maternal meiosis II errors in Down syndrome.

    PubMed

    Petersen, M B; Karadima, G; Samaritaki, M; Avramopoulos, D; Vassilopoulos, D; Mikkelsen, M

    2000-08-28

    Several lines of evidence suggest a shared genetic susceptibility to Down syndrome (DS) and Alzheimer disease (AD). Rare forms of autosomal-dominant AD are caused by mutations in the APP and presenilin genes (PS-1 and PS-2). The presenilin proteins have been localized to the nuclear membrane, kinetochores, and centrosomes, suggesting a function in chromosome segregation. A genetic association between a polymorphism in intron 8 of the PS-1 gene and AD has been described in some series, and an increased risk of AD has been reported in mothers of DS probands. We therefore studied 168 probands with free trisomy 21 of known parental and meiotic origin and their parents from a population-based material, by analyzing the intron 8 polymorphism in the PS-1 gene. An increased frequency of allele 1 in mothers with a meiosis II error (70.8%) was found compared with mothers with a meiosis I error (52.7%, P < 0.01), with an excess of the 11 genotype in the meiosis II mothers. The frequency of allele 1 in mothers carrying apolipoprotein E (APOE) epsilon4 allele (68.0%) was higher than in mothers without epsilon4 (52.2%, P < 0.01). We hypothesize that the PS-1 intronic polymorphism might be involved in chromosomal nondisjunction through an influence on the expression level of PS-1 or due to linkage disequilibrium with biologically relevant polymorphisms in or outside the PS-1 gene. Copyright 2000 Wiley-Liss, Inc.

  17. Stereoscopic distance perception

    NASA Technical Reports Server (NTRS)

    Foley, John M.

    1989-01-01

    Limited-cue, open-loop tasks in which a human observer indicates distances or relations among distances are discussed. By open-loop tasks is meant tasks in which the observer gets no feedback as to the accuracy of the responses. What happens when cues are added and when the loop is closed is also considered. The implications of this research for the effectiveness of visual displays are discussed. Errors in visual distance tasks do not necessarily mean that the percept is in error; the error could arise in transformations that intervene between the percept and the response. It is argued that the percept is in error. It is also argued that there exist post-perceptual transformations that may contribute to the error or be modified by feedback to correct for the error.

  18. Hydrogen bonding and pi-stacking: how reliable are force fields? A critical evaluation of force field descriptions of nonbonded interactions.

    PubMed

    Paton, Robert S; Goodman, Jonathan M

    2009-04-01

    We have evaluated the performance of a set of widely used force fields by calculating the geometries and stabilization energies for a large collection of intermolecular complexes. These complexes are representative of a range of chemical and biological systems for which hydrogen bonding, electrostatic, and van der Waals interactions play important roles. Benchmark energies are taken from the high-level ab initio values in the JSCH-2005 and S22 data sets. All of the force fields underestimate stabilization resulting from hydrogen bonding, but the energetics of electrostatic and van der Waals interactions are described more accurately. OPLSAA gave a mean unsigned error of 2 kcal mol-1 for all 165 complexes studied, and outperforms DFT calculations employing very large basis sets for the S22 complexes. The magnitudes of hydrogen bonding interactions are severely underestimated by all of the force fields tested, which contributes significantly to the overall mean error; if complexes which are predominantly bound by hydrogen bonding interactions are discounted, the mean unsigned error of OPLSAA is reduced to 1 kcal mol-1. For added clarity, web-based interactive displays of the results have been developed which allow comparisons of force field and ab initio geometries to be performed and the structures viewed and rotated in three dimensions.

  19. Rapid analysis and quantification of fluorescent brighteners in wheat flour by Tri-step infrared spectroscopy and computer vision technology

    NASA Astrophysics Data System (ADS)

    Guo, Xiao-Xi; Hu, Wei; Liu, Yuan; Gu, Dong-Chen; Sun, Su-Qin; Xu, Chang-Hua; Wang, Xi-Chang

    2015-11-01

    Fluorescent brightener, an industrial whitening agent, has been used illegally to whiten wheat flour. In this article, computer vision technology (E-eyes) and colorimetry were employed to investigate the color difference among different concentrations of fluorescent brightener in wheat flour, using DMS as an example. Tri-step infrared spectroscopy (Fourier transform infrared spectroscopy coupled with second derivative infrared spectroscopy (SD-IR) and two-dimensional correlation infrared spectroscopy (2DCOS-IR)) was used to identify and quantify DMS in wheat flour. According to the color analysis, the whitening effect was significant when less than 30 mg/g DMS was added, but when more than 100 mg/g was added the flour began to look greenish. It was therefore speculated that the concentration of DMS should be below 100 mg/g in real flour adulterated with DMS. With increasing concentration, the spectral similarity of wheat flour with DMS to the DMS standard increased. SD-IR peaks at 1153 cm-1, 1141 cm-1, 1112 cm-1, 1085 cm-1 and 1025 cm-1 attributed to DMS were regularly enhanced. Furthermore, 2DCOS-IR could differentiate the DMS standard from wheat flour containing DMS at levels as low as 0.05 mg/g, and the bands in the range of 1000-1500 cm-1 could serve as an exclusive range for identifying whether wheat flour contains DMS. Finally, a quantitative prediction model based on IR spectra was established successfully by partial least squares (PLS) regression over a concentration range from 1 mg/g to 100 mg/g. The calibration set gave a determination coefficient of 0.9884 with a standard error (RMSEC) of 5.56, and the validation set gave a determination coefficient of 0.9881 with a standard error of 5.73. It was demonstrated that computer vision technology and colorimetry were effective for estimating the content of DMS in wheat flour, and that Tri-step infrared macro-fingerprinting combined with PLS was applicable for rapid and nondestructive fluorescent brightener identification and quantification.
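    A minimal sketch of the kind of PLS calibration described above, using scikit-learn on synthetic 'spectra'; the band position, noise level and number of latent variables are assumptions, not the settings used in the study.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.metrics import mean_squared_error, r2_score

        rng = np.random.default_rng(1)
        wavenumbers = np.linspace(1000, 1500, 200)
        conc = rng.uniform(1, 100, size=60)                  # DMS concentration [mg/g], synthetic

        # Synthetic spectra: a Gaussian band near 1141 cm-1 scaled by concentration, plus noise
        band = np.exp(-0.5 * ((wavenumbers - 1141) / 8) ** 2)
        spectra = conc[:, None] * band[None, :] + rng.normal(0, 0.5, (60, 200))

        pls = PLSRegression(n_components=3).fit(spectra, conc)
        pred = pls.predict(spectra).ravel()
        rmsec = np.sqrt(mean_squared_error(conc, pred))
        print(f"R2 = {r2_score(conc, pred):.4f}, RMSEC = {rmsec:.2f} mg/g")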

  20. Atlas based brain volumetry: How to distinguish regional volume changes due to biological or physiological effects from inherent noise of the methodology.

    PubMed

    Opfer, Roland; Suppa, Per; Kepp, Timo; Spies, Lothar; Schippling, Sven; Huppertz, Hans-Jürgen

    2016-05-01

    Fully-automated regional brain volumetry based on structural magnetic resonance imaging (MRI) plays an important role in quantitative neuroimaging. In clinical trials as well as in clinical routine multiple MRIs of individual patients at different time points need to be assessed longitudinally. Measures of inter- and intrascanner variability are crucial to understand the intrinsic variability of the method and to distinguish volume changes due to biological or physiological effects from inherent noise of the methodology. To measure regional brain volumes an atlas based volumetry (ABV) approach was deployed using a highly elastic registration framework and an anatomical atlas in a well-defined template space. We assessed inter- and intrascanner variability of the method in 51 cognitively normal subjects and 27 Alzheimer dementia (AD) patients from the Alzheimer's Disease Neuroimaging Initiative by studying volumetric results of repeated scans for 17 compartments and brain regions. Median percentage volume differences of scan-rescans from the same scanner ranged from 0.24% (whole brain parenchyma in healthy subjects) to 1.73% (occipital lobe white matter in AD), with generally higher differences in AD patients as compared to normal subjects (e.g., 1.01% vs. 0.78% for the hippocampus). Minimum percentage volume differences detectable with an error probability of 5% were in the one-digit percentage range for almost all structures investigated, with most of them being below 5%. Intrascanner variability was independent of magnetic field strength. The median interscanner variability was up to ten times higher than the intrascanner variability. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. A simulation of GPS and differential GPS sensors

    NASA Technical Reports Server (NTRS)

    Rankin, James M.

    1993-01-01

    The Global Positioning System (GPS) is a revolutionary advance in navigation. Users can determine latitude, longitude, and altitude by receiving range information from at least four satellites. The statistical accuracy of the user's position is directly proportional to the statistical accuracy of the range measurement. Range errors are caused by clock errors, ephemeris errors, atmospheric delays, multipath errors, and receiver noise. Selective Availability, which the military uses to intentionally degrade accuracy for non-authorized users, is a major error source. The proportionality constant relating position errors to range errors is the Dilution of Precision (DOP) which is a function of the satellite geometry. Receivers separated by relatively short distances have the same satellite and atmospheric errors. Differential GPS (DGPS) removes these errors by transmitting pseudorange corrections from a fixed receiver to a mobile receiver. The corrected pseudorange at the moving receiver is now corrupted only by errors from the receiver clock, multipath, and measurement noise. This paper describes a software package that models position errors for various GPS and DGPS systems. The error model is used in the Real-Time Simulator and Cockpit Technology workstation simulations at NASA-LaRC. The GPS/DGPS sensor can simulate enroute navigation, instrument approaches, or on-airport navigation.
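    A schematic of the two relationships behind the simulator description above: position error scaling with DOP, and a differential correction formed at a fixed reference receiver and applied to the rover pseudorange; all numbers are invented for illustration.

        # Position error is roughly DOP times the range-measurement error
        hdop = 1.8
        sigma_range = 5.0                      # pseudorange error [m], illustrative
        print("approx. horizontal position error [m]:", hdop * sigma_range)

        # DGPS: a reference station with a surveyed position computes the pseudorange correction
        geometric_range_ref = 21_234_567.0     # known receiver-to-satellite distance [m]
        measured_pr_ref = 21_234_590.0         # measured pseudorange at the reference [m]
        correction = measured_pr_ref - geometric_range_ref   # common-mode error (clock, ephemeris, atmosphere)

        measured_pr_rover = 20_987_665.0
        corrected_pr_rover = measured_pr_rover - correction   # residual: receiver clock, multipath, noise
        print("corrected rover pseudorange [m]:", corrected_pr_rover)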

  2. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Somayaji, Anil B.; Amai, Wendy A.; Walther, Eleanor A.

    This report describes the successful extension of artificial immune systems from the domain of computer security to the domain of real-time control systems for robotic vehicles. A biologically-inspired computer immune system was added to the control system of two different mobile robots. As an additional layer in a multi-layered approach, the immune system is complementary to traditional error detection and error handling techniques. This can be thought of as biologically-inspired defense in depth. We demonstrated that an immune system can be added with very little application developer effort, resulting in little to no performance impact. The methods described here are extensible to any system that processes a sequence of data through a software interface.

  3. Chain pooling to minimize prediction error in subset regression. [Monte Carlo studies using population models

    NASA Technical Reports Server (NTRS)

    Holms, A. G.

    1974-01-01

    Monte Carlo studies using population models intended to represent response surface applications are reported. Simulated experiments were generated by adding pseudo random normally distributed errors to population values to generate observations. Model equations were fitted to the observations and the decision procedure was used to delete terms. Comparison of values predicted by the reduced models with the true population values enabled the identification of deletion strategies that are approximately optimal for minimizing prediction errors.
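    A condensed sketch of the simulation loop described above, with an arbitrary population model and a naive coefficient-threshold deletion rule standing in for the actual chain-pooling decision procedure.

        import numpy as np

        rng = np.random.default_rng(2)
        x = rng.uniform(-1, 1, size=(40, 3))
        X = np.column_stack([np.ones(40), x, x**2])              # full model: intercept, linear, quadratic
        beta_true = np.array([1.0, 2.0, 0.0, 0.5, 0.3, 0.0, 0.0])  # population values (some terms truly zero)

        pred_errors = []
        for _ in range(200):
            y = X @ beta_true + rng.normal(0, 0.5, 40)           # observations = population + pseudo-random error
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            keep = np.abs(beta) > 0.3                            # stand-in deletion rule, not the paper's procedure
            beta_red, *_ = np.linalg.lstsq(X[:, keep], y, rcond=None)
            y_red = X[:, keep] @ beta_red
            pred_errors.append(np.mean((y_red - X @ beta_true) ** 2))
        print("mean squared prediction error of reduced models:", np.mean(pred_errors))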

  4. Prescription data improve the medication history in primary care.

    PubMed

    Glintborg, Bente; Andersen, S K; Poulsen, H E

    2010-06-01

    Incomplete medication lists increase the risk of medication errors and adverse drug effects. In Denmark, dispensing data and pharmacy records are available directly online to treating physicians. We aimed (1) to describe if use of pharmacy records improved the medication history among patients consulting their general practitioner and (2) to characterise inconsistencies between the medication history reported by the patient and the general practitioner's recordings. Patients attending a general practitioner clinic were interviewed about their current medication use. Subsequently, the patients were contacted by phone and asked to verify the medication list previously obtained. Half of the patients were randomly selected for further questioning guided by their dispensing data: during the telephone interview, these patients were asked to clarify whether drugs registered in their pharmacy records were still in use. Pharmacy records show all drugs acquired on prescription from any national pharmacy in the preceding 2 years. The medication list was corrected accordingly. In all patients, the medication lists obtained on the in-clinic and telephone interviews were compared to the general practitioner's registrations. The 150 patients included in the study had a median age of 56 years (range 18-93 years), and 90 (60%) were women. Patients reported use of 849 drugs (median 5, range 0-16) at the in-clinic interview. Another 41 drugs (median 0, range 0-4) were added during the telephone interview. In the subgroup of 75 patients interviewed guided by pharmacy records, additionally 53 drugs (10%) were added to the 474 drugs already mentioned. The 27 patients adding more drugs guided by pharmacy records were significantly older and used more drugs (both p<0.05) than the 48 patients not adding drugs. When the medication lists were compared with the general practitioner's lists, specifically use of over-the-counter products and prescription-only medications from Anatomical Therapeutic Chemical Classification System group J, A, D, N and R were not registered by the general practitioner. Dispensing data provide further improvement to a medication history based on thorough in-clinic and telephone interviews. Use of pharmacy records as a supplement when recording a medication history seems beneficial, especially among older patients treated with polypharmacy.

  5. What's the Value of VAM (Value-Added Modeling)?

    ERIC Educational Resources Information Center

    Scherrer, Jimmy

    2012-01-01

    The use of value-added modeling (VAM) in school accountability is expanding, but deciding how to embrace VAM is difficult. Various experts say it's too unreliable, causes more harm than good, and has a big margin for error. Others assert VAM is imperfect but useful, and provides valuable feedback. A closer look at the models, and their use,…

  6. Human dental age estimation combining third molar(s) development and tooth morphological age predictors.

    PubMed

    Thevissen, P W; Galiti, D; Willems, G

    2012-11-01

    In the subadult age group, third molar development, as well as age-related morphological tooth information, can be observed on panoramic radiographs. The aim of the present study was to combine, in subadults, panoramic radiographic data based on developmental stages of the third molar(s) and morphological measurements from permanent teeth, in order to evaluate the added age-predicting performance. In the age range between 15 and 23 years, 25 gender-specific radiographs were collected within each age category of 1 year. Third molar development was classified and registered according to the 10-point staging and scoring technique proposed by Gleiser and Hunt (1955), modified by Köhler (1994). The Kvaal (1995) measuring technique was applied to the indicated teeth from the individuals' left side. Linear regression models with age as response and third molar scored stages as explanatory variables were developed, and morphological measurements from permanent teeth were added. From the models, determination coefficients (R²) and root-mean-square errors (RMSE) were calculated. Maximal added age information was reported as a 6% R² increase and a 0.10-year decrease in RMSE. Forensic dental age estimations on panoramic radiographic data in the subadult group (15-23 years) should only be based on third molar development.

  7. Common MRI acquisition non-idealities significantly impact the output of the boundary shift integral method of measuring brain atrophy on serial MRI.

    PubMed

    Preboske, Gregory M; Gunter, Jeff L; Ward, Chadwick P; Jack, Clifford R

    2006-05-01

    Measuring rates of brain atrophy from serial magnetic resonance imaging (MRI) studies is an attractive way to assess disease progression in neurodegenerative disorders, particularly Alzheimer's disease (AD). A widely recognized approach is the boundary shift integral (BSI). The objective of this study was to evaluate how several common scan non-idealities affect the output of the BSI algorithm. We created three types of image non-idealities between the image volumes in a serial pair used to measure between-scan change: inconsistent image contrast between serial scans, head motion, and poor signal-to-noise (SNR). In theory the BSI volume difference measured between each pair of images should be zero and any deviation from zero should represent corruption of the BSI measurement by some non-ideality intentionally introduced into the second scan in the pair. Two different BSI measures were evaluated, whole brain and ventricle. As the severity of motion, noise, and non-congruent image contrast increased in the second scan, the calculated BSI values deviated progressively more from the expected value of zero. This study illustrates the magnitude of the error in measures of change in brain and ventricle volume across serial MRI scans that can result from commonly encountered deviations from ideal image quality. The magnitudes of some of the measurement errors seen in this study exceed the disease effect in AD shown in various publications, which range from 1% to 2.78% per year for whole brain atrophy and 5.4% to 13.8% per year for ventricle expansion (Table 1). For example, measurement error may exceed 100% if image contrast properties dramatically differ between the two scans in a measurement pair. Methods to maximize consistency of image quality over time are an essential component of any quantitative serial MRI study.

  8. The Impact of Model and Rainfall Forcing Errors on Characterizing Soil Moisture Uncertainty in Land Surface Modeling

    NASA Technical Reports Server (NTRS)

    Maggioni, V.; Anagnostou, E. N.; Reichle, R. H.

    2013-01-01

    The contribution of rainfall forcing errors relative to model (structural and parameter) uncertainty in the prediction of soil moisture is investigated by integrating the NASA Catchment Land Surface Model (CLSM), forced with hydro-meteorological data, in the Oklahoma region. Rainfall-forcing uncertainty is introduced using a stochastic error model that generates ensemble rainfall fields from satellite rainfall products. The ensemble satellite rain fields are propagated through CLSM to produce soil moisture ensembles. Errors in CLSM are modeled with two different approaches: either by perturbing model parameters (representing model parameter uncertainty) or by adding randomly generated noise (representing model structure and parameter uncertainty) to the model prognostic variables. Our findings highlight that the method currently used in the NASA GEOS-5 Land Data Assimilation System to perturb CLSM variables poorly describes the uncertainty in the predicted soil moisture, even when combined with rainfall model perturbations. On the other hand, by adding model parameter perturbations to rainfall forcing perturbations, a better characterization of uncertainty in soil moisture simulations is observed. Specifically, an analysis of the rank histograms shows that the most consistent ensemble of soil moisture is obtained by combining rainfall and model parameter perturbations. When rainfall forcing and model prognostic perturbations are added, the rank histogram shows a U-shape at the domain average scale, which corresponds to a lack of variability in the forecast ensemble. The more accurate estimation of the soil moisture prediction uncertainty obtained by combining rainfall and parameter perturbations is encouraging for the application of this approach in ensemble data assimilation systems.
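    A small sketch of the rank-histogram diagnostic mentioned above: each observation is ranked within its sorted ensemble, and a roughly flat histogram indicates a statistically consistent ensemble, while a U-shape indicates insufficient spread. The data below are synthetic.

        import numpy as np

        rng = np.random.default_rng(3)
        n_times, n_members = 1000, 12

        truth = rng.normal(0.25, 0.05, n_times)                               # "observed" soil moisture, synthetic
        ensemble = truth[:, None] + rng.normal(0, 0.03, (n_times, n_members)) # forecast ensemble

        # Rank of each observation within its ensemble (0..n_members)
        ranks = (ensemble < truth[:, None]).sum(axis=1)
        hist = np.bincount(ranks, minlength=n_members + 1)
        print("rank histogram counts:", hist)   # roughly flat here; U-shaped when the ensemble lacks spread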

  9. Atmospheric refraction effects on baseline error in satellite laser ranging systems

    NASA Technical Reports Server (NTRS)

    Im, K. E.; Gardner, C. S.

    1982-01-01

    Because of the mathematical complexities involved in exact analyses of baseline errors, it is not easy to isolate atmospheric refraction effects; however, by making certain simplifying assumptions about the ranging system geometry, relatively simple expressions can be derived which relate the baseline errors directly to the refraction errors. The results indicate that even in the absence of other errors, the baseline error for intercontinental baselines can be more than an order of magnitude larger than the refraction error.

  10. Estimating Energy Expenditure with ActiGraph GT9X Inertial Measurement Unit.

    PubMed

    Hibbing, Paul R; Lamunion, Samuel R; Kaplan, Andrew S; Crouter, Scott E

    2018-05-01

    The purpose of this study was to explore whether gyroscope and magnetometer data from the ActiGraph GT9X improved accelerometer-based predictions of energy expenditure (EE). Thirty participants (mean ± SD: age, 23.0 ± 2.3 yr; body mass index, 25.2 ± 3.9 kg·m-2) volunteered to complete the study. Participants wore five GT9X monitors (right hip, both wrists, and both ankles) while performing 10 activities ranging from rest to running. A Cosmed K4b2 was worn during the trial as a criterion measure of EE (30-s averages) expressed in METs. Triaxial accelerometer data (80 Hz) were converted to milli-G using Euclidean norm minus one (ENMO; 1-s epochs). Gyroscope data (100 Hz) were expressed as a vector magnitude (GVM) in degrees per second (1-s epochs), and magnetometer data (100 Hz) were expressed as direction changes per 5 s. Minutes 4-6 of each activity were used for analysis. Three two-regression algorithms were developed for each wear location: 1) ENMO; 2) ENMO and GVM; and 3) ENMO, GVM, and direction changes. Leave-one-participant-out cross-validation was used to evaluate the root mean square error (RMSE) and mean absolute percent error (MAPE) of each algorithm. Adding gyroscope to accelerometer-only algorithms resulted in RMSE reductions between 0.0 METs (right wrist) and 0.17 METs (right ankle), and MAPE reductions between 0.1% (right wrist) and 6.0% (hip). When direction changes were added, RMSE changed by ≤0.03 METs and MAPE by ≤0.21%. The combined use of gyroscope and accelerometer at the hip and ankles improved individual-level prediction of EE compared with accelerometer only. For the wrists, adding gyroscope produced negligible changes. The magnetometer did not meaningfully improve estimates for any algorithm.
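    A minimal sketch of the ENMO conversion described above (Euclidean norm of the triaxial acceleration minus one gravitational unit, truncated at zero, expressed in milli-G and averaged into 1-s epochs); the signal below is a synthetic placeholder.

        import numpy as np

        fs = 80                                   # sampling rate [Hz]
        n = fs * 10                               # 10 s of synthetic triaxial data, in g units
        rng = np.random.default_rng(4)
        acc = rng.normal([0, 0, 1], 0.05, size=(n, 3))   # resting-like signal near 1 g on one axis

        vm = np.linalg.norm(acc, axis=1)          # Euclidean norm of x, y, z
        enmo = np.maximum(vm - 1.0, 0.0) * 1000.0 # subtract 1 g, truncate negatives, convert to milli-G

        # Average into 1-s epochs
        epochs = enmo[: n // fs * fs].reshape(-1, fs).mean(axis=1)
        print("per-second ENMO [milli-G]:", np.round(epochs, 2))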

  11. 77 FR 65506 - Airworthiness Directives; The Boeing Company Airplanes

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-10-29

    ...We propose to supersede an existing airworthiness directive (AD) that applies to certain The Boeing Company Model 757-200 and - 200PF series airplanes. The existing AD currently requires modification of the nacelle strut and wing structure, and repair of any damage found during the modification. Since we issued that AD, a compliance time error involving the optional threshold formula was discovered, which could allow an airplane to exceed the acceptable compliance time for addressing the unsafe condition. This proposed AD would specify a maximum compliance time limit that overrides the optional threshold formula results. We are proposing this AD to prevent fatigue cracking in primary strut structure and consequent reduced structural integrity of the strut.

  12. Application of parameter estimation to highly unstable aircraft

    NASA Technical Reports Server (NTRS)

    Maine, R. E.; Murray, J. E.

    1986-01-01

    This paper discusses the application of parameter estimation to highly unstable aircraft. It includes a discussion of the problems in applying the output error method to such aircraft and demonstrates that the filter error method eliminates these problems. The paper shows that the maximum likelihood estimator with no process noise does not reduce to the output error method when the system is unstable. It also proposes and demonstrates an ad hoc method that is similar in form to the filter error method, but applicable to nonlinear problems. Flight data from the X-29 forward-swept-wing demonstrator is used to illustrate the problems and methods discussed.

  14. Inherent Conservatism in Deterministic Quasi-Static Structural Analysis

    NASA Technical Reports Server (NTRS)

    Verderaime, V.

    1997-01-01

    The cause of the long-suspected excessive conservatism in the prevailing structural deterministic safety factor has been identified as an inherent violation of the error propagation laws when reducing statistical data to deterministic values and then combining them algebraically through successive structural computational processes. These errors are restricted to the applied stress computations, and because mean and variations of the tolerance limit format are added, the errors are positive, serially cumulative, and excessively conservative. Reliability methods circumvent these errors and provide more efficient and uniform safe structures. The document is a tutorial on the deficiencies and nature of the current safety factor and of its improvement and transition to absolute reliability.

  15. Alzheimer’s Disease Drug Development in 2008 and Beyond: Problems and Opportunities

    PubMed Central

    Becker, Robert E.; Greig, Nigel H.

    2008-01-01

    Recently, a number of Alzheimer’s disease (AD) multi-center clinical trials (CT) have failed to provide statistically significant evidence of drug efficacy. To test for possible design or execution flaws we analyzed in detail CTs for two failed drugs that were strongly supported by preclinical evidence and by proven CT AD efficacy for other drugs in their class. Studies of the failed commercial trials suggest that methodological flaws may contribute to the failures and that these flaws lurk within current drug development practices ready to impact other AD drug development [1]. To identify and counter risks we considered the relevance to AD drug development of the following factors: (1) effective dosing of the drug product, (2) reliable evaluations of research subjects, (3) effective implementation of quality controls over data at research sites, (4) resources for practitioners to effectively use CT results in patient care, (5) effective disease modeling, (6) effective research designs. New drugs currently under development for AD address a variety of specific mechanistic targets. Mechanistic targets provide AD drug development opportunities to escape from many of the factors that currently undermine AD clinical pharmacology, especially the problems of inaccuracy and imprecision associated with using rated outcomes. In this paper we conclude that many of the current problems encountered in AD drug development can be avoided by changing practices. Current problems with human errors in clinical trials make it difficult to differentiate drugs that fail to evidence efficacy from apparent failures due to Type II errors. This uncertainty and the lack of publication of negative data impede researchers’ abilities to improve methodologies in clinical pharmacology and to develop a sound body of knowledge about drug actions. We consider the identification of molecular targets as offering further opportunities for overcoming current failures in drug development. PMID:18690832

  16. Calculation of neutral weak nucleon form factors with the AdS/QCD correspondence

    NASA Astrophysics Data System (ADS)

    Lohmann, Mark

    AdS/QCD (Anti-de Sitter/Quantum Chromodynamics) is a mathematical formalism applied to a theory based on the original AdS/CFT (Anti-de Sitter/Conformal Field Theory) correspondence. The aim is to describe properties of the strong force in an essentially non-perturbative way. AdS/QCD theories break the conformal symmetry of the AdS metric (a sacrifice) to arrive at a boundary theory which is QCD-like (a payoff). This correspondence has been used to calculate well-known quantities in nucleon spectra and structure, such as Regge trajectories, form factors, and many others, within an error of less than 20% from experiment. This is impressive considering that ordinary perturbation theory in QCD applied to the strongly interacting domain usually obtains an error of about 30%. In this thesis, the AdS/QCD correspondence method of light-front holography established by Brodsky and de Teramond is used in an attempt to calculate the Dirac and Pauli neutral weak form factors, F_1^Z(Q^2) and F_2^Z(Q^2) respectively, for both the proton and the neutron. With this approach, we were able to determine the neutral weak Dirac form factor for both nucleons and the Pauli form factor for the proton, while the method did not succeed in determining the neutral weak Pauli form factor for the neutron. With these we were also able to extract the proton's strange electric and magnetic form factors, which addresses important questions in nucleon substructure that are currently being investigated through experiments at the Thomas Jefferson National Accelerator Facility.

  17. Using a 'value-added' approach for contextual design of geographic information.

    PubMed

    May, Andrew J

    2013-11-01

    The aim of this article is to demonstrate how a 'value-added' approach can be used for user-centred design of geographic information. An information science perspective was used, with value being the difference in outcomes arising from alternative information sets. Sixteen drivers navigated a complex, unfamiliar urban route, using visual and verbal instructions representing the distance-to-turn and junction layout information presented by typical satellite navigation systems. Data measuring driving errors, navigation errors and driver confidence were collected throughout the trial. The results show how driver performance varied considerably according to the geographic context at specific locations, and that there are specific opportunities to add value with enhanced geographical information. The conclusions are that a value-added approach facilitates a more explicit focus on 'desired' (and feasible) levels of end user performance with different information sets, and is a potentially effective approach to user-centred design of geographic information. Copyright © 2012 Elsevier Ltd and The Ergonomics Society. All rights reserved.

  18. Explosive Transient Camera (ETC) Program

    NASA Technical Reports Server (NTRS)

    Ricker, George

    1991-01-01

    Since the inception of the ETC program, a wide range of new technologies was developed to support this astronomical instrument. The prototype unit was installed at ETC Site 1. The first partially automated observations were made and some major renovations were later added to the ETC hardware. The ETC was outfitted with new thermoelectrically-cooled CCD cameras and a sophisticated vacuum manifold, which, together, made the ETC a much more reliable unit than the prototype. The ETC instrumentation and building were placed under full computer control, allowing the ETC to operate as an automated, autonomous instrument with virtually no human intervention necessary. The first fully-automated operation of the ETC was performed, during which the ETC monitored the error region of the repeating soft gamma-ray burster SGR 1806-21.

  19. Orbit determination and prediction of GEO satellite of BeiDou during repositioning maneuver

    NASA Astrophysics Data System (ADS)

    Cao, Fen; Yang, XuHai; Li, ZhiGang; Sun, BaoQi; Kong, Yao; Chen, Liang; Feng, Chugang

    2014-11-01

    In order to establish a continuous GEO satellite orbit during repositioning maneuvers, a suitable maneuver force model has been established, together with an optimal orbit determination method and strategy. A continuously increasing acceleration is established by constructing a constant force that is equivalent to the pulse force, with the mass of the satellite decreasing throughout the maneuver. This acceleration can be added to other accelerations, such as solar radiation pressure, to obtain the continuous acceleration of the satellite. The orbit determination method and strategy are described, and the resulting determined and predicted orbits are assessed accordingly. The orbit of the GEO satellite during a repositioning maneuver can be determined and predicted using C-band pseudo-range observations of the BeiDou GEO satellite with COSPAR ID 2010-001A in 2011 and 2012. The results indicate that observations before the maneuver do affect orbit determination and prediction, and should therefore be selected appropriately. A more precise orbit and prediction can be obtained, compared to common short-arc methods, when observations starting 1 day prior to the maneuver and 2 h after the maneuver are adopted in the POD (Precise Orbit Determination). The achieved URE (User Range Error), when satellite clock errors are not considered, is better than 2 m within the first 2 h after the maneuver, and less than 3 m for a further 2 h of orbit prediction.

  20. Error sensitivity analysis in 10-30-day extended range forecasting by using a nonlinear cross-prediction error model

    NASA Astrophysics Data System (ADS)

    Xia, Zhiye; Xu, Lisheng; Chen, Hongbin; Wang, Yongqian; Liu, Jinbao; Feng, Wenlan

    2017-06-01

    Extended range forecasting of 10-30 days, which lies between medium-term and climate prediction in terms of timescale, plays a significant role in decision-making processes for the prevention and mitigation of disastrous meteorological events. The sensitivity of initial error, model parameter error, and random error in a nonlinear cross-prediction error (NCPE) model, and their stability over the prediction validity period in 10-30-day extended range forecasting, are analyzed quantitatively. The associated sensitivity of precipitable water, temperature, and geopotential height during cases of heavy rain and hurricane is also discussed. The results are summarized as follows. First, the initial error and random error interact. When the ratio of random error to initial error is small (10^-6 to 10^-2), minor variation in random error cannot significantly change the dynamic features of a chaotic system, and therefore random error has minimal effect on the prediction. When the ratio is larger (around 10^-1 and above, i.e., random error dominates), attention should be paid to the random error instead of only the initial error. When the ratio is around 10^-2 to 10^-1, both influences must be considered. Their mutual effects may bring considerable uncertainty to extended range forecasting, and de-noising is therefore necessary. Second, in terms of model parameter error, the embedding dimension m should be determined from the actual nonlinear time series. The dynamic features of a chaotic system cannot be depicted because of the incomplete structure of the attractor when m is small. When m is large, prediction indicators can vanish because of the scarcity of phase points in phase space. A method for overcoming the cut-off effect (m > 4) is proposed. Third, for heavy rains, precipitable water is more sensitive to the prediction validity period than temperature or geopotential height; however, for hurricanes, geopotential height is most sensitive, followed by precipitable water.

  1. 76 FR 40262 - Determination of Attainment, Approval and Promulgation of Air Quality Implementation Plans...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-07-08

    .... Therefore, EPA is correcting this error by adding paragraph (jj) to 40 CFR 52.726 for Illinois. Section 553...: 42 U.S.C. 7401 et seq. Subpart O--Illinois 0 2. Section 52.726 is amended by adding paragraph (jj) to read as follows: Sec. 52.726 Control strategy. Ozone. * * * * * (jj) Determination of attainment. On...

  2. Verbal and nonverbal communication of events in learning-disability subtypes.

    PubMed

    Loveland, K A; Fletcher, J M; Bailey, V

    1990-08-01

    This study compared a group of nondisabled children (ND) with groups of learning-disabled children who were primarily impaired in reading and arithmetic skills (Reading-Arithmetic Disabled; RAD) and arithmetic but not reading (Arithmetic Disabled; AD) on a set of tasks involving comprehension and production of verbally and nonverbally presented events. Children viewed videotaped scenarios presented in verbal (narrative) and nonverbal (puppet actors) formats and were asked to describe or enact with puppets the events depicted in the stories. Rourke (1978, 1982) has shown that RAD children have problems with verbal skills, whereas AD children have problems with nonverbal skills. Consequently, it was hypothesized that children's performance in comprehending and reproducing stories would be related to the type of learning disability. Results showed that RAD children made more errors than AD children with verbal presentations and describe-responses, whereas AD children made more errors than RAD children with nonverbal presentations and enact-responses. In addition, learning disabled children were more likely than controls to misinterpret affect and motivation depicted in the stories. These results show that learning disabled children have problems with social communication skills, but that the nature of these problems varies with the type of learning disability.

  3. Validation of a MOSFET dosemeter system for determining the absorbed and effective radiation doses in diagnostic radiology.

    PubMed

    Manninen, A-L; Kotiaho, A; Nikkinen, J; Nieminen, M T

    2015-04-01

    This study aimed to validate a MOSFET dosemeter system for determining absorbed and effective doses (EDs) in the dose and energy range used in diagnostic radiology. Energy dependence, dose linearity and repeatability of the dosemeter were examined. The absorbed doses (ADs) were compared at anterior-posterior projection and the EDs were determined at posterior-anterior, anterior-posterior and lateral projections of thoracic imaging using an anthropomorphic phantom. The radiation exposures were made using digital radiography systems. This study revealed that the MOSFET system with high sensitivity bias supply set-up is sufficiently accurate for AD and ED determination. The dosemeter is recommended to be calibrated for energies <60 and >80 kVp. The entrance skin dose level should be at least 5 mGy to minimise the deviation of the individual dosemeter dose. For ED determination, dosemeters should be implanted perpendicular to the surface of the phantom to prevent the angular dependence error. © The Author 2014. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  4. CO2 laser ranging systems study

    NASA Technical Reports Server (NTRS)

    Filippi, C. A.

    1975-01-01

    The conceptual design and error performance of a CO2 laser ranging system are analyzed. Ranging signal and subsystem processing alternatives are identified, and their comprehensive evaluation yields preferred candidate solutions which are analyzed to derive range and range rate error contributions. The performance results are presented in the form of extensive tables and figures which identify the ranging accuracy compromises as a function of the key system design parameters and subsystem performance indexes. The ranging errors obtained are noted to be within the high accuracy requirements of existing NASA/GSFC missions with a proper system design.

  5. Plant traits determine forest flammability

    NASA Astrophysics Data System (ADS)

    Zylstra, Philip; Bradstock, Ross

    2016-04-01

    Carbon and nutrient cycles in forest ecosystems are influenced by their inherent flammability - a property determined by the traits of the component plant species that form the fuel and influence the microclimate of a fire. In the absence of a model capable of explaining the complexity of such a system, however, flammability is frequently represented by simple metrics such as surface fuel load. The implications of modelling fire-flammability feedbacks using surface fuel load were examined and compared to a biophysical, mechanistic model (the Forest Flammability Model, FFM) that incorporates the influence of structural plant traits (e.g. crown shape and spacing) and leaf traits (e.g. thickness, dimensions and moisture). Fuels burn with values of combustibility modelled from leaf traits, transferring convective heat along vectors defined by flame angle and with plume temperatures that decrease with distance from the flame. Flames are re-calculated in one-second time-steps, with new leaves within the plant, neighbouring plants or higher strata ignited when the modelled time to ignition is reached, and other leaves extinguishing when their modelled flame duration is exceeded. The relative influence of surface fuels, vegetation structure and plant leaf traits was examined by comparing flame heights modelled using three treatments that successively added these components within the FFM. Validation was performed across a diverse range of eucalypt forests burnt under widely varying conditions during a forest fire in the Brindabella Ranges west of Canberra (ACT) in 2003. Flame heights ranged from 10 cm to more than 20 m, with an average of 4 m. When modelled from surface fuels alone, flame heights were on average 1.5 m smaller than observed values, and were predicted within the error range 28% of the time. The addition of plant structure produced predicted flame heights that were on average 1.5 m larger than observed, but were correct 53% of the time. The over-prediction in this case was the result of a small number of large errors, where higher strata such as the forest canopy were modelled to ignite but did not. The addition of leaf traits largely addressed this error, so that the mean flame height over-prediction was reduced to 0.3 m and the fully parameterised FFM gave correct predictions 62% of the time. When small (<1 m) flames were excluded, the fully parameterised model correctly predicted flame heights 12 times more often than could be predicted using surface fuels alone, and the Mean Absolute Error was 4 times smaller. The inadequate consideration of plant traits within a mechanistic framework introduces significant error into forest fire behaviour modelling. The FFM provides a solution to this, and an avenue by which plant trait information can be used to better inform Global Vegetation Models and decision-making tools used to mitigate the impacts of fire.

  6. Building a LiDAR point cloud simulator: Testing algorithms for high resolution topographic change

    NASA Astrophysics Data System (ADS)

    Carrea, Dario; Abellán, Antonio; Derron, Marc-Henri; Jaboyedoff, Michel

    2014-05-01

    Terrestrial laser scanning (TLS) is becoming a common tool in the Geosciences, with applications ranging from the generation of high resolution 3D models to the monitoring of unstable slopes and the quantification of morphological changes. Nevertheless, like every measurement technique, TLS still has some limitations that are not clearly understood and that affect the accuracy of the dataset (point cloud). A challenge in LiDAR research is to understand the influence of instrumental parameters on measurement errors during LiDAR acquisition. Indeed, several critical parameters affect scan quality at different ranges: the existence of shadow areas, the spatial resolution (point density), the diameter of the laser beam, the incidence angle, and the single-point accuracy. The objective of this study is to test the main limitations of the algorithms usually applied to point cloud data treatment, from alignment to monitoring. To this end, we built, in the MATLAB environment, a LiDAR point cloud simulator able to recreate the multiple sources of error related to instrumental settings that we normally observe in real datasets. In a first step we characterized the error from a single laser pulse by modelling the influence of range and incidence angle on single-point accuracy. In a second step, we simulated the scanning part of the system in order to analyze the effects of shifting and angular errors. Other parameters have been added to the point cloud simulator, such as point spacing, acquisition window, etc., in order to create point clouds of simple and/or complex geometries. We tested the influence of point density and viewpoint on Iterative Closest Point (ICP) alignment, and also on deformation tracking algorithms applied to the same point cloud geometry, in order to determine alignment and deformation detection thresholds. We also generated a series of high resolution point clouds in order to model small changes in different environments (erosion, landslide monitoring, etc.) and then tested filtering techniques using 3D moving windows in space and time, which considerably reduce data scattering thanks to data redundancy. In conclusion, the simulator allowed us to improve our algorithms and to understand how instrumental error affects final results. It also helped improve the scan acquisition methodology, so as to find the best compromise between point density, positioning and acquisition time, with the best accuracy possible for characterizing topographic change.
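
    As an illustration of the first step described above - modelling how range and incidence angle degrade single-point accuracy - a minimal simulator can perturb each ideal return along the line of sight. The noise model and coefficients below are assumptions chosen for illustration, not the values used in the study.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate_returns(points, scanner_pos, sigma0=0.005, k_range=1e-6, k_angle=0.01):
    """Perturb ideal 3D points with a range- and incidence-angle-dependent
    error applied along the line of sight (all coefficients are illustrative)."""
    points = np.asarray(points, dtype=float)
    los = points - scanner_pos                      # line-of-sight vectors
    ranges = np.linalg.norm(los, axis=1)
    los_unit = los / ranges[:, None]
    # assume a locally horizontal surface: incidence angle from its z-up normal
    cos_inc = np.abs(los_unit[:, 2]).clip(1e-3, 1.0)
    sigma = sigma0 + k_range * ranges**2 + k_angle * (1.0 / cos_inc - 1.0)
    noise = rng.normal(0.0, sigma)[:, None] * los_unit
    return points + noise

# a flat 1 m-spaced grid scanned from 50 m above one corner
x, y = np.meshgrid(np.arange(0, 50.0), np.arange(0, 50.0))
grid = np.column_stack([x.ravel(), y.ravel(), np.zeros(x.size)])
noisy = simulate_returns(grid, scanner_pos=np.array([0.0, 0.0, 50.0]))
print("mean |vertical error| (m):", np.abs(noisy[:, 2] - grid[:, 2]).mean())
```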

  7. Differential sea-state bias: A case study using TOPEX/POSEIDON data

    NASA Technical Reports Server (NTRS)

    Stewart, Robert H.; Devalla, B.

    1994-01-01

    We used selected data from the NASA altimeter TOPEX/POSEIDON to calculate differences in range measured by the C and Ku-band altimeters when the satellite overflew 5 to 15 m waves late at night. The range difference is due to free electrons in the ionosphere and to errors in sea-state bias. For the selected data the ionospheric influence on Ku range is less than 2 cm. Any difference in range over short horizontal distances is due only to a small along-track variability of the ionosphere and to errors in calculating the differential sea-state bias. We find that there is a barely detectable error in the bias in the geophysical data records. The wave-induced error in the ionospheric correction is less than 0.2% of significant wave height. The equivalent error in differential range is less than 1% of wave height. Errors in the differential sea-state bias calculations appear to be small even for extreme wave heights that greatly exceed the conditions on which the bias is based. The results also improved our confidence in the sea-state bias correction used for calculating the geophysical data records. Any error in the correction must influence Ku and C-band ranges almost equally.
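
    For context, the standard dual-frequency (ionosphere-free) relations underlying such C- and Ku-band comparisons can be written as follows; this is the textbook form, not necessarily the exact processing applied to the geophysical data records. With measured ranges R_Ku and R_C at carrier frequencies f_Ku and f_C,

$$R_{\mathrm{iono\,free}} \;=\; \frac{f_{Ku}^{2}\,R_{Ku} - f_{C}^{2}\,R_{C}}{f_{Ku}^{2} - f_{C}^{2}},
\qquad
\Delta R_{\mathrm{iono},Ku} \;=\; \frac{f_{C}^{2}}{f_{C}^{2} - f_{Ku}^{2}}\,\bigl(R_{Ku} - R_{C}\bigr),$$

    so any sea-state bias error that affects R_Ku and R_C unequally maps directly into the measured range difference and hence into the inferred ionospheric correction.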

  8. AMOEBA 2.0: A physics-first approach to biomolecular simulations

    NASA Astrophysics Data System (ADS)

    Rackers, Joshua; Ponder, Jay

    The goal of the AMOEBA force field project is to use classical physics to understand and predict the nature of interactions between biological molecules. Although the project has made significant advances over the past decade, the ultimate goal of predicting binding energies with "chemical accuracy" remains elusive. The primary source of this inaccuracy is the physics of how molecules interact at short range. For example, despite AMOEBA's advanced treatment of electrostatics, the force field dramatically overpredicts the electrostatic energy of DNA stacking interactions. AMOEBA 2.0 works to correct these errors by including simple, first-principles, physics-based terms that account for the quantum mechanical nature of these short-range molecular interactions. We have added a charge penetration term that considerably improves the description of electrostatic interactions at short range. We are reformulating the polarization term of AMOEBA in terms of basic physics assertions, and we are reevaluating the van der Waals term to match ab initio energy decompositions. These additions and changes promise to make AMOEBA more predictive. By including more physical detail of the important short-range interactions of biological molecules, we hope to move closer to the ultimate goal of true predictive power.

  9. Virtual sensors for on-line wheel wear and part roughness measurement in the grinding process.

    PubMed

    Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A; Cabanes, Itziar; Pombo, Iñigo

    2014-05-19

    Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations.
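
    As a drastically simplified stand-in for the Layer-Recurrent network, the sketch below regresses the two targets on a sliding window of past spindle-power samples with an ordinary feed-forward network; the window length, network size, and synthetic data are placeholders, not the authors' configuration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(1)

# synthetic stand-in data: spindle power trace and the two quantities to infer
t = np.arange(2000)
power = 5.0 + 0.002 * t + 0.3 * np.sin(t / 40) + rng.normal(0, 0.05, t.size)
wheel_wear = 0.02 * t + 5 * np.sin(t / 40)            # microns (toy relation)
roughness = 0.2 + 0.0001 * t + 0.02 * np.sin(t / 40)  # Ra, microns (toy relation)

def lagged(x, window=20):
    """Stack a sliding window of past power values as the input features."""
    return np.stack([x[i - window:i] for i in range(window, len(x))])

X = lagged(power)
y = np.column_stack([wheel_wear, roughness])[20:]

model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(32, 16),
                                   max_iter=2000, random_state=0))
model.fit(X[:1500], y[:1500])
pred = model.predict(X[1500:])
print("mean abs wear error (um):", np.abs(pred[:, 0] - y[1500:, 0]).mean())
print("mean abs Ra error (um):  ", np.abs(pred[:, 1] - y[1500:, 1]).mean())
```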

  10. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. It then follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not the source is anticipated. It is expected that the empirical error covariance matrix will give a better statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the problem currently being studied, the truth model uses gravity with spherical, J2 and J4 terms plus a standard exponential-type atmosphere with simple diurnal and random walk components. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors, and are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses examine the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
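
    The central construction - replacing the assumed measurement noise level with the actual weighted residuals when forming the covariance of a weighted least squares estimate - can be compressed into a small linear example; this is an illustration of the general idea, not the orbit-determination implementation described above.

```python
import numpy as np

rng = np.random.default_rng(2)

# Linear measurement model y = H x + noise; the estimator ASSUMES sigma_assumed,
# but the data are generated with a different (unmodeled) error level.
n_obs, n_state = 200, 3
H = rng.normal(size=(n_obs, n_state))
x_true = np.array([1.0, -2.0, 0.5])
sigma_assumed, sigma_actual = 0.1, 0.3
y = H @ x_true + rng.normal(0, sigma_actual, n_obs)

W = np.eye(n_obs) / sigma_assumed**2          # assumed weights
P_theory = np.linalg.inv(H.T @ W @ H)         # traditional covariance
x_hat = P_theory @ H.T @ W @ y                # weighted least squares estimate

# Empirical covariance: scale by the average weighted residual variance,
# so the actual (not assumed) measurement errors set the uncertainty level.
r = y - H @ x_hat
avg_weighted_resid_var = (r.T @ W @ r) / (n_obs - n_state)
P_empirical = P_theory * avg_weighted_resid_var

print("theoretical sigma of x[0]:", np.sqrt(P_theory[0, 0]))
print("empirical sigma of x[0]:  ", np.sqrt(P_empirical[0, 0]))
print("actual error in x[0]:     ", abs(x_hat[0] - x_true[0]))
```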

  11. Installation Restoration Program. Site Investigation Report. Volume 4. 152nd Tactical Reconnaissance Group, Nevada Air National Guard, Reno Cannon International Airport, Reno, Nevada

    DTIC Science & Technology

    1994-04-01


  12. SOL - SIZING AND OPTIMIZATION LANGUAGE COMPILER

    NASA Technical Reports Server (NTRS)

    Scotti, S. J.

    1994-01-01

    SOL is a computer language which is geared to solving design problems. SOL includes the mathematical modeling and logical capabilities of a computer language like FORTRAN but also includes the additional power of non-linear mathematical programming methods (i.e. numerical optimization) at the language level (as opposed to the subroutine level). The language-level use of optimization has several advantages over the traditional, subroutine-calling method of using an optimizer: first, the optimization problem is described in a concise and clear manner which closely parallels the mathematical description of optimization; second, a seamless interface is automatically established between the optimizer subroutines and the mathematical model of the system being optimized; third, the results of an optimization (objective, design variables, constraints, termination criteria, and some or all of the optimization history) are output in a form directly related to the optimization description; and finally, automatic error checking and recovery from an ill-defined system model or optimization description is facilitated by the language-level specification of the optimization problem. Thus, SOL enables rapid generation of models and solutions for optimum design problems with greater confidence that the problem is posed correctly. The SOL compiler takes SOL-language statements and generates the equivalent FORTRAN code and system calls. Because of this approach, the modeling capabilities of SOL are extended by the ability to incorporate existing FORTRAN code into a SOL program. In addition, SOL has a powerful MACRO capability. The MACRO capability of the SOL compiler effectively gives the user the ability to extend the SOL language and can be used to develop easy-to-use shorthand methods of generating complex models and solution strategies. The SOL compiler provides syntactic and semantic error-checking, error recovery, and detailed reports containing cross-references to show where each variable was used. The listings summarize all optimizations, listing the objective functions, design variables, and constraints. The compiler offers error-checking specific to optimization problems, so that simple mistakes will not cost hours of debugging time. The optimization engine used by and included with the SOL compiler is a version of Vanderplaats' ADS system (Version 1.1) modified specifically to work with the SOL compiler. SOL allows the use of the over 100 ADS optimization choices such as Sequential Quadratic Programming, Modified Feasible Directions, interior and exterior penalty function and variable metric methods. Default choices of the many control parameters of ADS are made for the user; however, the user can override any of the ADS control parameters for each individual optimization. The SOL language and compiler were developed with an advanced compiler-generation system to ensure correctness and simplify program maintenance. Thus, SOL's syntax was defined precisely by an LALR(1) grammar and the SOL compiler's parser was generated automatically from that grammar with a parser-generator. Hence, unlike ad hoc, manually coded interfaces, the SOL compiler's lexical analysis ensures that the SOL compiler recognizes all legal SOL programs, can recover from and correct for many errors, and reports the location of errors to the user. This version of the SOL compiler has been implemented on VAX/VMS computer systems and requires 204 KB of virtual memory to execute.
Since the SOL compiler produces FORTRAN code, it requires the VAX FORTRAN compiler to produce an executable program. The SOL compiler consists of 13,000 lines of Pascal code. It was developed in 1986 and last updated in 1988. The ADS and other utility subroutines amount to 14,000 lines of FORTRAN code and were also updated in 1988.

  13. Forecasting Kp from solar wind data: input parameter study using 3-hour averages and 3-hour range values

    NASA Astrophysics Data System (ADS)

    Wintoft, Peter; Wik, Magnus; Matzka, Jürgen; Shprits, Yuri

    2017-11-01

    We have developed neural network models that predict Kp from upstream solar wind data. We study the importance of various input parameters, starting with the magnetic component Bz, particle density n, and velocity V, and then adding the total field B and the By component. As we also notice a seasonal and UT variation in average Kp, we include functions of day-of-year and UT. Finally, as Kp is a global representation of the maximum range of geomagnetic variation over 3-hour UT intervals, we conclude that sudden changes in the solar wind can have a big effect on Kp, even though it is a 3-hour value. Therefore, 3-hour solar wind averages will not always appropriately represent the solar wind condition, and we introduce 3-hour maxima and minima values to address this problem to some degree. We find that introducing the total field B and the 3-hour maxima and minima, derived from 1-minute solar wind data, has a great influence on the performance. Due to the low number of samples for high Kp values there can be considerable variation in predicted Kp for different networks with similar validation errors. We address this issue by using an ensemble of networks from which we use the median predicted Kp. The models (ensembles of networks) provide prediction lead times in the range 20-90 min, given by the time it takes a solar wind structure to travel from L1 to Earth. Two models are implemented that can be run with real time data: (1) IRF-Kp-2017-h3 uses the 3-hour averages of the solar wind data and (2) IRF-Kp-2017 uses, in addition to the averages, the minima and maxima values. The IRF-Kp-2017 model has an RMS error of 0.55 and a linear correlation of 0.92 based on an independent test set with final Kp covering 2 years using ACE Level 2 data. The IRF-Kp-2017-h3 model has RMSE = 0.63 and correlation = 0.89. We also explore the errors when tested on another two-year period with real-time ACE data, which gives RMSE = 0.59 for IRF-Kp-2017 and RMSE = 0.73 for IRF-Kp-2017-h3. The errors as a function of Kp and for different years are also studied.
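
    The remedy for run-to-run variability described above - training an ensemble of networks and taking the median predicted Kp - can be sketched as follows; the regressor type, features, and data are placeholders rather than the IRF-Kp-2017 models themselves.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(3)

# placeholder solar-wind features (e.g. Bz, n, V, B, By, 3-hour max/min variants)
X = rng.normal(size=(5000, 8))
kp = np.clip(3 + 1.5 * (-X[:, 0]) + 0.5 * X[:, 2] + rng.normal(0, 0.7, 5000), 0, 9)

X_train, X_test = X[:4000], X[4000:]
y_train, y_test = kp[:4000], kp[4000:]

# ensemble of networks differing only in their random initialization
ensemble = [MLPRegressor(hidden_layer_sizes=(16,), max_iter=1000,
                         random_state=seed).fit(X_train, y_train)
            for seed in range(10)]

preds = np.stack([m.predict(X_test) for m in ensemble])
kp_pred = np.median(preds, axis=0)             # median over the ensemble
rmse = np.sqrt(np.mean((kp_pred - y_test) ** 2))
print(f"ensemble-median RMSE: {rmse:.2f}")
```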

  14. Nuclear binding energy using semi empirical mass formula

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ankita,, E-mail: ankitagoyal@gmail.com; Suthar, B.

    2016-05-06

    In the present communication, a semi-empirical mass formula based on the liquid drop model is presented. Nuclear binding energies are calculated using the semi-empirical mass formula with the various sets of constants given by different researchers. We compare these calculated values with experimental data, and a comparative study using error plots is carried out to identify suitable constants. The study is extended to find more suitable constants that reduce the error.
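
    For reference, the liquid-drop binding energy referred to in the abstract has the standard Bethe-Weizsacker form sketched below; the coefficient values shown are one commonly quoted set and are not necessarily those compared in the paper.

```python
import math

def binding_energy(A, Z, a_v=15.75, a_s=17.8, a_c=0.711, a_a=23.7, a_p=11.18):
    """Semi-empirical (liquid drop) binding energy in MeV.
    B = a_v*A - a_s*A^(2/3) - a_c*Z(Z-1)/A^(1/3) - a_a*(A-2Z)^2/A + delta
    Coefficients are one commonly quoted set (MeV); other authors differ slightly."""
    N = A - Z
    delta = 0.0
    if Z % 2 == 0 and N % 2 == 0:        # even-even nucleus: pairing bonus
        delta = +a_p / math.sqrt(A)
    elif Z % 2 == 1 and N % 2 == 1:      # odd-odd nucleus: pairing penalty
        delta = -a_p / math.sqrt(A)
    return (a_v * A
            - a_s * A ** (2 / 3)
            - a_c * Z * (Z - 1) / A ** (1 / 3)
            - a_a * (A - 2 * Z) ** 2 / A
            + delta)

# example: Fe-56 (experimental binding energy is about 492 MeV)
print(f"B(A=56, Z=26) = {binding_energy(56, 26):.1f} MeV")
```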

  15. Astigmatism error modification for absolute shape reconstruction using Fourier transform method

    NASA Astrophysics Data System (ADS)

    He, Yuhang; Li, Qiang; Gao, Bo; Liu, Ang; Xu, Kaiyuan; Wei, Xiaohong; Chai, Liqun

    2014-12-01

    A method is proposed to modify astigmatism errors in the absolute shape reconstruction of optical flats using the Fourier transform method. If a transmission flat and a reflection flat are used in an absolute test, two translation measurements allow the absolute shapes to be obtained by making use of the characteristic relationship between the differential and original shapes in the spatial frequency domain. However, because the translation device cannot guarantee that the test and reference flats remain rigidly parallel to each other after the translations, a tilt error exists in the obtained differential data, which causes power and astigmatism errors in the reconstructed shapes. In order to modify the astigmatism errors, a rotation measurement is added. Based on the rotational invariance of the form of the Zernike polynomials in a circular domain, the astigmatism terms are calculated by solving polynomial coefficient equations related to the rotation differential data, and the erroneous astigmatism terms are subsequently corrected. Computer simulation proves the validity of the proposed method.

  16. Circular carrier squeezing interferometry: Suppressing phase shift error in simultaneous phase-shifting point-diffraction interferometer

    NASA Astrophysics Data System (ADS)

    Zheng, Donghui; Chen, Lei; Li, Jinpeng; Sun, Qinyuan; Zhu, Wenhua; Anderson, James; Zhao, Jian; Schülzgen, Axel

    2018-03-01

    Circular carrier squeezing interferometry (CCSI) is proposed and applied to suppress phase shift error in a simultaneous phase-shifting point-diffraction interferometer (SPSPDI). By introducing a defocus, four phase-shifting point-diffraction interferograms with a circular carrier are acquired and then converted into linear carrier interferograms by a coordinate transform. The transformed interferograms are rearranged into a spatial-temporal fringe (STF), so that the error lobe is separated from the phase lobe in the Fourier spectrum of the STF; filtering the phase lobe to calculate the extended phase, combined with the corresponding inverse coordinate transform, exactly retrieves the initial phase. Both simulations and experiments validate the ability of CCSI to suppress the ripple error generated by the phase shift error. Compared with carrier squeezing interferometry (CSI), CCSI is effective in situations where a linear carrier is difficult to introduce, and has the added benefit of eliminating retrace error.

  17. Parameter Estimation for GRACE-FO Geometric Ranging Errors

    NASA Astrophysics Data System (ADS)

    Wegener, H.; Mueller, V.; Darbeheshti, N.; Naeimi, M.; Heinzel, G.

    2017-12-01

    Onboard GRACE-FO, the novel Laser Ranging Instrument (LRI) serves as a technology demonstrator, but it is a fully functional instrument that provides an additional high-precision measurement of the primary mission observable: the biased range between the two spacecraft. Its two largest expected error sources are laser frequency noise and tilt-to-length (TTL) coupling. While not much can be done about laser frequency noise, the mechanics of the TTL error are well understood. They depend, however, on unknown parameters. In order to improve the quality of the ranging data, it is hence essential to accurately estimate these parameters and remove the resulting TTL error from the data. Means to do so will be discussed: in particular, the possibility of using calibration maneuvers, the utility of the attitude information provided by the LRI via Differential Wavefront Sensing (DWS), and the benefit of combining LRI ranging data with ranging data from the established microwave ranging instrument.

  18. High Precision Ranging and Range-Rate Measurements over Free-Space-Laser Communication Link

    NASA Technical Reports Server (NTRS)

    Yang, Guangning; Lu, Wei; Krainak, Michael; Sun, Xiaoli

    2016-01-01

    We present a high-precision ranging and range-rate measurement system operating over an optical-ranging or combined ranging-communication link. A complete bench-top optical communication system was built, comprising a ground terminal and a space terminal. Ranging and range-rate tests were conducted in two configurations. In the communication configuration at the 622 Mbps data rate, we achieved a two-way range-rate error of 2 microns/s, or a modified Allan deviation of 9 x 10(exp -15) at a 10-second averaging time. Ranging and range-rate errors as a function of the bit error rate of the communication link are reported; they are not sensitive to the link error rate. In the single-frequency amplitude modulation mode, we report a two-way range-rate error of 0.8 microns/s, or a modified Allan deviation of 2.6 x 10(exp -15) at a 10-second averaging time. We identified the major noise sources in the current system as transmitter modulation injected noise and receiver electronics generated noise. A new, improved system will be constructed to further improve performance in both operating modes.

  19. [Clinical economics: a concept to optimize healthcare services].

    PubMed

    Porzsolt, F; Bauer, K; Henne-Bruns, D

    2012-03-01

    Clinical economics strives to support healthcare decisions with economic considerations. Making economic decisions does not mean cutting costs but rather weighing the added value gained against the burden that has to be accepted. The necessary rules are offered by various disciplines, such as economics, epidemiology and ethics. Medical doctors are aware of these rules but do not apply them in daily clinical practice. This lack of orientation leads to preventable errors. Examples of such errors are given for diagnosis, screening, prognosis and therapy. As these errors can be prevented by applying clinical economic principles, the possible consequences for the optimization of healthcare are discussed.

  20. Detection of spatio-temporal change of ocean acoustic velocity for observing seafloor crustal deformation applying seismological methods

    NASA Astrophysics Data System (ADS)

    Eto, S.; Nagai, S.; Tadokoro, K.

    2011-12-01

    Our group has developed a system for observing seafloor crustal deformation that combines acoustic ranging and kinematic GPS positioning techniques. One effective way to reduce the estimation error of the submarine benchmark position in our system is to model the variation of ocean acoustic velocity. We estimated various 1-dimensional velocity-depth models under some constraints, because our simple acquisition procedure for acoustic ranging data makes it difficult to estimate a 3-dimensional acoustic velocity structure including temporal change. We then applied the joint hypocenter determination method from seismology [Kissling et al., 1994] to the acoustic ranging data. We assume two conditions as constraints in the inversion procedure: 1) the acoustic velocity in the deeper part is fixed, because it is usually stable in both space and time, and 2) each inverted velocity model must decrease with depth. Two remarkable spatio-temporal changes of acoustic velocity were found: 1) variations of travel-time residuals at the same points within a short time, and 2) larger differences between residuals at neighboring points derived from travel times to different benchmarks. The first result cannot be explained only by the effect of atmospheric condition changes, including heating by sunlight. To verify the residual variations mentioned as the second result, we performed forward modeling of acoustic ranging data using velocity models with added velocity anomalies. We calculate travel times with a pseudo-bending ray tracing method [Um and Thurber, 1987] to examine the effects of a velocity anomaly on the travel-time differences. Comparison between these residuals and the travel-time differences from the forward modeling shows that velocity anomaly bodies at shallower depths can produce these anomalous residuals, which may indicate moving water bodies. We need to apply an acoustic velocity structure model with velocity anomalies in the analysis of acoustic ranging data and/or to develop a new system with a large number of sea surface stations to detect them, which may reduce the error of the seafloor benchmark position.

  1. Deterministic ion beam material adding technology for high-precision optical surfaces.

    PubMed

    Liao, Wenlin; Dai, Yifan; Xie, Xuhui; Zhou, Lin

    2013-02-20

    Although ion beam figuring (IBF) provides a highly deterministic method for the precision figuring of optical components, several problems still need to be addressed, such as the limited correcting capability for mid-to-high spatial frequency surface errors and low machining efficiency for pit defects on surfaces. We propose a figuring method named deterministic ion beam material adding (IBA) technology to solve those problems in IBF. The current deterministic optical figuring mechanism, which is dedicated to removing local protuberances on optical surfaces, is enriched and developed by the IBA technology. Compared with IBF, this method can realize the uniform convergence of surface errors, where the particle transferring effect generated in the IBA process can effectively correct the mid-to-high spatial frequency errors. In addition, IBA can rapidly correct the pit defects on the surface and greatly improve the machining efficiency of the figuring process. The verification experiments are accomplished on our experimental installation to validate the feasibility of the IBA method. First, a fused silica sample with a rectangular pit defect is figured by using IBA. Through two iterations within only 47.5 min, this highly steep pit is effectively corrected, and the surface error is improved from the original 24.69 nm root mean square (RMS) to the final 3.68 nm RMS. Then another experiment is carried out to demonstrate the correcting capability of IBA for mid-to-high spatial frequency surface errors, and the final results indicate that the surface accuracy and surface quality can be simultaneously improved.

  2. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM.

    PubMed

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei; Song, Houbing

    2018-01-15

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and the support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, an SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added to the PSO to improve its ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has higher prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors.
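
    The overall pipeline - a swarm optimizer searching SVM hyperparameters to minimize prediction error - can be illustrated with a plain PSO (without the natural-selection and simulated-annealing refinements of NAPSO) wrapped around scikit-learn's SVR; the bounds, swarm size, and toy data are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(4)

# toy stand-in for a sensor's dynamic measurement error series
X = rng.uniform(-3, 3, size=(300, 2))
y = np.sin(X[:, 0]) + 0.3 * X[:, 1] + rng.normal(0, 0.05, 300)

def fitness(params):
    """Negative cross-validated MSE of an SVR with log-scaled (C, gamma)."""
    C, gamma = np.exp(params)
    model = SVR(C=C, gamma=gamma)
    return cross_val_score(model, X, y, cv=3,
                           scoring="neg_mean_squared_error").mean()

# plain particle swarm optimization over log(C), log(gamma)
n_particles, n_iter = 12, 20
lo, hi = np.log([1e-2, 1e-3]), np.log([1e3, 1e1])
pos = rng.uniform(lo, hi, size=(n_particles, 2))
vel = np.zeros_like(pos)
pbest, pbest_val = pos.copy(), np.array([fitness(p) for p in pos])
gbest = pbest[pbest_val.argmax()].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((n_particles, 1)), rng.random((n_particles, 1))
    vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    vals = np.array([fitness(p) for p in pos])
    improved = vals > pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[pbest_val.argmax()].copy()

print("best (C, gamma):", np.exp(gbest), " best CV-MSE:", -pbest_val.max())
```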

  3. The effect of memory and context changes on color matches to real objects.

    PubMed

    Allred, Sarah R; Olkkonen, Maria

    2015-07-01

    Real-world color identification tasks often require matching the color of objects between contexts and after a temporal delay, thus placing demands on both perceptual and memory processes. Although the mechanisms of matching colors between different contexts have been widely studied under the rubric of color constancy, little research has investigated the role of long-term memory in such tasks or how memory interacts with color constancy. To investigate this relationship, observers made color matches to real study objects that spanned color space, and we independently manipulated the illumination impinging on the objects, the surfaces in which objects were embedded, and the delay between seeing the study object and selecting its color match. Adding a 10-min delay increased both the bias and variability of color matches compared to a baseline condition. These memory errors were well accounted for by modeling memory as a noisy but unbiased version of perception constrained by the matching methods. Surprisingly, we did not observe significant increases in errors when illumination and surround changes were added to the 10-minute delay, although the context changes alone did elicit significant errors.

  4. Unitary reconstruction of secret for stabilizer-based quantum secret sharing

    NASA Astrophysics Data System (ADS)

    Matsumoto, Ryutaroh

    2017-08-01

    We propose a unitary procedure to reconstruct quantum secret for a quantum secret sharing scheme constructed from stabilizer quantum error-correcting codes. Erasure correcting procedures for stabilizer codes need to add missing shares for reconstruction of quantum secret, while unitary reconstruction procedures for certain class of quantum secret sharing are known to work without adding missing shares. The proposed procedure also works without adding missing shares.

  5. Determination of thorium and of rare earth elements in cerium earth minerals and ores

    USGS Publications Warehouse

    Carron, M.K.; Skinner, D.L.; Stevens, R.E.

    1955-01-01

    The conventional oxalate method for precipitating thorium and the rare earth elements in acid solution exhibits definite solubilities of these elements. The present work was undertaken to establish conditions overcoming these solubilities and to find optimum conditions for precipitating thorium and the rare earth elements as hydroxides and sebacates. The investigations resulted in a reliable procedure applicable to samples in which the cerium group elements predominate. The oxalate precipitations are made from homogeneous solution at pH 2 by adding a prepared solution of anhydrous oxalic acid in methanol instead of the more expensive crystalline methyl oxalate. Calcium is added as a carrier. Quantitative precipitation of thorium and the rare earth elements is ascertained by further small additions of calcium to the supernatant liquid, until the added calcium precipitates as oxalate within 2 minutes. Calcium is removed by precipitating the hydroxides of thorium and rare earths at room temperature by adding ammonium hydroxide to pH > 10. Thorium is separated as the sebacate at pH 2.5, and the rare earths are precipitated with ammonium sebacate at pH 9. Maximum errors for combined weights of thorium and rare earth oxides on synthetic mixtures are ±0.6 mg. Maximum error for separated thoria is ±0.5 mg.

  6. Adding Postal Follow-Up to a Web-Based Survey of Primary Care and Gastroenterology Clinic Physician Chiefs Improved Response Rates but not Response Quality or Representativeness.

    PubMed

    Partin, Melissa R; Powell, Adam A; Burgess, Diana J; Haggstrom, David A; Gravely, Amy A; Halek, Krysten; Bangerter, Ann; Shaukat, Aasma; Nelson, David B

    2015-09-01

    This study assessed whether postal follow-up to a web-based physician survey improves response rates, response quality, and representativeness. We recruited primary care and gastroenterology chiefs at 125 Veterans Affairs medical facilities to complete a 10-min web-based survey on colorectal cancer screening and diagnostic practices in 2010. We compared response rates, response errors, and representativeness in the primary care and gastroenterology samples before and after adding postal follow-up. Adding postal follow-up increased response rates by 20-25 percentage points; markedly greater increases than predicted from a third e-mail reminder. In the gastroenterology sample, the mean number of response errors made by web responders (0.25) was significantly smaller than the mean number made by postal responders (2.18), and web responders provided significantly longer responses to open-ended questions. There were no significant differences in these outcomes in the primary care sample. Adequate representativeness was achieved before postal follow-up in both samples, as indicated by the lack of significant differences between web responders and the recruitment population on facility characteristics. We conclude adding postal follow-up to this web-based physician leader survey improved response rates but not response quality or representativeness. © The Author(s) 2013.

  7. An Experiment in Scientific Program Understanding

    NASA Technical Reports Server (NTRS)

    Stewart, Mark E. M.; Owen, Karl (Technical Monitor)

    2000-01-01

    This paper concerns a procedure that analyzes aspects of the meaning or semantics of scientific and engineering code. This procedure involves taking a user's existing code, adding semantic declarations for some primitive variables, and parsing this annotated code using multiple, independent expert parsers. These semantic parsers encode domain knowledge and recognize formulae in different disciplines including physics, numerical methods, mathematics, and geometry. The parsers will automatically recognize and document some static, semantic concepts and help locate some program semantic errors. Results are shown for three intensively studied codes and seven blind test cases; all test cases are state of the art scientific codes. These techniques may apply to a wider range of scientific codes. If so, the techniques could reduce the time, risk, and effort required to develop and modify scientific codes.

  8. Non-linear dynamic compensation system

    NASA Technical Reports Server (NTRS)

    Lin, Yu-Hwan (Inventor); Lurie, Boris J. (Inventor)

    1992-01-01

    A non-linear dynamic compensation subsystem is added in the feedback loop of a high precision optical mirror positioning control system to smoothly alter the control system response bandwidth from a relatively wide response bandwidth optimized for speed of control system response to a bandwidth sufficiently narrow to reduce position errors resulting from the quantization noise inherent in the inductosyn used to measure mirror position. The non-linear dynamic compensation system includes a limiter for limiting the error signal within preselected limits, a compensator for modifying the limiter output to achieve the reduced bandwidth response, and an adder for combining the modified error signal with the difference between the limited and unlimited error signals. The adder output is applied to control system motor so that the system response is optimized for accuracy when the error signal is within the preselected limits, optimized for speed of response when the error signal is substantially beyond the preselected limits and smoothly varied therebetween as the error signal approaches the preselected limits.
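
    A scalar sketch of the signal path described above - limiter, narrow-bandwidth compensation of the limited error, and an adder that restores the out-of-range remainder - is given below; the first-order filter and limit value are illustrative assumptions, not the patented design.

```python
import numpy as np

def compensate(error, limit=1.0, alpha=0.05):
    """Toy non-linear dynamic compensation of an error-signal sequence.
    Small errors pass through a slow (narrow-bandwidth) first-order filter;
    the portion of the error beyond +/-limit bypasses the filter unchanged."""
    out = np.empty_like(error)
    filt = 0.0
    for i, e in enumerate(error):
        e_lim = np.clip(e, -limit, limit)       # limiter
        filt += alpha * (e_lim - filt)          # narrow-bandwidth compensator
        out[i] = filt + (e - e_lim)             # adder restores the excess
    return out

# quantization-like noise around zero plus one large step disturbance
t = np.arange(500)
err = 0.2 * np.sign(np.sin(0.9 * t)) + np.where(t == 250, 5.0, 0.0)
drive = compensate(err)
print("noise attenuation near zero:", drive[:200].std() / err[:200].std())
```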

  9. Self-checking self-repairing computer nodes using the mirror processor

    NASA Technical Reports Server (NTRS)

    Tamir, Yuval

    1992-01-01

    Circuitry added to fault-tolerant systems for concurrent error detection usually reduces performance. Using a technique called micro rollback, it is possible to eliminate most of the performance penalty of concurrent error detection. Error detection is performed in parallel with intermodule communication, and erroneous state changes are later undone. The author reports on the design and implementation of a VLSI RISC microprocessor, called the Mirror Processor (MP), which is capable of micro rollback. In order to achieve concurrent error detection, two MP chips operate in lockstep, comparing external signals and a signature of internal signals every clock cycle. If a mismatch is detected, both processors roll back to the beginning of the cycle in which the error occurred. In some cases the erroneous state is corrected by copying a value from the fault-free processor to the faulty processor. The architecture, microarchitecture, and VLSI implementation of the MP, emphasizing its error-detection, error-recovery, and self-diagnosis capabilities, are described.

  10. GDF v2.0, an enhanced version of GDF

    NASA Astrophysics Data System (ADS)

    Tsoulos, Ioannis G.; Gavrilis, Dimitris; Dermatas, Evangelos

    2007-12-01

    An improved version of the function estimation program GDF is presented. The main enhancements of the new version include multi-output function estimation, the capability of defining custom functions in the grammar, and selection of the error function. The new version has been evaluated on a series of classification and regression datasets that are widely used for the evaluation of such methods. It is compared to two known neural networks and outperforms them in 5 (out of 10) datasets.
    Program summary
    Title of program: GDF v2.0
    Catalogue identifier: ADXC_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXC_v2_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
    No. of lines in distributed program, including test data, etc.: 98 147
    No. of bytes in distributed program, including test data, etc.: 2 040 684
    Distribution format: tar.gz
    Programming language: GNU C++
    Computer: The program is designed to be portable to all systems running the GNU C++ compiler
    Operating system: Linux, Solaris, FreeBSD
    RAM: 200000 bytes
    Classification: 4.9
    Does the new version supersede the previous version?: Yes
    Nature of problem: Function estimation tries to discover, from a series of input data, a functional form that best describes them. This can be performed with parametric models whose parameters adapt according to the input data.
    Solution method: Functional forms are created by genetic programming as approximations for the symbolic regression problem.
    Reasons for new version: The GDF package was extended in order to be more flexible and user-customizable than the old package. The user can extend the package by defining custom error functions and can extend the grammar by adding new functions to the function repertoire. Also, the new version can perform function estimation for multi-output functions and can be used for classification problems.
    Summary of revisions: The following features have been added to the package GDF: (1) Multi-output function approximation: the package can now approximate any function f: R^n → R^m, which also gives it the capability of performing classification and not only regression. (2) User-defined functions can be added to the repertoire of the grammar, extending the regression capabilities of the package; this feature is limited to 3 functions, but this number can easily be increased. (3) Capability of selecting the error function: apart from the mean square error, the package now offers other error functions such as the mean absolute square error and the maximum square error, and user-defined error functions can be added to the set of error functions. (4) More verbose output: the main program displays more information to the user, as well as the default values of the parameters; the package also allows the user to define an output file where the output of the gdf program on the testing set is stored after the termination of the process.
    Additional comments: A technical report describing the revisions, experiments and test runs is packaged with the source code.
    Running time: Depends on the training data.

  11. Statistics of the residual refraction errors in laser ranging data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.

    1977-01-01

    A theoretical model for the range error covariance was derived by assuming that the residual refraction errors are due entirely to errors in the meteorological data which are used to calculate the atmospheric correction. The properties of the covariance function are illustrated by evaluating the theoretical model for the special case of a dense network of weather stations uniformly distributed within a circle.

  12. On the determinants of the conjunction fallacy: probability versus inductive confirmation.

    PubMed

    Tentori, Katya; Crupi, Vincenzo; Russo, Selena

    2013-02-01

    Major recent interpretations of the conjunction fallacy postulate that people assess the probability of a conjunction according to (non-normative) averaging rules as applied to the constituents' probabilities or represent the conjunction fallacy as an effect of random error in the judgment process. In the present contribution, we contrast such accounts with a different reading of the phenomenon based on the notion of inductive confirmation as defined by contemporary Bayesian theorists. Averaging rule hypotheses along with the random error model and many other existing proposals are shown to all imply that conjunction fallacy rates would rise as the perceived probability of the added conjunct does. By contrast, our account predicts that the conjunction fallacy depends on the added conjunct being perceived as inductively confirmed. Four studies are reported in which the judged probability versus confirmation of the added conjunct have been systematically manipulated and dissociated. The results consistently favor a confirmation-theoretic account of the conjunction fallacy against competing views. Our proposal is also discussed in connection with related issues in the study of human inductive reasoning. 2013 APA, all rights reserved

  13. Articulation in schoolchildren and adults with neurofibromatosis type 1.

    PubMed

    Cosyns, Marjan; Mortier, Geert; Janssens, Sandra; Bogaert, Famke; D'Hondt, Stephanie; Van Borsel, John

    2012-01-01

    Several authors mentioned the occurrence of articulation problems in the neurofibromatosis type 1 (NF1) population. However, few studies have undertaken a detailed analysis of the articulation skills of NF1 patients, especially in schoolchildren and adults. Therefore, the aim of the present study was to examine in depth the articulation skills of NF1 schoolchildren and adults, both phonetically and phonologically. Speech samples were collected from 43 Flemish NF1 patients (14 children and 29 adults), ranging in age between 7 and 53 years, using a standardized speech test in which all Flemish single speech sounds and most clusters occur in all their permissible syllable positions. Analyses concentrated on consonants only and included a phonetic inventory, a phonetic, and a phonological analysis. It was shown that phonetic inventories were incomplete in 16.28% (7/43) of participants, in which totally correct realizations of the sibilants /ʃ/ and/or /ʒ/ were missing. Phonetic analysis revealed that distortions were the predominant phonetic error type. Sigmatismus stridens, multiple ad- or interdentality, and, in children, rhotacismus non vibrans were frequently observed. From a phonological perspective, the most common error types were substitution and syllable structure errors. Particularly, devoicing, cluster simplification, and, in children, deletion of the final consonant of words were perceived. Further, it was demonstrated that significantly more men than women presented with an incomplete phonetic inventory, and that girls tended to display more articulation errors than boys. Additionally, children exhibited significantly more articulation errors than adults, suggesting that although the articulation skills of NF1 patients evolve positively with age, articulation problems do not resolve completely from childhood to adulthood. As such, the articulation errors made by NF1 adults may be regarded as residual articulation disorders. It can be concluded that the speech of NF1 patients is characterized by mild articulation disorders at an age where this is no longer expected. Readers will be able to describe neurofibromatosis type 1 (NF1) and explain the articulation errors displayed by schoolchildren and adults with this genetic syndrome. © 2011 Elsevier Inc. All rights reserved.

  14. Noise-induced errors in geophysical parameter estimation from retarding potential analyzers in low Earth orbit

    NASA Astrophysics Data System (ADS)

    Debchoudhury, Shantanab; Earle, Gregory

    2017-04-01

    Retarding Potential Analyzers (RPA) have a rich flight heritage. Standard curve-fitting analysis techniques exist that can infer state variables in the ionospheric plasma environment from RPA data, but the estimation process is prone to errors arising from a number of sources. Previous work has focused on the effects of grid geometry on uncertainties in estimation; however, no prior study has quantified the estimation errors due to additive noise. In this study, we characterize the errors in estimation of thermal plasma parameters by adding noise to the simulated data derived from the existing ionospheric models. We concentrate on low-altitude, mid-inclination orbits since a number of nano-satellite missions are focused on this region of the ionosphere. The errors are quantified and cross-correlated for varying geomagnetic conditions.

  15. Acquisition, calibration, and performance of airborne high-resolution ADS40 SH52 sensor data for monitoring the Colorado River below Glen Canyon Dam

    NASA Astrophysics Data System (ADS)

    Davis, P. A.; Cagney, L. E.; Kohl, K. A.; Gushue, T. M.; Fritzinger, C.; Bennett, G. E.; Hamill, J. F.; Melis, T. S.

    2010-12-01

    Periodically, the Grand Canyon Monitoring and Research Center of the U.S. Geological Survey collects and interprets high-resolution (20-cm), airborne multispectral imagery and digital surface models (DSMs) to monitor the effects of Glen Canyon Dam operations on natural and cultural resources of the Colorado River in Grand Canyon. We previously employed the first generation of the ADS40 in 2000 and the Zeiss-Imaging Digital Mapping Camera (DMC) in 2005. Data from both sensors displayed band-image misregistration owing to multiple sensor optics and image smearing along abrupt scarps due to errors in image rectification software, both of which increased post-processing time, cost, and errors from image classification. Also, the near-infrared gain on the early, 8-bit ADS40 was not properly set and its signal was saturated for the more chlorophyll-rich vegetation, which limited our vegetation mapping. Both sensors had stereo panchromatic capability for generating a DSM. The ADS40 performed to specifications; the DMC failed. In 2009, we employed the new ADS40 SH52 to acquire 11-bit multispectral data with a single lens (20-cm positional accuracy), as well as stereo panchromatic data that provided a 1-m cell DSM (40-cm root-mean-square vertical error at one sigma). Analyses of the multispectral data showed near-perfect registration of its four band images at our 20-cm resolution, a linear response to ground reflectance, and a large dynamic range and good sensitivity (except for the blue band). Data were acquired over a 10-day period for the 450-km-long river corridor, in which acquisition time and atmospheric conditions varied considerably during inclement weather. We received 266 orthorectified flightlines for the corridor, choosing to calibrate and mosaic the data ourselves to ensure a flawless mosaic with consistent, realistic spectral information. A linear least-squares cross-calibration of overlapping flightlines for the corridor showed that the dominant factors in inter-flightline variability were solar zenith angle and atmospheric scattering, which respectively affect the slope and intercept of the calibration. The inter-flightline calibration slopes were consistently close to the square of the ratio of the cosines of the zenith angles of each pair of overlapping flightlines. Our results corroborate previous observations that the cosine of solar zenith angle is a good approximation for atmospheric transmission, and the use of its square in radiometric calibrations may compensate for that effect and the effect of non-nadir sun angle on surface reflectance. It was more expedient to acquire imagery for each sub-linear river segment by collecting 5-6 parallel flightlines; river sinuosity caused us to use 2-3 flightlines for each segment. Surfaces near flightline edges were often smeared and replaced with adjacent, more nadir-viewed flightline data. Eliminating surface smearing was the most time-consuming aspect of creating a flawless image mosaic for the river corridor, but its removal will increase the efficiency and accuracy of image analyses of monitoring parameters of interest to river managers.
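
    Stated as an equation, the observation about the calibration slopes is that, for two overlapping flightlines acquired at solar zenith angles \theta_1 and \theta_2, the at-sensor signals DN_1 and DN_2 of the same ground surface are related approximately by

$$DN_{2} \;\approx\; \left(\frac{\cos\theta_{2}}{\cos\theta_{1}}\right)^{2} DN_{1} \;+\; b,$$

    where the squared cosine ratio captures the combined effects of atmospheric transmission and non-nadir sun angle, and the intercept b absorbs the scattering-dependent offset (this notation is introduced here for illustration).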

  16. Detecting higher-order wavefront errors with an astigmatic hybrid wavefront sensor.

    PubMed

    Barwick, Shane

    2009-06-01

    The reconstruction of wavefront errors from measurements over subapertures can be made more accurate if a fully characterized quadratic surface can be fitted to the local wavefront surface. An astigmatic hybrid wavefront sensor with added neural network postprocessing is shown to have this capability, provided that the focal image of each subaperture is sufficiently sampled. Furthermore, complete local curvature information is obtained with a single image without splitting beam power.

  17. Autonomous Underwater Vehicle Navigation

    DTIC Science & Technology

    2008-02-01

    ... autonomous underwater vehicle with six degrees of freedom. We approach this problem using an error state formulation of the Kalman filter. ... each position fix, but is this ad-hoc method optimal? Here, we present an approach using an error state formulation of the Kalman filter to provide an ...

  18. Domain-Level Assessment of the Weather Running Estimate-Nowcast (WREN) Model

    DTIC Science & Technology

    2016-11-01

    (Excerpt contains only table-of-contents and figure-caption fragments: "Performance Comparison of 2 WRE–N Configurations"; "Performance Comparison: Dumais WRE–N with FDDA vs. ..."; "Bias and RMSE errors for the 3 grids for Dumais and Passner WRE–N with FDDA for 2-m-AGL TMP (K)"; "Bias and RMSE errors for the 3 grids for Dumais WRE–N with FDDA for 2-m-AGL DPT (K)".)

  19. Basic Studies on High Pressure Air Plasmas

    DTIC Science & Technology

    2006-08-30

    ... Plasma-induced phase shift: two-wavelength heterodyne interferometry applied to atmospheric pressure air plasma; plasma-induced phase shift and electron density. ... a driver, since the error on the frequency leads to an error on the phase shift. ... Protected mirrors must be used to stand ...

  20. Advanced Technology for Portable Personal Visualization

    DTIC Science & Technology

    1991-12-01

    ... VPL Research began in 1989 selling commercially a system that used a glove to control the actions of flying and grabbing ... problem of beacon switching error or its equivalent. Steps we took to control these errors would apply to other ... (3) Ascension Technology Corporation ... Advanced Technology for Portable Personal Visualization: Report of Research Progress, April - December 1991.

  1. A Sensitivity Analysis of Circular Error Probable Approximation Techniques

    DTIC Science & Technology

    1992-03-01

    Sensitivity Analysis of Circular Error Probable Approximation Techniques. Thesis presented to the Faculty of the School of Engineering of the Air Force ... The two most accurate techniques require numerical integration and can take several hours to run on a personal computer [2:1-2,4-6]. ...

  2. Clustering and Recurring Anomaly Identification: Recurring Anomaly Detection System (ReADS)

    NASA Technical Reports Server (NTRS)

    McIntosh, Dawn

    2006-01-01

    This viewgraph presentation reviews the Recurring Anomaly Detection System (ReADS), a tool to analyze text reports such as aviation reports and maintenance records: (1) text clustering algorithms group large quantities of reports and documents, reducing human error and fatigue; (2) the system identifies interconnected reports, automating the discovery of possible recurring anomalies; and (3) it provides a visualization of the clusters and recurring anomalies. We have illustrated our techniques on data from Shuttle and ISS discrepancy reports, as well as ASRS data. ReADS has been integrated with a secure online search

  3. Smithsonian Astrophysical Observatory (SAO) star catalog (Sao staff 1966, edition ADC 1989): Documentation for the machine-readable version

    NASA Technical Reports Server (NTRS)

    Roman, Nancy G.; Warren, Wayne H., Jr.

    1989-01-01

    An updated, corrected, and extended machine readable version of the catalog is described. Published and unpublished errors discovered in the previous version were corrected, and multiple star and supplemental BD identifications were added to stars where more than one SAO entry has the same Durchmusterung number. Henry Draper Extension (HDE) numbers were added for stars found in both volumes of the extension. Data for duplicate SAO entries (those referring to the same star) were flagged. J2000 positions in usual units and in radians were added.

  4. Image registration of naval IR images

    NASA Astrophysics Data System (ADS)

    Rodland, Arne J.

    1996-06-01

    In a real-world application, an image from a stabilized sensor on a moving platform will not be 100 percent stabilized. There will always be a small unknown error in the stabilization due to factors such as dynamic deformations in the structure between the sensor and the reference Inertial Navigation Unit, servo inaccuracies, etc. For a high-resolution imaging sensor, this stabilization error causes the image to move several pixels in an unknown direction between frames. To be able to detect and track small moving objects from such a sensor, this unknown movement of the sensor image must be estimated. An algorithm that searches for land contours in the image has been evaluated. The algorithm searches for high-contrast points distributed over the whole image. As long as moving objects in the scene only cover a small area of the scene, most of the points are located on solid ground. By matching the list of points from frame to frame, the movement of the image due to stabilization errors can be estimated and compensated. The point list is searched for points with movement diverging from the estimated stabilization error. These points are then assumed to be located on moving objects. Points assumed to be located on moving objects are gradually exchanged with new points located in the same area. Most of the processing is performed on the list of points and not on the complete image. The algorithm is therefore very fast and well suited for real-time implementation. The algorithm has been tested on images from an experimental IR scanner. Stabilization errors were added artificially to the images so that the output from the algorithm could be compared with the artificially added stabilization errors.
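
    A minimal sketch of the point-list matching idea described above: estimate the global shift robustly from matched high-contrast points and flag points whose motion diverges from it as candidate moving objects. The threshold and array shapes are illustrative assumptions, not values from the paper.

```python
import numpy as np

def estimate_frame_shift(pts_prev, pts_curr, outlier_thresh=2.0):
    """Estimate the global image shift (stabilization error) from two
    matched point lists of shape (N, 2), and flag points whose motion
    diverges from that shift as candidate moving objects."""
    motion = pts_curr - pts_prev               # per-point displacement
    shift = np.median(motion, axis=0)          # robust global shift estimate
    residual = np.linalg.norm(motion - shift, axis=1)
    moving = residual > outlier_thresh         # points diverging from the shift
    return shift, moving
```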

  5. Entropy-Based TOA Estimation and SVM-Based Ranging Error Mitigation in UWB Ranging Systems

    PubMed Central

    Yin, Zhendong; Cui, Kai; Wu, Zhilu; Yin, Liang

    2015-01-01

    The major challenges for Ultra-wide Band (UWB) indoor ranging systems are the dense multipath and non-line-of-sight (NLOS) problems of the indoor environment. To precisely estimate the time of arrival (TOA) of the first path (FP) in such a poor environment, a novel approach of entropy-based TOA estimation and support vector machine (SVM) regression-based ranging error mitigation is proposed in this paper. The proposed method can estimate the TOA precisely by measuring the randomness of the received signals and mitigate the ranging error without the recognition of the channel conditions. The entropy is used to measure the randomness of the received signals and the FP can be determined by the decision of the sample which is followed by a great entropy decrease. The SVM regression is employed to perform the ranging-error mitigation by the modeling of the regressor between the characteristics of received signals and the ranging error. The presented numerical simulation results show that the proposed approach achieves significant performance improvements in the CM1 to CM4 channels of the IEEE 802.15.4a standard, as compared to conventional approaches. PMID:26007726
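
    A rough illustration of the entropy-based first-path decision, assuming the received waveform is available as a sampled array: compute the Shannon entropy of the amplitude histogram in a sliding window and take the first sample followed by a large entropy decrease. Window length, bin count, and drop threshold are placeholders rather than the paper's settings, and the SVM error-mitigation stage is omitted.

```python
import numpy as np

def sliding_entropy(signal, win=64, bins=16):
    """Shannon entropy of the amplitude histogram in each sliding window."""
    ent = np.empty(len(signal) - win)
    for i in range(len(ent)):
        counts, _ = np.histogram(signal[i:i + win], bins=bins)
        p = counts[counts > 0] / counts.sum()
        ent[i] = -np.sum(p * np.log2(p))
    return ent

def detect_first_path(signal, win=64, drop=1.0):
    """Return the index of the first window followed by an entropy
    decrease larger than `drop` (a candidate first-path arrival)."""
    ent = sliding_entropy(signal, win)
    decreases = -np.diff(ent)
    candidates = np.flatnonzero(decreases > drop)
    return int(candidates[0]) if candidates.size else None
```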

  6. Modeling methodology for MLS range navigation system errors using flight test data

    NASA Technical Reports Server (NTRS)

    Karmali, M. S.; Phatak, A. V.

    1982-01-01

    Flight test data was used to develop a methodology for modeling MLS range navigation system errors. The data used corresponded to the constant velocity and glideslope approach segment of a helicopter landing trajectory. The MLS range measurement was assumed to consist of low frequency and random high frequency components. The random high frequency component was extracted from the MLS range measurements. This was done by appropriate filtering of the range residual generated from a linearization of the range profile for the final approach segment. This range navigation system error was then modeled as an autoregressive moving average (ARMA) process. Maximum likelihood techniques were used to identify the parameters of the ARMA process.
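
    A minimal sketch of the final step, assuming the statsmodels library is acceptable: detrend the measured range against the nominal (linearized) profile and fit an ARMA(p, q) model to the residual by maximum likelihood. The order (2, 1) is an arbitrary placeholder, not the order identified in the study.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA  # assumed available

def fit_range_error_arma(range_measured, range_nominal, p=2, q=1):
    """Fit an ARMA(p, q) model to the high-frequency range residual."""
    residual = np.asarray(range_measured) - np.asarray(range_nominal)
    result = ARIMA(residual, order=(p, 0, q)).fit()  # ARMA = ARIMA with d = 0
    return result  # result.params holds AR, MA and noise-variance estimates
```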

  7. How to derive biological information from the value of the normalization constant in allometric equations.

    PubMed

    Kaitaniemi, Pekka

    2008-04-09

    Allometric equations are widely used in many branches of biological science. The potential information content of the normalization constant b in allometric equations of the form Y = bX^a has, however, remained largely neglected. To demonstrate the potential for utilizing this information, I generated a large number of artificial datasets that resembled those that are frequently encountered in biological studies, i.e., relatively small samples including measurement error or uncontrolled variation. The value of X was allowed to vary randomly within the limits describing different data ranges, and a was set to a fixed theoretical value. The constant b was set to a range of values describing the effect of a continuous environmental variable. In addition, a normally distributed random error was added to the values of both X and Y. Two different approaches were then used to model the data. The traditional approach estimated both a and b using a regression model, whereas an alternative approach set the exponent a at its theoretical value and only estimated the value of b. Both approaches produced virtually the same model fit with less than 0.3% difference in the coefficient of determination. Only the alternative approach was able to precisely reproduce the effect of the environmental variable, which was largely lost among noise variation when using the traditional approach. The results show how the value of b can be used as a source of valuable biological information if an appropriate regression model is selected.
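
    The two fitting approaches compared above can be made concrete with a short least-squares sketch on log-transformed data; this is a generic illustration rather than the exact procedure of the paper.

```python
import numpy as np

def fit_allometric(x, y, fixed_exponent=None):
    """Fit Y = b * X**a on the log-log scale.  If fixed_exponent is given,
    only the normalization constant b is estimated (alternative approach);
    otherwise both a and b are estimated (traditional approach)."""
    lx, ly = np.log(x), np.log(y)
    if fixed_exponent is None:
        a, log_b = np.polyfit(lx, ly, 1)       # slope and intercept both free
    else:
        a = fixed_exponent
        log_b = np.mean(ly - a * lx)           # least-squares intercept, slope fixed
    return a, np.exp(log_b)
```

    With noisy data, fixing the exponent channels the environmental signal entirely into b, which is why the effect survives in the alternative fit but can be diluted when a is also estimated.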

  8. Feasibility of RACT for 3D dose measurement and range verification in a water phantom.

    PubMed

    Alsanea, Fahed; Moskvin, Vadim; Stantz, Keith M

    2015-02-01

    The objective of this study is to establish the feasibility of using radiation-induced acoustics to measure the range and Bragg peak dose from a pulsed proton beam. Simulation studies implementing a prototype scanner design based on computed tomographic methods were performed to investigate the sensitivity to proton range and integral dose. Derived from thermodynamic wave equation, the pressure signals generated from the dose deposited from a pulsed proton beam with a 1 cm lateral beam width and a range of 16, 20, and 27 cm in water using Monte Carlo methods were simulated. The resulting dosimetric images were reconstructed implementing a 3D filtered backprojection algorithm and the pressure signals acquired from a 71-transducer array with a cylindrical geometry (30 × 40 cm) rotated over 2π about its central axis. Dependencies on the detector bandwidth and proton beam pulse width were performed, after which, different noise levels were added to the detector signals (using 1 μs pulse width and a 0.5 MHz cutoff frequency/hydrophone) to investigate the statistical and systematic errors in the proton range (at 20 cm) and Bragg peak dose (of 1 cGy). The reconstructed radioacoustic computed tomographic image intensity was shown to be linearly correlated to the dose within the Bragg peak. And, based on noise dependent studies, a detector sensitivity of 38 mPa was necessary to determine the proton range to within 1.0 mm (full-width at half-maximum) (systematic error < 150 μm) for a 1 cGy Bragg peak dose, where the integral dose within the Bragg peak was measured to within 2%. For existing hydrophone detector sensitivities, a Bragg peak dose of 1.6 cGy is possible. This study demonstrates that computed tomographic scanner based on ionizing radiation-induced acoustics can be used to verify dose distribution and proton range with centi-Gray sensitivity. Realizing this technology into the clinic has the potential to significantly impact beam commissioning, treatment verification during particle beam therapy and image guided techniques.

  9. Modeling Water Temperature in the Yakima River, Washington, from Roza Diversion Dam to Prosser Dam, 2005-06

    USGS Publications Warehouse

    Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.

    2008-01-01

    A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from −1.3 to 1.6 °C. The root mean squared error for the five sites used for testing simulation ranged from 1.6 to 2.2 °C and mean error ranged from 0.1 to 1.3 °C. The accuracy of the stream temperatures estimated by the model is limited by four errors (model error, data error, parameter error, and user error).
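
    For reference, the two skill measures quoted above are straightforward to compute; this generic sketch assumes paired simulated and observed daily maximum temperatures.

```python
import numpy as np

def error_stats(simulated, observed):
    """Root mean squared error and mean error (bias) of model output."""
    diff = np.asarray(simulated) - np.asarray(observed)
    rmse = np.sqrt(np.mean(diff ** 2))
    mean_error = np.mean(diff)
    return rmse, mean_error
```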

  10. Estimating Effects of Multipath Propagation on GPS Signals

    NASA Technical Reports Server (NTRS)

    Byun, Sung; Hajj, George; Young, Lawrence

    2005-01-01

    Multipath Simulator Taking into Account Reflection and Diffraction (MUSTARD) is a computer program that simulates effects of multipath propagation on received Global Positioning System (GPS) signals. MUSTARD is a very efficient means of estimating multipath-induced position and phase errors as functions of time, given the positions and orientations of GPS satellites, the GPS receiver, and any structures near the receiver as functions of time. MUSTARD traces each signal from a GPS satellite to the receiver, accounting for all possible paths the signal can take, including all paths that include reflection and/or diffraction from surfaces of structures near the receiver and on the satellite. Reflection and diffraction are modeled by use of the geometrical theory of diffraction. The multipath signals are added to the direct signal after accounting for the gain of the receiving antenna. Then, in a simulation of a delay-lock tracking loop in the receiver, the multipath-induced range and phase errors as measured by the receiver are estimated. All of these computations are performed for both right circular polarization and left circular polarization of both the L1 (1.57542-GHz) and L2 (1.2276-GHz) GPS signals.

  11. Positioning accuracy in a registration-free CT-based navigation system

    NASA Astrophysics Data System (ADS)

    Brandenberger, D.; Birkfellner, W.; Baumann, B.; Messmer, P.; Huegli, R. W.; Regazzoni, P.; Jacob, A. L.

    2007-12-01

    In order to maintain overall navigation accuracy established by a calibration procedure in our CT-based registration-free navigation system, the CT scanner has to repeatedly generate identical volume images of a target at the same coordinates. We tested the positioning accuracy of the prototype of an advanced workplace for image-guided surgery (AWIGS) which features an operating table capable of direct patient transfer into a CT scanner. Volume images (N = 154) of a specialized phantom were analysed for translational shifting after various table translations. Variables included added weight and phantom position on the table. The navigation system's calibration accuracy was determined (bias 2.1 mm, precision ± 0.7 mm, N = 12). In repeated use, a bias of 3.0 mm and a precision of ± 0.9 mm (N = 10) were maintainable. Instances of translational image shifting were related to the table-to-CT scanner docking mechanism. A distance scaling error when altering the table's height was detected. Initial prototype problems visible in our study causing systematic errors were resolved by repeated system calibrations between interventions. We conclude that the accuracy achieved is sufficient for a wide range of clinical applications in surgery and interventional radiology.
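
    One common convention for bias and precision figures such as those quoted above is the mean and sample standard deviation of the deviation from a known target position; the sketch below assumes that convention and is not taken from the study's protocol.

```python
import numpy as np

def bias_and_precision(measured_positions, true_position):
    """Bias as the mean deviation magnitude from the known target position,
    precision as the sample standard deviation of those magnitudes.
    measured_positions: array of shape (N, 3); true_position: length-3 array."""
    dev = np.linalg.norm(np.asarray(measured_positions) - true_position, axis=1)
    return dev.mean(), dev.std(ddof=1)
```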

  12. Posterior error probability in the Mu-2 Sequential Ranging System

    NASA Technical Reports Server (NTRS)

    Coyle, C. W.

    1981-01-01

    An expression is derived for the posterior error probability in the Mu-2 Sequential Ranging System. An algorithm is developed which closely bounds the exact answer and can be implemented in the machine software. A computer simulation is provided to illustrate the improved level of confidence in a ranging acquisition using this figure of merit as compared to that using only the prior probabilities. In a simulation of 20,000 acquisitions with an experimentally determined threshold setting, the algorithm detected 90% of the actual errors and made false indication of errors on 0.2% of the acquisitions.

  13. An accurate global potential energy surface, dipole moment surface, and rovibrational frequencies for NH3

    NASA Astrophysics Data System (ADS)

    Huang, Xinchuan; Schwenke, David W.; Lee, Timothy J.

    2008-12-01

    A global potential energy surface (PES) that includes short- and long-range terms has been determined for the NH3 molecule. The singles and doubles coupled-cluster method that includes a perturbational estimate of connected triple excitations and the internally contracted averaged coupled-pair functional electronic structure methods have been used in conjunction with very large correlation-consistent basis sets, including diffuse functions. Extrapolation to the one-particle basis set limit was performed and core correlation and scalar relativistic contributions were included directly, while the diagonal Born-Oppenheimer correction was added. Our best purely ab initio PES, denoted "mixed," is constructed from two PESs which differ in whether the ic-ACPF higher-order correlation correction was added or not. Rovibrational transition energies computed from the mixed PES agree well with experiment and the best previous theoretical studies, but most importantly the quality does not deteriorate even up to 10,300 cm-1 above the zero-point energy (ZPE). The mixed PES was improved further by empirical refinement using the most reliable J = 0-2 rovibrational transitions in the HITRAN 2004 database. Agreement between high-resolution experiment and rovibrational transition energies computed from our refined PES for J = 0-6 is excellent. Indeed, the root mean square (rms) error for 13 HITRAN 2004 bands for J = 0-2 is 0.023 cm-1 and that for each band is always ⩽0.06 cm-1. For J = 3-5 the rms error is always ⩽0.15 cm-1. This agreement means that transition energies computed with our refined PES should be useful in the assignment of new high-resolution NH3 spectra and in correcting mistakes in previous assignments. Ideas for further improvements to our refined PES and for extension to other isotopologues are discussed.

  14. Analysis of Performance of Stereoscopic-Vision Software

    NASA Technical Reports Server (NTRS)

    Kim, Won; Ansar, Adnan; Steele, Robert; Steinke, Robert

    2007-01-01

    A team of JPL researchers has analyzed stereoscopic vision software and produced a document describing its performance. This software is of the type used in maneuvering exploratory robotic vehicles on Martian terrain. The software in question utilizes correlations between portions of the images recorded by two electronic cameras to compute stereoscopic disparities, which, in conjunction with camera models, are used in computing distances to terrain points to be included in constructing a three-dimensional model of the terrain. The analysis included effects of correlation- window size, a pyramidal image down-sampling scheme, vertical misalignment, focus, maximum disparity, stereo baseline, and range ripples. Contributions of sub-pixel interpolation, vertical misalignment, and foreshortening to stereo correlation error were examined theoretically and experimentally. It was found that camera-calibration inaccuracy contributes to both down-range and cross-range error but stereo correlation error affects only the down-range error. Experimental data for quantifying the stereo disparity error were obtained by use of reflective metrological targets taped to corners of bricks placed at known positions relative to the cameras. For the particular 1,024-by-768-pixel cameras of the system analyzed, the standard deviation of the down-range disparity error was found to be 0.32 pixel.

  15. Solar Cycle Variability and Surface Differential Rotation from Ca II K-line Time Series Data

    NASA Astrophysics Data System (ADS)

    Scargle, Jeffrey D.; Keil, Stephen L.; Worden, Simon P.

    2013-07-01

    Analysis of over 36 yr of time series data from the NSO/AFRL/Sac Peak K-line monitoring program elucidates 5 components of the variation of the 7 measured chromospheric parameters: (a) the solar cycle (period ~ 11 yr), (b) quasi-periodic variations (periods ~ 100 days), (c) a broadband stochastic process (wide range of periods), (d) rotational modulation, and (e) random observational errors, independent of (a)-(d). Correlation and power spectrum analyses elucidate periodic and aperiodic variation of these parameters. Time-frequency analysis illuminates periodic and quasi-periodic signals, details of frequency modulation due to differential rotation, and in particular elucidates the rather complex harmonic structure (a) and (b) at timescales in the range ~0.1-10 yr. These results using only full-disk data suggest that similar analyses will be useful for detecting and characterizing differential rotation in stars from stellar light curves such as those being produced by NASA's Kepler observatory. Component (c) consists of variations over a range of timescales, in the manner of a 1/f random process with a power-law slope index that varies in a systematic way. A time-dependent Wilson-Bappu effect appears to be present in the solar cycle variations (a), but not in the more rapid variations of the stochastic process (c). Component (d) characterizes differential rotation of the active regions. Component (e) is of course not characteristic of solar variability, but the fact that the observational errors are quite small greatly facilitates the analysis of the other components. The data analyzed in this paper can be found at the National Solar Observatory Web site http://nsosp.nso.edu/cak_mon/, or by file transfer protocol at ftp://ftp.nso.edu/idl/cak.parameters.

  16. SOLAR CYCLE VARIABILITY AND SURFACE DIFFERENTIAL ROTATION FROM Ca II K-LINE TIME SERIES DATA

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Scargle, Jeffrey D.; Worden, Simon P.; Keil, Stephen L.

    Analysis of over 36 yr of time series data from the NSO/AFRL/Sac Peak K-line monitoring program elucidates 5 components of the variation of the 7 measured chromospheric parameters: (a) the solar cycle (period ~ 11 yr), (b) quasi-periodic variations (periods ~ 100 days), (c) a broadband stochastic process (wide range of periods), (d) rotational modulation, and (e) random observational errors, independent of (a)-(d). Correlation and power spectrum analyses elucidate periodic and aperiodic variation of these parameters. Time-frequency analysis illuminates periodic and quasi-periodic signals, details of frequency modulation due to differential rotation, and in particular elucidates the rather complex harmonic structure (a) and (b) at timescales in the range ~0.1-10 yr. These results using only full-disk data suggest that similar analyses will be useful for detecting and characterizing differential rotation in stars from stellar light curves such as those being produced by NASA's Kepler observatory. Component (c) consists of variations over a range of timescales, in the manner of a 1/f random process with a power-law slope index that varies in a systematic way. A time-dependent Wilson-Bappu effect appears to be present in the solar cycle variations (a), but not in the more rapid variations of the stochastic process (c). Component (d) characterizes differential rotation of the active regions. Component (e) is of course not characteristic of solar variability, but the fact that the observational errors are quite small greatly facilitates the analysis of the other components. The data analyzed in this paper can be found at the National Solar Observatory Web site http://nsosp.nso.edu/cak_mon/, or by file transfer protocol at ftp://ftp.nso.edu/idl/cak.parameters.

  17. Simultaneous analysis of cerebrospinal fluid biomarkers using microsphere-based xMAP multiplex technology for early detection of Alzheimer's disease.

    PubMed

    Kang, Ju-Hee; Vanderstichele, Hugo; Trojanowski, John Q; Shaw, Leslie M

    2012-04-01

    The xMAP-Luminex multiplex platform for measurement of Alzheimer's disease (AD) cerebrospinal fluid (CSF) biomarkers using Innogenetics AlzBio3 immunoassay reagents that are for research use only has been shown to be an effective tool for early detection of an AD-like biomarker signature based on concentrations of CSF Aβ(1-42), t-tau and p-tau(181). Among the several advantages of the xMAP-Luminex platform for AD CSF biomarkers are: a wide dynamic range of ready-to-use calibrators, time savings for the simultaneous analyses of three biomarkers in one analytical run, reduction of human error, potential of reduced cost of reagents, and a modest reduction of sample volume as compared to conventional enzyme-linked immunosorbent assay (ELISA) methodology. Recent clinical studies support the use of CSF Aβ(1-42), t-tau and p-tau(181) measurement using the xMAP-Luminex platform for the early detection of AD pathology in cognitively normal individuals, and for prediction of progression to AD dementia in subjects with mild cognitive impairment (MCI). Studies that have shown the prediction of risk for progression to AD dementia in MCI patients provide the basis for the use of CSF Aβ(1-42), t-tau and p-tau(181) testing to assign risk for progression in patients enrolled in therapeutic trials. Furthermore, emerging study data suggest that these pathologic changes occur in cognitively normal subjects 20 or more years before the onset of clinically detectable memory changes, thus providing an objective measurement for use in the assessment of treatment effects in primary treatment trials. However, numerous previous ELISA and Luminex-based multiplex studies reported a wide range of absolute values of CSF Aβ(1-42), t-tau and p-tau(181) indicative of substantial inter-laboratory variability as well as varying degrees of intra-laboratory imprecision. In order to address these issues, a recent inter-laboratory investigation that included a common set of CSF pool aliquots from controls as well as AD patients over a range of normal and pathological Aβ(1-42), t-tau and p-tau(181) values as well as agreed-on standard operating procedures (SOPs) assessed the reproducibility of the multiplex methodology and Innogenetics AlzBio3 immunoassay reagents. This study showed within-center precision values of 5% to a little more than 10% and good inter-laboratory %CV values (10-20%). There are several likely factors influencing the variability of CSF Aβ(1-42), t-tau and p-tau(181) measurements. In this review, we describe the pre-analytical, analytical and post-analytical sources of variability including sources inherent to kits, and describe procedures to decrease the variability. A CSF AD biomarker Quality Control program has been established and funded by the Alzheimer Association, and global efforts are underway to further define optimal pre-analytical SOPs and best practices for the methodologies available or in development including plans for production of a standard reference material that could provide for a common standard against which manufacturers of immunoassay kits would assign calibration standard values. Copyright © 2012 Elsevier Inc. All rights reserved.

  18. Analysis of a range estimator which uses MLS angle measurements

    NASA Technical Reports Server (NTRS)

    Downing, David R.; Linse, Dennis

    1987-01-01

    A concept that uses the azimuth signal from a microwave landing system (MLS) combined with onboard airspeed and heading data to estimate the horizontal range to the runway threshold is investigated. The absolute range error is evaluated for trajectories typical of General Aviation (GA) and commercial airline operations (CAO). These include constant intercept angles for GA and CAO, and complex curved trajectories for CAO. It is found that range errors of 4000 to 6000 feet at the entry of MLS coverage which then reduce to 1000-foot errors at runway centerline intercept are possible for GA operations. For CAO, errors at entry into MLS coverage of 2000 feet which reduce to 300 feet at runway centerline interception are possible.
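
    One simple geometric way to see how azimuth, airspeed, and heading can yield a range estimate (an illustration only, not the estimator analyzed in the paper): for a nearly constant-velocity aircraft, the azimuth rate equals the cross line-of-sight velocity divided by range, so range can be recovered from their ratio. All argument names below are hypothetical.

```python
import numpy as np

def range_from_azimuth_rate(airspeed, heading_deg, azimuth_deg, dt):
    """Illustrative bearings-rate range estimate: the component of velocity
    perpendicular to the line of sight divided by the azimuth rate
    approximates the horizontal range.  azimuth_deg is a time series of
    MLS azimuth readings sampled every dt seconds."""
    az = np.radians(azimuth_deg)
    az_rate = np.gradient(az, dt)                       # rad/s
    # velocity component perpendicular to the line of sight
    v_perp = airspeed * np.abs(np.sin(np.radians(heading_deg) - az))
    with np.errstate(divide="ignore", invalid="ignore"):
        rng = np.where(np.abs(az_rate) > 1e-6, v_perp / np.abs(az_rate), np.nan)
    return rng
```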

  19. Research on Measurement Accuracy of Laser Tracking System Based on Spherical Mirror with Rotation Errors of Gimbal Mount Axes

    NASA Astrophysics Data System (ADS)

    Shi, Zhaoyao; Song, Huixu; Chen, Hongfang; Sun, Yanqiang

    2018-02-01

    This paper presents a novel experimental approach for confirming that spherical mirror of a laser tracking system can reduce the influences of rotation errors of gimbal mount axes on the measurement accuracy. By simplifying the optical system model of laser tracking system based on spherical mirror, we can easily extract the laser ranging measurement error caused by rotation errors of gimbal mount axes with the positions of spherical mirror, biconvex lens, cat's eye reflector, and measuring beam. The motions of polarization beam splitter and biconvex lens along the optical axis and vertical direction of optical axis are driven by error motions of gimbal mount axes. In order to simplify the experimental process, the motion of biconvex lens is substituted by the motion of spherical mirror according to the principle of relative motion. The laser ranging measurement error caused by the rotation errors of gimbal mount axes could be recorded in the readings of laser interferometer. The experimental results showed that the laser ranging measurement error caused by rotation errors was less than 0.1 μm if radial error motion and axial error motion were within ±10 μm. The experimental method simplified the experimental procedure and the spherical mirror could reduce the influences of rotation errors of gimbal mount axes on the measurement accuracy of the laser tracking system.

  20. The State Geologic Map Compilation (SGMC) geodatabase of the conterminous United States

    USGS Publications Warehouse

    Horton, John D.; San Juan, Carma A.; Stoeser, Douglas B.

    2017-06-30

    The State Geologic Map Compilation (SGMC) geodatabase of the conterminous United States (https://doi.org/10.5066/F7WH2N65) represents a seamless, spatial database of 48 State geologic maps that range from 1:50,000 to 1:1,000,000 scale. A national digital geologic map database is essential in interpreting other datasets that support numerous types of national-scale studies and assessments, such as those that provide geochemistry, remote sensing, or geophysical data. The SGMC is a compilation of the individual U.S. Geological Survey releases of the Preliminary Integrated Geologic Map Databases for the United States. The SGMC geodatabase also contains updated data for seven States and seven entirely new State geologic maps that have been added since the preliminary databases were published. Numerous errors have been corrected and enhancements added to the preliminary datasets using thorough quality assurance/quality control procedures. The SGMC is not a truly integrated geologic map database because geologic units have not been reconciled across State boundaries. However, the geologic data contained in each State geologic map have been standardized to allow spatial analyses of lithology, age, and stratigraphy at a national scale.

  1. Extended Range Prediction of Indian Summer Monsoon: Current status

    NASA Astrophysics Data System (ADS)

    Sahai, A. K.; Abhilash, S.; Borah, N.; Joseph, S.; Chattopadhyay, R.; S, S.; Rajeevan, M.; Mandal, R.; Dey, A.

    2014-12-01

    The main focus of this study is to develop a forecast consensus in the extended range prediction (ERP) of monsoon intraseasonal oscillations using a suite of different variants of the Climate Forecast System (CFS) model. In this CFS-based Grand MME prediction system (CGMME), the ensemble members are generated by perturbing the initial condition and using different configurations of CFSv2. This is to address the role of different physical mechanisms known to have control on the error growth in the ERP at the 15-20 day time scale. The final formulation of CGMME is based on 21 ensembles of the standalone Global Forecast System (GFS) forced with bias-corrected forecasted SST from CFS, 11 low-resolution CFST126 and 11 high-resolution CFST382. Thus, we develop the multi-model consensus forecast for the ERP of the Indian summer monsoon (ISM) using a suite of different variants of the CFS model. This coordinated international effort led towards the development of specific tailor-made regional forecast products over the Indian region. The skill of deterministic and probabilistic categorical rainfall forecasts, as well as the verification of large-scale low-frequency monsoon intraseasonal oscillations, has been assessed using hindcasts from 2001-2012 during the monsoon season, in which all models are initialized every five days from 16 May to 28 September. The skill of the deterministic forecast from CGMME is better than the best participating single model ensemble configuration (SME). The CGMME approach is believed to quantify the uncertainty in both initial conditions and model formulation. The main improvement is attained in the probabilistic forecast, which is because of an increase in the ensemble spread, thereby reducing the error due to over-confident ensembles in a single model configuration. For the probabilistic forecast, three tercile ranges are determined by a ranking method based on the percentage of ensemble members from all the participating models falling in those three categories. CGMME further adds value to both deterministic and probabilistic forecasts compared to raw SMEs, and this better skill probably flows from the larger spread and improved spread-error relationship. The CGMME system is currently capable of generating ER predictions in real time and has successfully delivered its experimental operational ER forecast of the ISM for the last few years.
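
    The tercile-based probabilistic product described above can be illustrated with a short sketch: count the fraction of pooled ensemble members that fall in the below-normal, near-normal, and above-normal categories defined by climatological terciles. The inputs and boundaries here are generic placeholders, not the operational CGMME procedure.

```python
import numpy as np

def tercile_probabilities(ensemble_rain, climatology):
    """Probabilistic tercile forecast: fraction of ensemble members below,
    between, and above the lower and upper terciles of a climatological
    rainfall sample."""
    lo, hi = np.percentile(climatology, [100 / 3, 200 / 3])
    members = np.asarray(ensemble_rain)
    p_below = np.mean(members < lo)
    p_above = np.mean(members > hi)
    p_normal = 1.0 - p_below - p_above
    return p_below, p_normal, p_above
```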

  2. Fine-mapping of the human leukocyte antigen locus as a risk factor for Alzheimer disease: A case–control study

    PubMed Central

    Steele, Natasha Z. R.; Geier, Ethan G.; Damotte, Vincent; Boehme, Kevin L.; Mukherjee, Shubhabrata; Crane, Paul K.; Kauwe, John S. K.; Kramer, Joel H.; Miller, Bruce L.; Hollenbach, Jill A.; Huang, Yadong

    2017-01-01

    Background Alzheimer disease (AD) is a progressive disorder that affects cognitive function. There is increasing support for the role of neuroinflammation and aberrant immune regulation in the pathophysiology of AD. The immunoregulatory human leukocyte antigen (HLA) complex has been linked to susceptibility for a number of neurodegenerative diseases, including AD; however, studies to date have failed to consistently identify a risk HLA haplotype for AD. Contributing to this difficulty are the complex genetic organization of the HLA region, differences in sequencing and allelic imputation methods, and diversity across ethnic populations. Methods and findings Building on prior work linking the HLA to AD, we used a robust imputation method on two separate case–control cohorts to examine the relationship between HLA haplotypes and AD risk in 309 individuals (191 AD, 118 cognitively normal [CN] controls) from the San Francisco-based University of California, San Francisco (UCSF) Memory and Aging Center (collected between 1999–2015) and 11,381 individuals (5,728 AD, 5,653 CN controls) from the Alzheimer’s Disease Genetics Consortium (ADGC), a National Institute on Aging (NIA)-funded national data repository (reflecting samples collected between 1984–2012). We also examined cerebrospinal fluid (CSF) biomarker measures for patients seen between 2005–2007 and longitudinal cognitive data from the Alzheimer’s Disease Neuroimaging Initiative (n = 346, mean follow-up 3.15 ± 2.04 y in AD individuals) to assess the clinical relevance of identified risk haplotypes. The strongest association with AD risk occurred with major histocompatibility complex (MHC) haplotype A*03:01~B*07:02~DRB1*15:01~DQA1*01:02~DQB1*06:02 (p = 9.6 x 10−4, odds ratio [OR] [95% confidence interval] = 1.21 [1.08–1.37]) in the combined UCSF + ADGC cohort. Secondary analysis suggested that this effect may be driven primarily by individuals who are negative for the established AD genetic risk factor, apolipoprotein E (APOE) ɛ4. Separate analyses of class I and II haplotypes further supported the role of class I haplotype A*03:01~B*07:02 (p = 0.03, OR = 1.11 [1.01–1.23]) and class II haplotype DRB1*15:01- DQA1*01:02- DQB1*06:02 (DR15) (p = 0.03, OR = 1.08 [1.01–1.15]) as risk factors for AD. We followed up these findings in the clinical dataset representing the spectrum of cognitively normal controls, individuals with mild cognitive impairment, and individuals with AD to assess their relevance to disease. Carrying A*03:01~B*07:02 was associated with higher CSF amyloid levels (p = 0.03, β ± standard error = 47.19 ± 21.78). We also found a dose-dependent association between the DR15 haplotype and greater rates of cognitive decline (greater impairment on the 11-item Alzheimer’s Disease Assessment Scale cognitive subscale [ADAS11] over time [p = 0.03, β ± standard error = 0.7 ± 0.3]; worse forgetting score on the Rey Auditory Verbal Learning Test (RAVLT) over time [p = 0.02, β ± standard error = −0.2 ± 0.06]). In a subset of the same cohort, dose of DR15 was also associated with higher baseline levels of chemokine CC-4, a biomarker of inflammation (p = 0.005, β ± standard error = 0.08 ± 0.03). The main study limitations are that the results represent only individuals of European-ancestry and clinically diagnosed individuals, and that our study used imputed genotypes for a subset of HLA genes. Conclusions We provide evidence that variation in the HLA locus—including risk haplotype DR15—contributes to AD risk. 
DR15 has also been associated with multiple sclerosis, and its component alleles have been implicated in Parkinson disease and narcolepsy. Our findings thus raise the possibility that DR15-associated mechanisms may contribute to pan-neuronal disease vulnerability. PMID:28350795

  3. The Origin of Systematic Errors in the GCM Simulation of ITCZ Precipitation

    NASA Technical Reports Server (NTRS)

    Chao, Winston C.; Suarez, M. J.; Bacmeister, J. T.; Chen, B.; Takacs, L. L.

    2006-01-01

    Previous GCM studies have found that the systematic errors in the GCM simulation of the seasonal mean ITCZ intensity and location could be substantially corrected by adding a suitable amount of rain re-evaporation or cumulus momentum transport. However, the reasons for these systematic errors, and why these remedies work, have remained a puzzle. In this work the knowledge gained from previous studies of the ITCZ in an aqua-planet model with zonally uniform SST is applied to solve this puzzle. The solution is supported by further aqua-planet and full model experiments using the latest version of the Goddard Earth Observing System GCM.

  4. Uncertainty Quantification of Multi-Phase Closures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nadiga, Balasubramanya T.; Baglietto, Emilio

    In the ensemble-averaged dispersed-phase formulation used for CFD of multiphase flows in nuclear reactor thermohydraulics, closures of interphase transfer of mass, momentum, and energy constitute, by far, the biggest source of error and uncertainty. Reliable estimators of this source of error and uncertainty are currently non-existent. Here, we report on how modern Validation and Uncertainty Quantification (VUQ) techniques can be leveraged to not only quantify such errors and uncertainties, but also to uncover (unintended) interactions between closures of different phenomena. As such, this approach serves as a valuable aid in the research and development of multiphase closures. The joint modeling of lift, drag, wall lubrication, and turbulent dispersion (forces that lead to transfer of momentum between the liquid and gas phases) is examined in the framework of validation against the adiabatic but turbulent experiments of Liu and Bankoff, 1993. An extensive calibration study is undertaken with a popular combination of closure relations and the popular k-ϵ turbulence model in a Bayesian framework. When a wide range of superficial liquid and gas velocities and void fractions is considered, it is found that this set of closures can be validated against the experimental data only by allowing large variations in the coefficients associated with the closures. We argue that such an extent of variation is a measure of uncertainty induced by the chosen set of closures. We also find that while mean fluid velocity and void fraction profiles are properly fit, fluctuating fluid velocity may or may not be properly fit. This aspect needs to be investigated further. The popular set of closures considered contains ad-hoc components and is undesirable from a predictive modeling point of view. Consequently, we next consider improvements that are being developed by the MIT group under CASL and which remove the ad-hoc elements. We use non-intrusive methodologies for sensitivity analysis and calibration (using Dakota) to study sensitivities of the CFD representation (STARCCM+) of fluid velocity profiles and void fraction profiles in the context of the Shaver and Podowski, 2015 correction to lift, and the Lubchenko et al., 2017 formulation of wall lubrication.

  5. Mitigating TCP Degradation over Intermittent Link Failures using Intermediate Buffers

    DTIC Science & Technology

    2006-06-01

    signal strength [10]. The Preemptive routing in ad hoc networks [10] attempts to predict that a route will fail by looking at the signal power of the...when the error rate is high there are non-optimal backoffs in the Retransmission Timeout. And third, in the high error situation the slow start...network storage follows. In Beck et al. [3], Logistical Networking is outlined as a means of storing data throughout the network. End to end

  6. Virtual Sensors for On-line Wheel Wear and Part Roughness Measurement in the Grinding Process

    PubMed Central

    Arriandiaga, Ander; Portillo, Eva; Sánchez, Jose A.; Cabanes, Itziar; Pombo, Iñigo

    2014-01-01

    Grinding is an advanced machining process for the manufacturing of valuable complex and accurate parts for high added value sectors such as aerospace, wind generation, etc. Due to the extremely severe conditions inside grinding machines, critical process variables such as part surface finish or grinding wheel wear cannot be easily and cheaply measured on-line. In this paper a virtual sensor for on-line monitoring of those variables is presented. The sensor is based on the modelling ability of Artificial Neural Networks (ANNs) for stochastic and non-linear processes such as grinding; the selected architecture is the Layer-Recurrent neural network. The sensor makes use of the relation between the variables to be measured and power consumption in the wheel spindle, which can be easily measured. A sensor calibration methodology is presented, and the levels of error that can be expected are discussed. Validation of the new sensor is carried out by comparing the sensor's results with actual measurements carried out in an industrial grinding machine. Results show excellent estimation performance for both wheel wear and surface roughness. In the case of wheel wear, the absolute error is within the range of microns (average value 32 μm). In the case of surface finish, the absolute error is well below Ra 1 μm (average value 0.32 μm). The present approach can be easily generalized to other grinding operations. PMID:24854055

  7. Effective Prediction of Errors by Non-native Speakers Using Decision Tree for Speech Recognition-Based CALL System

    NASA Astrophysics Data System (ADS)

    Wang, Hongcui; Kawahara, Tatsuya

    CALL (Computer Assisted Language Learning) systems using ASR (Automatic Speech Recognition) for second language learning have received increasing interest recently. However, it still remains a challenge to achieve high speech recognition performance, including accurate detection of erroneous utterances by non-native speakers. Conventionally, possible error patterns, based on linguistic knowledge, are added to the lexicon and language model, or the ASR grammar network. However, this approach easily falls in the trade-off of coverage of errors and the increase of perplexity. To solve the problem, we propose a method based on a decision tree to learn effective prediction of errors made by non-native speakers. An experimental evaluation with a number of foreign students learning Japanese shows that the proposed method can effectively generate an ASR grammar network, given a target sentence, to achieve both better coverage of errors and smaller perplexity, resulting in significant improvement in ASR accuracy.
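
    A toy sketch of the idea, using scikit-learn: train a decision tree on per-word features from past learner sessions and add alternative (erroneous) pronunciations to the ASR grammar network only for words predicted to be error-prone, keeping perplexity low while still covering likely mistakes. The features and labels below are invented for illustration and are not those used in the paper.

```python
from sklearn.tree import DecisionTreeClassifier

# Hypothetical training data: each row describes a word in a target sentence
# (e.g. part-of-speech id, phoneme difficulty score, word length), and the
# label marks whether learners mispronounced it in past sessions.
X_train = [[0, 3, 4], [1, 1, 2], [2, 5, 7], [0, 2, 3], [1, 4, 6]]
y_train = [1, 0, 1, 0, 1]

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

# For a new target sentence, only words predicted as error-prone would get
# erroneous pronunciation variants added to the grammar network.
X_new = [[2, 4, 5], [1, 1, 3]]
print(tree.predict(X_new))
```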

  8. Generalized Structured Component Analysis with Uniqueness Terms for Accommodating Measurement Error

    PubMed Central

    Hwang, Heungsun; Takane, Yoshio; Jung, Kwanghee

    2017-01-01

    Generalized structured component analysis (GSCA) is a component-based approach to structural equation modeling (SEM), where latent variables are approximated by weighted composites of indicators. It has no formal mechanism to incorporate errors in indicators, which in turn renders components prone to the errors as well. We propose to extend GSCA to account for errors in indicators explicitly. This extension, called GSCAM, considers both common and unique parts of indicators, as postulated in common factor analysis, and estimates a weighted composite of indicators with their unique parts removed. Adding such unique parts or uniqueness terms serves to account for measurement errors in indicators in a manner similar to common factor analysis. Simulation studies are conducted to compare parameter recovery of GSCAM and existing methods. These methods are also applied to fit a substantively well-established model to real data. PMID:29270146

  9. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study

    PubMed Central

    Hosseinyalamdary, Siavash

    2018-01-01

    Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as the Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy. PMID:29695119
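
    A minimal one-dimensional illustration of inserting a "modelling" step between the prediction and update steps, assuming the IMU error can be summarized by a slowly varying bias learned from the GNSS innovations. The paper's actual model is richer than this; the sketch only shows where the extra step sits in the filter loop.

```python
class KalmanWithErrorLearning:
    """1-D Kalman filter with a learned IMU bias removed before prediction.
    A sketch only, not the network-based model described in the paper."""

    def __init__(self, q=0.01, r=1.0):
        self.x, self.P = 0.0, 1.0     # state estimate and its variance
        self.q, self.r = q, r         # process and measurement noise variances
        self.bias, self.n = 0.0, 0    # learned IMU error model (running mean)

    def step(self, imu_increment, gnss_obs):
        # modelling: subtract the learned IMU error before the prediction
        self.x += imu_increment - self.bias
        self.P += self.q
        # update with the GNSS observation
        innov = gnss_obs - self.x
        k = self.P / (self.P + self.r)
        self.x += k * innov
        self.P *= (1.0 - k)
        # learn: refresh the IMU bias estimate from the innovation sequence
        self.n += 1
        self.bias += (innov - self.bias) / self.n
        return self.x
```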

  10. Detecting and Characterizing Semantic Inconsistencies in Ported Code

    NASA Technical Reports Server (NTRS)

    Ray, Baishakhi; Kim, Miryung; Person, Suzette J.; Rungta, Neha

    2013-01-01

    Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.

  11. Detecting and Characterizing Semantic Inconsistencies in Ported Code

    NASA Technical Reports Server (NTRS)

    Ray, Baishakhi; Kim, Miryung; Person,Suzette; Rungta, Neha

    2013-01-01

    Adding similar features and bug fixes often requires porting program patches from reference implementations and adapting them to target implementations. Porting errors may result from faulty adaptations or inconsistent updates. This paper investigates (1) the types of porting errors found in practice, and (2) how to detect and characterize potential porting errors. Analyzing version histories, we define five categories of porting errors, including incorrect control- and data-flow, code redundancy, inconsistent identifier renamings, etc. Leveraging this categorization, we design a static control- and data-dependence analysis technique, SPA, to detect and characterize porting inconsistencies. Our evaluation on code from four open-source projects shows that SPA can detect porting inconsistencies with 65% to 73% precision and 90% recall, and identify inconsistency types with 58% to 63% precision and 92% to 100% recall. In a comparison with two existing error detection tools, SPA improves precision by 14 to 17 percentage points.

  12. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study.

    PubMed

    Hosseinyalamdary, Siavash

    2018-04-24

    Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as the Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations has remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy.

  13. A Sensor Dynamic Measurement Error Prediction Model Based on NAPSO-SVM

    PubMed Central

    Jiang, Minlan; Jiang, Lan; Jiang, Dingde; Li, Fei

    2018-01-01

    Dynamic measurement error correction is an effective way to improve sensor precision. Dynamic measurement error prediction is an important part of error correction, and support vector machine (SVM) is often used for predicting the dynamic measurement errors of sensors. Traditionally, the SVM parameters were always set manually, which cannot ensure the model's performance. In this paper, a SVM method based on an improved particle swarm optimization (NAPSO) is proposed to predict the dynamic measurement errors of sensors. Natural selection and simulated annealing are added in the PSO to raise the ability to avoid local optima. To verify the performance of NAPSO-SVM, three types of algorithms are selected to optimize the SVM's parameters: the particle swarm optimization algorithm (PSO), the improved PSO optimization algorithm (NAPSO), and the glowworm swarm optimization (GSO). The dynamic measurement error data of two sensors are applied as the test data. The root mean squared error and mean absolute percentage error are employed to evaluate the prediction models' performances. The experimental results show that among the three tested algorithms the NAPSO-SVM method has better prediction precision and smaller prediction errors, and it is an effective method for predicting the dynamic measurement errors of sensors. PMID:29342942
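
    A compact sketch of the underlying idea, assuming scikit-learn is available: a plain particle swarm searches over log10(C) and log10(gamma) of an RBF-kernel SVR, scored by cross-validated RMSE. The natural-selection and simulated-annealing refinements that distinguish NAPSO are omitted, and all bounds and coefficients below are illustrative choices.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import cross_val_score

def pso_tune_svr(X, y, n_particles=10, n_iter=20, seed=0):
    """Plain PSO over (log10 C, log10 gamma) minimizing cross-validated RMSE."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array([-2.0, -4.0]), np.array([3.0, 1.0])   # search box
    pos = rng.uniform(lo, hi, (n_particles, 2))
    vel = np.zeros_like(pos)

    def cost(p):
        model = SVR(kernel="rbf", C=10 ** p[0], gamma=10 ** p[1])
        scores = cross_val_score(model, X, y, cv=3,
                                 scoring="neg_root_mean_squared_error")
        return -scores.mean()

    pbest, pbest_cost = pos.copy(), np.array([cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        costs = np.array([cost(p) for p in pos])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], costs[better]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return 10 ** gbest[0], 10 ** gbest[1]   # best (C, gamma)
```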

  14. Animal movement constraints improve resource selection inference in the presence of telemetry error

    USGS Publications Warehouse

    Brost, Brian M.; Hooten, Mevin B.; Hanks, Ephraim M.; Small, Robert J.

    2016-01-01

    Multiple factors complicate the analysis of animal telemetry location data. Recent advancements address issues such as temporal autocorrelation and telemetry measurement error, but additional challenges remain. Difficulties introduced by complicated error structures or barriers to animal movement can weaken inference. We propose an approach for obtaining resource selection inference from animal location data that accounts for complicated error structures, movement constraints, and temporally autocorrelated observations. We specify a model for telemetry data observed with error conditional on unobserved true locations that reflects prior knowledge about constraints in the animal movement process. The observed telemetry data are modeled using a flexible distribution that accommodates extreme errors and complicated error structures. Although constraints to movement are often viewed as a nuisance, we use constraints to simultaneously estimate and account for telemetry error. We apply the model to simulated data, showing that it outperforms common ad hoc approaches used when confronted with measurement error and movement constraints. We then apply our framework to an Argos satellite telemetry data set on harbor seals (Phoca vitulina) in the Gulf of Alaska, a species that is constrained to move within the marine environment and adjacent coastlines.

  15. Towards Automated Structure-Based NMR Resonance Assignment

    NASA Astrophysics Data System (ADS)

    Jang, Richard; Gao, Xin; Li, Ming

    We propose a general framework for solving the structure-based NMR backbone resonance assignment problem. The core is a novel 0-1 integer programming model that can start from a complete or partial assignment, generate multiple assignments, and model not only the assignment of spins to residues, but also pairwise dependencies consisting of pairs of spins to pairs of residues. It is still a challenge for automated resonance assignment systems to perform the assignment directly from spectra without any manual intervention. To test the feasibility of this for structure-based assignment, we integrated our system with our automated peak picking and sequence-based resonance assignment system to obtain an assignment for the protein TM1112 with 91% recall and 99% precision without manual intervention. Since using a known structure has the potential to allow one to use only N-labeled NMR data and avoid the added expense of using C-labeled data, we work towards the goal of automated structure-based assignment using only such labeled data. Our system reduced the assignment error of Xiong-Pandurangan-Bailey-Kellogg's contact replacement (CR) method, which to our knowledge is the most error-tolerant method for this problem, by 5 folds on average. By using an iterative algorithm, our system has the added capability of using the NOESY data to correct assignment errors due to errors in predicting the amino acid and secondary structure type of each spin system. On a publicly available data set for Ubiquitin, where the type prediction accuracy is 83%, we achieved 91% assignment accuracy, compared to the 59% accuracy that was obtained without correcting for typing errors.

  16. Impact of geophysical model error for recovering temporal gravity field model

    NASA Astrophysics Data System (ADS)

    Zhou, Hao; Luo, Zhicai; Wu, Yihao; Li, Qiong; Xu, Chuang

    2016-07-01

    The impact of geophysical model error on recovered temporal gravity field models with both real and simulated GRACE observations is assessed in this paper. With real GRACE observations, we build four temporal gravity field models, i.e., HUST08a, HUST11a, HUST04 and HUST05. HUST08a and HUST11a are derived from different ocean tide models (EOT08a and EOT11a), while HUST04 and HUST05 are derived from different non-tidal models (AOD RL04 and AOD RL05). The statistical result shows that the discrepancies of the annual mass variability amplitudes in six river basins between the HUST08a and HUST11a models, and between the HUST04 and HUST05 models, are all smaller than 1 cm, which demonstrates that geophysical model error slightly affects the current GRACE solutions. The impact of geophysical model error for future missions with more accurate satellite ranging is also assessed by simulation. The simulation results indicate that for the current mission with a range rate accuracy of 2.5 × 10⁻⁷ m/s, observation error is the main reason for stripe error. However, when the range rate accuracy improves to 5.0 × 10⁻⁸ m/s in the future mission, geophysical model error will be the main source of stripe error, which will limit the accuracy and spatial resolution of the temporal gravity model. Therefore, observation error should be the primary error source taken into account at the current range rate accuracy level, while more attention should be paid to improving the accuracy of background geophysical models for the future mission.

  17. A New Design of the Test Rig to Measure the Transmission Error of Automobile Gearbox

    NASA Astrophysics Data System (ADS)

    Hou, Yixuan; Zhou, Xiaoqin; He, Xiuzhi; Liu, Zufei; Liu, Qiang

    2017-12-01

    Noise and vibration affect the performance of an automobile gearbox, and transmission error has been regarded as an important excitation source in the gear system. Most current research is focused on the measurement and analysis of a single gear drive, and few investigations of transmission error measurement in a complete gearbox have been conducted. In order to measure transmission error in a complete automobile gearbox, an electrically closed test rig is developed. Based on the principle of modular design, the test rig can be used to test different types of gearbox by adding the necessary modules. The test rig for a front-engine, rear-wheel-drive gearbox is constructed, and static and modal analysis methods are used to verify the performance of a key component.

  18. Comparison of in-situ delay monitors for use in Adaptive Voltage Scaling

    NASA Astrophysics Data System (ADS)

    Pour Aryan, N.; Heiß, L.; Schmitt-Landsiedel, D.; Georgakos, G.; Wirnshofer, M.

    2012-09-01

    In Adaptive Voltage Scaling (AVS) the supply voltage of digital circuits is tuned according to the circuit's actual operating condition, which enables dynamic compensation to PVTA variations. By exploiting the excessive safety margins added in state-of-the-art worst-case designs considerable power saving is achieved. In our approach, the operating condition of the circuit is monitored by in-situ delay monitors. This paper presents different designs to implement the in-situ delay monitors capable of detecting late but still non-erroneous transitions, called Pre-Errors. The developed Pre-Error monitors are integrated in a 16 bit multiplier test circuit and the resulting Pre-Error AVS system is modeled by a Markov chain in order to determine the power saving potential of each Pre-Error detection approach.

  19. The Performance of a PN Spread Spectrum Receiver Preceded by an Adaptive Interference Suppression Filter.

    DTIC Science & Technology

    1982-12-01

    Sequence dj Estimate of the Desired Signal DEL Sampling Time Interval DS Direct Sequence c Sufficient Statistic E/T Signal Power Erfc Complementary Error...Namely, a white Gaussian noise (WGN) generator was added. Also, a statistical subroutine was added in order to assess performance improvement at the...reference code and then passed through a correlation detector whose output is the sufficient statistic, e. Using a threshold device and the sufficient

  20. Statistical design and analysis for plant cover studies with multiple sources of observation errors

    USGS Publications Warehouse

    Wright, Wilson; Irvine, Kathryn M.; Warren, Jeffrey M.; Barnett, Jenny K.

    2017-01-01

    Effective wildlife habitat management and conservation require understanding the factors influencing the distribution and abundance of plant species. Field studies, however, have documented observation errors in visually estimated plant cover, including measurements which differ from the true value (measurement error) and not observing a species that is present within a plot (detection error). Unlike the rapid expansion of occupancy and N-mixture models for analysing wildlife surveys, development of statistical models accounting for observation error in plants has not progressed quickly. Our work informs development of a monitoring protocol for managed wetlands within the National Wildlife Refuge System. Zero-augmented beta (ZAB) regression is the most suitable method for analysing areal plant cover recorded as a continuous proportion but assumes no observation errors. We present a model extension that explicitly includes the observation process, thereby accounting for both measurement and detection errors. Using simulations, we compare our approach to a ZAB regression that ignores observation errors (naïve model) and an “ad hoc” approach using a composite of multiple observations per plot within the naïve model. We explore how sample size and within-season revisit design affect the ability to detect a change in mean plant cover between 2 years using our model. Explicitly modelling the observation process within our framework produced unbiased estimates and nominal coverage of model parameters. The naïve and “ad hoc” approaches resulted in underestimation of occurrence and overestimation of mean cover. The degree of bias was primarily driven by imperfect detection and its relationship with cover within a plot. Conversely, measurement error had minimal impacts on inferences. We found that >30 plots with at least three within-season revisits achieved reasonable posterior probabilities for assessing change in mean plant cover. For rapid adoption and application, code for Bayesian estimation of our single-species ZAB-with-errors model is included. Practitioners utilizing our R-based simulation code can explore trade-offs among different survey efforts and parameter values, as we did, but tuned to their own investigation. Less abundant plant species of high ecological interest may warrant the additional cost of gathering multiple independent observations in order to guard against erroneous conclusions.
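
    As a rough illustration of the observation-error problem described above (not the authors' R or Bayesian estimation code), the following Python sketch simulates zero-augmented beta cover data with detection and measurement error and shows how naïve summaries that ignore the observation process underestimate occurrence and overestimate mean cover. All parameter values, the logistic detection model, and the beta measurement-error precision are assumptions of the sketch.

        import numpy as np

        rng = np.random.default_rng(42)
        n_plots, n_visits = 200, 3

        psi = 0.6                      # true occurrence probability
        mu, phi = 0.3, 10.0            # true mean cover and beta precision
        occupied = rng.random(n_plots) < psi
        true_cover = np.where(occupied, rng.beta(mu * phi, (1 - mu) * phi, n_plots), 0.0)

        # Detection probability rises with cover within a plot (logistic link).
        p_detect = 1.0 / (1.0 + np.exp(-(-1.0 + 6.0 * true_cover)))

        obs = np.zeros((n_plots, n_visits))
        kappa = 30.0                   # precision of the measurement-error distribution
        for j in range(n_visits):
            detected = occupied & (rng.random(n_plots) < p_detect)
            obs[detected, j] = rng.beta(true_cover[detected] * kappa,
                                        (1.0 - true_cover[detected]) * kappa)

        # Naive summaries that ignore the observation process.
        naive_occupancy = (obs > 0).any(axis=1).mean()
        naive_mean_cover = obs[obs > 0].mean()
        print(f"true occupancy {psi:.2f} vs naive estimate {naive_occupancy:.2f}")
        print(f"true mean cover {mu:.2f} vs naive estimate {naive_mean_cover:.2f}")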

  1. Attitude determination for high-accuracy submicroradian jitter pointing on space-based platforms

    NASA Astrophysics Data System (ADS)

    Gupta, Avanindra A.; van Houten, Charles N.; Germann, Lawrence M.

    1990-10-01

    A description of the requirement definition process is given for a new wideband attitude determination subsystem (ADS) for image motion compensation (IMC) systems. The subsystem consists of either lateral accelerometers functioning in differential pairs or gas-bearing gyros as high-frequency sensors, combined with CCD-based star trackers as low-frequency sensors. To minimize error, the sensor signals are combined so that the mixing filter does not introduce phase distortion. The two ADS models are introduced in an IMC simulation to predict measurement error, correction capability, and residual image jitter for a variety of system parameters. The IMC three-axis testbed is utilized to simulate an incoming beam in inertial space. Results demonstrate that both mechanical and electronic IMC meet the requirements of image stabilization for space-based observation at submicroradian jitter levels. Currently available technology may be employed to implement IMC systems.

  2. Modeling error analysis of stationary linear discrete-time filters

    NASA Technical Reports Server (NTRS)

    Patel, R.; Toda, M.

    1977-01-01

    The performance of Kalman-type, linear, discrete-time filters in the presence of modeling errors is considered. The discussion is limited to stationary performance, and bounds are obtained for the performance index, the mean-squared error of estimates, for suboptimal and optimal (Kalman) filters. The computation of these bounds requires information on only the model matrices and the range of errors for these matrices. Consequently, a designer can easily compare the performance of a suboptimal filter with that of the optimal filter when only the range of errors in the elements of the model matrices is available.
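
    A minimal scalar sketch of the comparison the paper enables (not its analytical bound computation): a filter gain designed on a mismatched model is run on the true system, and its steady-state mean-squared error is compared, by Monte Carlo, with that of the optimal Kalman filter. The system parameters and the size of the model error are illustrative assumptions.

        import numpy as np

        q, r = 1.0, 1.0                 # process and measurement noise variances
        a_true, a_design = 0.9, 0.8     # true and (mismatched) design state-transition values

        def steady_state_gain(a):
            # Iterate the scalar Riccati recursion for the model x' = a*x + w, y = x + v.
            p = 1.0
            for _ in range(500):
                p_pred = a * p * a + q
                k = p_pred / (p_pred + r)
                p = (1.0 - k) * p_pred
            return k

        def actual_mse(a_filter, k, n_steps=100_000):
            # Run the fixed-gain filter (designed with a_filter, gain k) on the TRUE system
            # and estimate its steady-state filtered-error variance by Monte Carlo.
            rng = np.random.default_rng(0)
            x, x_hat, errs = 0.0, 0.0, []
            for _ in range(n_steps):
                y = x + rng.normal(scale=np.sqrt(r))
                x_filt = x_hat + k * (y - x_hat)
                errs.append((x - x_filt) ** 2)
                x = a_true * x + rng.normal(scale=np.sqrt(q))
                x_hat = a_filter * x_filt
            return float(np.mean(errs[1000:]))

        print("optimal filter MSE   :", round(actual_mse(a_true, steady_state_gain(a_true)), 3))
        print("suboptimal filter MSE:", round(actual_mse(a_design, steady_state_gain(a_design)), 3))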

  3. Period variations of Algol-type eclipsing binaries AD And, TWCas and IV Cas

    NASA Astrophysics Data System (ADS)

    Parimucha, Štefan; Gajdoš, Pavol; Kudak, Viktor; Fedurco, Miroslav; Vaňko, Martin

    2018-04-01

    We present new analyses of variations in the O – C diagrams of three Algol-type eclipsing binary stars: AD And, TW Cas and IV Cas. We have used all published minima times (including visual and photographic ones) as well as newly determined minima from our own and SuperWasp observations. We determined the orbital parameters of third bodies in the systems, together with statistically significant error estimates, using our code based on genetic algorithms and Markov chain Monte Carlo simulations. We confirmed the multiple nature of AD And and the triple-star model of TW Cas, and we propose a quadruple-star model for IV Cas.

  4. Performance of MIMO-OFDM using convolution codes with QAM modulation

    NASA Astrophysics Data System (ADS)

    Astawa, I. Gede Puja; Moegiharto, Yoedy; Zainudin, Ahmad; Salim, Imam Dui Agus; Anggraeni, Nur Annisa

    2014-04-01

    The performance of an Orthogonal Frequency Division Multiplexing (OFDM) system can be improved by adding channel coding (an error-correcting code), such as a convolutional code, to detect and correct errors that occur during data transmission. This paper presents the performance of OFDM using the Space-Time Block Code (STBC) diversity technique and QAM modulation with code rate ½. The evaluation is done by analyzing the Bit Error Rate (BER) versus the Energy per Bit to Noise Power Spectral Density Ratio (Eb/No). The scheme uses 256 subcarriers transmitted over a Rayleigh multipath fading channel in an OFDM system. To achieve a BER of 10⁻³, an SNR of 10 dB is required in the SISO-OFDM scheme. The 2×2 MIMO-OFDM scheme requires 10 dB to achieve a BER of 10⁻³. The 4×4 MIMO-OFDM scheme requires 5 dB, while adding convolutional coding to the 4×4 MIMO-OFDM scheme improves performance down to 0 dB for the same BER. This demonstrates a power saving of 3 dB relative to the 4×4 MIMO-OFDM system without coding, a power saving of 7 dB relative to 2×2 MIMO-OFDM, and significant power savings relative to the SISO-OFDM system.

  5. Analytical Problems and Suggestions in the Analysis of Behavioral Economic Demand Curves.

    PubMed

    Yu, Jihnhee; Liu, Liu; Collins, R Lorraine; Vincent, Paula C; Epstein, Leonard H

    2014-01-01

    Behavioral economic demand curves (Hursh, Raslear, Shurtleff, Bauman, & Simmons, 1988) are innovative approaches to characterize the relationships between consumption of a substance and its price. In this article, we investigate common analytical issues in the use of behavioral economic demand curves, which can cause inconsistent interpretations of demand curves, and then we provide methodological suggestions to address those analytical issues. We first demonstrate that log transformation with different added values for handling zeros changes model parameter estimates dramatically. Second, demand curves are often analyzed using an overparameterized model that results in an inefficient use of the available data and a lack of assessment of the variability among individuals. To address these issues, we apply a nonlinear mixed effects model based on multivariate error structures that has not been used previously to analyze behavioral economic demand curves in the literature. We also propose analytical formulas for the relevant standard errors of derived values such as Pmax, Omax, and elasticity. The proposed model stabilizes the derived values regardless of the added increment used and provides substantially smaller standard errors. We illustrate the data analysis procedure using data from a relative reinforcement efficacy study of simulated marijuana purchasing.
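
    A minimal sketch of the first analytical issue, assuming simulated consumption data and a simple log-log fit rather than the authors' nonlinear mixed-effects model: changing the constant added to handle zero consumption before log transformation changes the fitted demand parameters.

        import numpy as np

        rng = np.random.default_rng(1)
        price = np.array([0.5, 1, 2, 4, 8, 16, 32], dtype=float)
        # Simulated consumption that declines with price and reaches zero at high prices.
        consumption = np.maximum(0.0, 20.0 / price - 1.0 + rng.normal(0, 0.3, price.size))

        for c in (0.01, 0.1, 1.0):
            # Fit log(consumption + c) against log(price); the slope shifts with c.
            slope, intercept = np.polyfit(np.log(price), np.log(consumption + c), 1)
            print(f"added value {c:<4}: fitted log-log slope (elasticity-like) = {slope:.2f}")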

  6. Chemical Source Inversion using Assimilated Constituent Observations in an Idealized Two-dimensional System

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Cooper, Robert; Pawson, Steven; Sun, Zhibin

    2009-01-01

    We present a source inversion technique for chemical constituents that uses assimilated constituent observations rather than the observations directly. The method is tested with a simple model problem: a two-dimensional Fourier-Galerkin transport model combined with a Kalman filter for data assimilation. Inversion is carried out using a Green's function method, and observations are simulated from a true state with added Gaussian noise. The forecast state uses the same spectral model, but differs by an unbiased Gaussian model error and by emissions models with constant errors. The numerical experiments employ both simulated in situ and satellite observation networks. Source inversion was carried out either by direct use of synthetically generated observations with added noise, or by first assimilating the observations and using the analyses to extract observations. We have conducted 20 identical twin experiments for each set of source and observation configurations, and find that in the limiting cases of very few localized observations, or an extremely large observation network, there is little advantage to carrying out assimilation first. However, at intermediate observation densities, the source inversion error standard deviation decreases by 50% to 95% when the Kalman filter algorithm is followed by Green's function inversion.
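
    A minimal sketch of a Green's function source inversion under assumed conditions: the response at each observation point to a unit emission from each source is tabulated in a matrix, and the unknown source strengths are recovered by least squares from noisy observations. The toy linear "transport" operator and dimensions below are assumptions, not the paper's Fourier-Galerkin model or Kalman filter.

        import numpy as np

        rng = np.random.default_rng(3)
        n_sources, n_obs = 4, 30

        # Assumed linear transport/sampling operator mapping source strengths to observations.
        G = rng.uniform(0.0, 1.0, size=(n_obs, n_sources))

        true_sources = np.array([2.0, 0.5, 1.5, 3.0])
        obs = G @ true_sources + rng.normal(0.0, 0.1, n_obs)   # observations with added noise

        est, *_ = np.linalg.lstsq(G, obs, rcond=None)          # Green's function inversion
        print("true sources     :", true_sources)
        print("estimated sources:", np.round(est, 2))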

  7. An alluvial record of El Niño events from northern coastal Peru

    NASA Astrophysics Data System (ADS)

    Wells, Lisa E.

    1987-12-01

    Overbank flood deposits of northern coastal Peru provide the potential for the development of a late Quaternary chronology of El Niño events. Alluvial deposits from the 1982-1983 El Niño event are the basis for establishing a type El Niño deposit. Sedimentary structures suggesting depositional processes range from sheet flows to debris flows, with sheet flood deposits being the most common. The 1982-1983 deposits are characterized by a 50- to 100-cm- thick basal gravel, overlain by a 10- to 100-cm-thick sand bed, grading into a 1- to 10-cm-thick silty sand bed and capped by a very thin layer of silt or clay. The surface of the deposit commonly displays the original shear flow lines crosscut by postdepositional mud cracks and footprints (human and animal). Stacked sequences of flood deposits are present in Pleistocene and Holocene alluvial fill, suggesting that El Niño type events likely occurred throughout the late Quaternary. A relative chronology of the deposits is developed based on terrace and soil stratigraphy and on the degree of preservation of surficial features. A minimum of 15 El Niño events occurred during the Holocene; a minimum of 21 events occurred during the late Pleistocene. Timing of the Holocene events is bracketed by isochrons derived from the archaeologic stratigraphy. Corrected radiocarbon ages from included detrital wood provide the following absolute dates for El Niño events: 1720 ± 60 A.D., 1460 ± 20 A.D., 1380 ± 140 A.D. (error overlaps with the A.D. 1460 event; these may represent a single event), and 1230 ± 60 B.C.

  8. An error covariance model for sea surface topography and velocity derived from TOPEX/POSEIDON altimetry

    NASA Technical Reports Server (NTRS)

    Tsaoussi, Lucia S.; Koblinsky, Chester J.

    1994-01-01

    In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, a methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric model fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction being derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm RMS. When the error in the geoid is removed, thereby providing an estimate of the time-dependent error, the uncertainty in the topography is 3.5 cm root mean square (RMS). This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in the velocity field are smallest in midlatitude regions. For both variables the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.
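
    A minimal sketch of formal error covariance propagation, with illustrative matrices only: if the topography at a set of points is a linear function y = Ax of estimated coefficients x with covariance Cx, then Cy = A Cx Aᵀ, and independent unadjusted-parameter contributions (orbit, geoid, tides) add their own propagated covariances.

        import numpy as np

        rng = np.random.default_rng(7)
        n_coef, n_points = 10, 5

        A = rng.normal(size=(n_points, n_coef))          # design matrix (e.g., spherical harmonics)
        Cx = np.diag(rng.uniform(0.5, 2.0, n_coef))      # adjusted-parameter covariance
        C_unadj = 0.2 * np.eye(n_points)                 # assumed orbit/geoid/tide contribution

        Cy = A @ Cx @ A.T + C_unadj                      # total propagated error covariance
        rms = np.sqrt(np.mean(np.diag(Cy)))
        print("total error variance per point:", np.round(np.diag(Cy), 2))
        print("RMS error:", round(float(rms), 2))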

  9. Narrowband (LPC-10) Vocoder Performance under Combined Effects of Random Bit Errors and Jet Aircraft Cabin Noise.

    DTIC Science & Technology

    1983-12-01

    AD-141 333: Narrowband (LPC-10) Vocoder Performance Under Combined Effects of Random Bit Errors and Jet Aircraft Cabin Noise. Rome Air Development Center, Griffiss AFB, NY; C. P. Smith; in-house report, June 82 - Sept. 83. ... Compartment, and NCA Compartment were alike in their effects on overall vocoder performance. Composite performance...

  10. Dissociative Global and Local Task-Switching Costs Across Younger Adults, Middle-Aged Adults, Older Adults, and Very Mild Alzheimer Disease Individuals

    PubMed Central

    Huff, Mark J.; Balota, David A.; Minear, Meredith; Aschenbrenner, Andrew J.; Duchek, Janet M.

    2015-01-01

    A task-switching paradigm was used to examine differences in attentional control across younger adults, middle-aged adults, healthy older adults, and individuals classified in the earliest detectable stage of Alzheimer's disease (AD). A large sample of participants (570) completed a switching task in which participants were cued to classify the letter (consonant/vowel) or number (odd/even) task-set dimension of a bivalent stimulus (e.g., A 14), respectively. A Pure block consisting of single-task trials and a Switch block consisting of nonswitch and switch trials were completed. Local (switch vs. nonswitch trials) and global (nonswitch vs. pure trials) costs in mean error rates, mean response latencies, underlying reaction time distributions, along with stimulus-response congruency effects were computed. Local costs in errors were group invariant, but global costs in errors systematically increased as a function of age and AD. Response latencies yielded a strong dissociation: Local costs decreased across groups whereas global costs increased across groups. Vincentile distribution analyses revealed that the dissociation of local and global costs primarily occurred in the slowest response latencies. Stimulus-response congruency effects within the Switch block were particularly robust in accuracy in the very mild AD group. We argue that the results are consistent with the notion that the impaired groups show a reduced local cost because the task sets are not as well tuned, and hence produce minimal cost on switch trials. In contrast, global costs increase because of the additional burden on working memory of maintaining two task sets. PMID:26652720

  11. Geodetic positioning using a global positioning system of satellites

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1980-01-01

    Geodetic positioning using range, integrated Doppler, and interferometric observations from a constellation of twenty-four Global Positioning System satellites is analyzed. A summary of the proposals for geodetic positioning and baseline determination is given, which includes a description of measurement techniques and comments on rank deficiency and error sources. An analysis-of-variance comparison of range, Doppler, and interferometric time delay to determine their relative geometric strength for baseline determination is included. An analytic examination of the effect of a priori constraints on positioning using simultaneous observations from two stations is presented. Dynamic point positioning and baseline determination using range and Doppler are examined in detail. Models for the error sources influencing dynamic positioning are developed, including a discussion of atomic clock stability, and range and Doppler observation error statistics based on random correlated atomic clock error are derived.

  12. Autonomous Navigation Error Propagation Assessment for Lunar Surface Mobility Applications

    NASA Technical Reports Server (NTRS)

    Welch, Bryan W.; Connolly, Joseph W.

    2006-01-01

    The NASA Vision for Space Exploration is focused on the return of astronauts to the Moon. While navigation systems have already been proven in the Apollo missions to the Moon, the current exploration campaign will involve more extensive and extended missions requiring new concepts for lunar navigation. In this document, the results of an autonomous navigation error propagation assessment are provided. The analysis is intended to be the baseline error propagation analysis to which Earth-based and lunar-based radiometric data are added, in order to compare the different architecture schemes, quantify the benefits of an integrated approach, and assess how they handle lunar surface mobility applications near the lunar South Pole or on the lunar farside.

  13. Error in total ozone measurements arising from aerosol attenuation

    NASA Technical Reports Server (NTRS)

    Thomas, R. W. L.; Basher, R. E.

    1979-01-01

    A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.

  14. Dietary sources of energy, solid fats, and added sugars among children and adolescents in the United States.

    PubMed

    Reedy, Jill; Krebs-Smith, Susan M

    2010-10-01

    The objective of this research was to identify top dietary sources of energy, solid fats, and added sugars among 2- to 18-year-olds in the United States. Data from the National Health and Nutrition Examination Survey, a cross-sectional study, were used to examine food sources (percentage contribution and mean intake with standard errors) of total energy (data from 2005-2006) and energy from solid fats and added sugars (data from 2003-2004). Differences were investigated by age, sex, race/ethnicity, and family income, and the consumption of empty calories-defined as the sum of energy from solid fats and added sugars-was compared with the corresponding discretionary calorie allowance. The top sources of energy for 2- to 18-year-olds were grain desserts (138 kcal/day), pizza (136 kcal/day), and soda (118 kcal/day). Sugar-sweetened beverages (soda and fruit drinks combined) provided 173 kcal/day. Major contributors varied by age, sex, race/ethnicity, and income. Nearly 40% of total energy consumed (798 of 2,027 kcal/day) by 2- to 18-year-olds were in the form of empty calories (433 kcal from solid fat and 365 kcal from added sugars). Consumption of empty calories far exceeded the corresponding discretionary calorie allowance for all sex-age groups (which range from 8% to 20%). Half of empty calories came from six foods: soda, fruit drinks, dairy desserts, grain desserts, pizza, and whole milk. There is an overlap between the major sources of energy and empty calories: soda, grain desserts, pizza, and whole milk. The landscape of choices available to children and adolescents must change to provide fewer unhealthy foods and more healthy foods with less energy. Identifying top sources of energy and empty calories can provide targets for changes in the marketplace and food environment. However, product reformulation alone is not sufficient-the flow of empty calories into the food supply must be reduced.

  15. The S-Connect study: results from a randomized, controlled trial of Souvenaid in mild-to-moderate Alzheimer's disease.

    PubMed

    Shah, Raj C; Kamphuis, Patrick J; Leurgans, Sue; Swinkels, Sophie H; Sadowsky, Carl H; Bongers, Anke; Rappaport, Stephen A; Quinn, Joseph F; Wieggers, Rico L; Scheltens, Philip; Bennett, David A

    2013-01-01

    Souvenaid® containing Fortasyn® Connect is a medical food designed to support synapse synthesis in persons with Alzheimer's disease (AD). Fortasyn Connect includes precursors (uridine monophosphate; choline; phospholipids; eicosapentaenoic acid; docosahexaenoic acid) and cofactors (vitamins E, C, B12, and B6; folic acid; selenium) for the formation of neuronal membranes. Whether Souvenaid slows cognitive decline in treated persons with mild-to-moderate AD has not been addressed. In a 24-week, double-masked clinical trial at 48 clinical centers, 527 participants taking AD medications [52% women, mean age 76.7 years (Standard Deviation, SD = 8.2), and mean Mini-Mental State Examination score 19.5 (SD = 3.1, range 14-24)] were randomized 1:1 to daily, 125-mL (125 kcal), oral intake of the active product (Souvenaid) or an iso-caloric control. The primary outcome of cognition was assessed by the 11-item Alzheimer's Disease Assessment Scale-Cognitive Subscale (ADAS-cog). Compliance was calculated from daily diary recordings of product intake. Statistical analyses were performed using mixed models for repeated measures. Cognitive performance as assessed by ADAS-cog showed decline over time in both control and active study groups, with no significant difference between study groups (difference =0.37 points, Standard Error, SE = 0.57, p = 0.513). No group differences in adverse event rates were found and no clinically relevant differences in blood safety parameters were noted. Overall compliance was high (94.1% [active] and 94.5% [control]), which was confirmed by significant changes in blood (nutritional) biomarkers. Add-on intake of Souvenaid during 24 weeks did not slow cognitive decline in persons treated for mild-to-moderate AD. Souvenaid was well tolerated in combination with standard care AD medications. DUTCH TRIAL REGISTER NUMBER: NTR1683.

  16. The S-Connect study: results from a randomized, controlled trial of Souvenaid in mild-to-moderate Alzheimer’s disease

    PubMed Central

    2013-01-01

    Introduction Souvenaid® containing Fortasyn® Connect is a medical food designed to support synapse synthesis in persons with Alzheimer’s disease (AD). Fortasyn Connect includes precursors (uridine monophosphate; choline; phospholipids; eicosapentaenoic acid; docosahexaenoic acid) and cofactors (vitamins E, C, B12, and B6; folic acid; selenium) for the formation of neuronal membranes. Whether Souvenaid slows cognitive decline in treated persons with mild-to-moderate AD has not been addressed. Methods In a 24-week, double-masked clinical trial at 48 clinical centers, 527 participants taking AD medications [52% women, mean age 76.7 years (Standard Deviation, SD = 8.2), and mean Mini-Mental State Examination score 19.5 (SD = 3.1, range 14–24)] were randomized 1:1 to daily, 125-mL (125 kcal), oral intake of the active product (Souvenaid) or an iso-caloric control. The primary outcome of cognition was assessed by the 11-item Alzheimer’s Disease Assessment Scale-Cognitive Subscale (ADAS-cog). Compliance was calculated from daily diary recordings of product intake. Statistical analyses were performed using mixed models for repeated measures. Results Cognitive performance as assessed by ADAS-cog showed decline over time in both control and active study groups, with no significant difference between study groups (difference =0.37 points, Standard Error, SE = 0.57, p = 0.513). No group differences in adverse event rates were found and no clinically relevant differences in blood safety parameters were noted. Overall compliance was high (94.1% [active] and 94.5% [control]), which was confirmed by significant changes in blood (nutritional) biomarkers. Conclusions Add-on intake of Souvenaid during 24 weeks did not slow cognitive decline in persons treated for mild-to-moderate AD. Souvenaid was well tolerated in combination with standard care AD medications. Trial registration Dutch Trial Register number: NTR1683. PMID:24280255

  17. Dietary Sources of Energy, Solid Fats, and Added Sugars Among Children and Adolescents in the United States

    PubMed Central

    Reedy, Jill; Krebs-Smith, Susan M.

    2010-01-01

    Objective The objective of this research was to identify top dietary sources of energy, solid fats, and added sugars among 2–18 year olds in the United States. Methods Data from the National Health and Nutrition Examination Survey (NHANES), a cross-sectional study, were used to examine food sources (percentage contribution and mean intake with standard errors) of total energy (2005–06) and calories from solid fats and added sugars (2003–04). Differences were investigated by age, sex, race/ethnicity, and family income, and the consumption of empty calories—defined as the sum of calories from solid fats and added sugars—was compared with the corresponding discretionary calorie allowance. Results The top sources of energy for 2–18 year olds were grain desserts (138 kcal/day), pizza (136 kcal), and soda (118 kcal). Sugar-sweetened beverages (soda and fruit drinks combined) provided 173 kcal/day. Major contributors varied by age, sex, race/ethnicity, and income. Nearly 40% of total calories consumed (798 kcal/day of 2027 kcal) by 2–18 year olds were in the form of empty calories (433 kcal from solid fat and 365 kcal from added sugars). Consumption of empty calories far exceeded the corresponding discretionary calorie allowance for all sex-age groups (which range from 8–20%). Half of empty calories came from six foods: soda, fruit drinks, dairy desserts, grain desserts, pizza, and whole milk. Conclusion There is an overlap between the major sources of energy and empty calories: soda, grain desserts, pizza, and whole milk. The landscape of choices available to children and adolescents must change to provide fewer unhealthy foods and more healthy foods with fewer calories. Identifying top sources of energy and empty calories can provide targets for changes in the marketplace and food environment. However, product reformulation alone is not sufficient—the flow of empty calories into the food supply must be reduced. PMID:20869486

  18. Referral Coordination in the Next TRICARE Contract Environment: A Case Study Applying Failure Mode Effects Analysis

    DTIC Science & Technology

    2004-06-13

    antiquity. Plutarch is credited for saying in Morals--Against Colotes the Epicurean, "For to err in opinion, though it be not the part of wise men, it is at... least human" (Plutarch, AD 110). Of the 5 definitions for error given in Merriam-Webster's Collegiate Dictionary, the third one listed "an act that... Identifying and managing inappropriate hospital utilization: A policy synthesis. Health Services Research, 22(5), 710-57. Plutarch. (AD 110). Worldofquotes

  19. Patterns of verbal memory performance in mild cognitive impairment, Alzheimer disease, and normal aging.

    PubMed

    Greenaway, Melanie C; Lacritz, Laura H; Binegar, Dani; Weiner, Myron F; Lipton, Anne; Munro Cullum, C

    2006-06-01

    Individuals with mild cognitive impairment (MCI) typically demonstrate memory loss that falls between normal aging (NA) and Alzheimer disease (AD), but little is known about the pattern of memory dysfunction in MCI. To explore this issue, California Verbal Learning Test (CVLT) performance was examined across groups of MCI, AD, and NA. MCI subjects displayed a pattern of deficits closely resembling that of AD, characterized by reduced learning, rapid forgetting, increased recency recall, elevated intrusion errors, and poor recognition discriminability with increased false-positives. MCI performance was significantly worse than that of controls and better than that of AD patients across memory indices. Although qualitative analysis of CVLT profiles may be useful in individual cases, discriminant function analysis revealed that delayed recall and total learning were the best aspects of learning/memory on the CVLT in differentiating MCI, AD, and NA. These findings support the position that amnestic MCI represents an early point of decline on the continuum of AD that is different from normal aging.

  20. Heterodyne range imaging as an alternative to photogrammetry

    NASA Astrophysics Data System (ADS)

    Dorrington, Adrian; Cree, Michael; Carnegie, Dale; Payne, Andrew; Conroy, Richard

    2007-01-01

    Solid-state full-field range imaging technology, capable of determining the distance to objects in a scene simultaneously for every pixel in an image, has recently achieved sub-millimeter distance measurement precision. With this level of precision, it is becoming practical to use this technology for high-precision three-dimensional metrology applications. Compared to photogrammetry, range imaging has the advantages of requiring only one viewing angle, a relatively short measurement time, and simple, fast data processing. In this paper we first review the range imaging technology, then describe an experiment comparing photogrammetric and range imaging measurements of a calibration block with attached retro-reflective targets. The results show that the range imaging approach exhibits errors of approximately 0.5 mm in-plane and almost 5 mm out-of-plane; however, these errors appear to be mostly systematic. We then proceed to examine the physical nature and characteristics of the image ranging technology and discuss the possible causes of these systematic errors. Also discussed is the potential for further system characterization and calibration to compensate for the range determination and other errors, which could possibly lead to three-dimensional measurement precision approaching that of photogrammetry.

  1. Quantity and unit extraction for scientific and technical intelligence analysis

    NASA Astrophysics Data System (ADS)

    David, Peter; Hawes, Timothy

    2017-05-01

    Scientific and Technical (S and T) intelligence analysts consume huge amounts of data to understand how scientific progress and engineering efforts affect current and future military capabilities. One of the most important types of information S and T analysts exploit is the quantities discussed in their source material. Frequencies, ranges, size, weight, power, and numerous other properties and measurements describing the performance characteristics of systems and the engineering constraints that define them must be culled from source documents before quantified analysis can begin. Automating the process of finding and extracting the relevant quantities from a wide range of S and T documents is difficult because information about quantities and their units is often contained in unstructured text with ad hoc conventions used to convey their meaning. Currently, even simple tasks, such as searching for documents discussing RF frequencies in a band of interest, are labor-intensive and error-prone. This research addresses the challenges facing development of a document processing capability that extracts quantities and units from S and T data, and shows how Natural Language Processing algorithms can be used to overcome these challenges.
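
    A minimal sketch of the kind of extraction described, assuming a hand-written regular expression and a small unit table (far short of a production NLP extractor): numeric values with frequency units are pulled from free text, normalized to Hz, and checked against a band of interest.

        import re

        UNIT_TO_HZ = {"hz": 1.0, "khz": 1e3, "mhz": 1e6, "ghz": 1e9}
        PATTERN = re.compile(r"(\d+(?:\.\d+)?)\s*(GHz|MHz|kHz|Hz)", re.IGNORECASE)

        def extract_frequencies(text):
            """Return all frequencies found in `text`, converted to Hz."""
            return [float(v) * UNIT_TO_HZ[u.lower()] for v, u in PATTERN.findall(text)]

        doc = "The radar operates between 8.5 GHz and 10 GHz with a 250 MHz bandwidth."
        freqs = extract_frequencies(doc)
        band = (9.0e9, 11.0e9)   # band of interest, in Hz
        print(freqs)
        print("mentions band of interest:", any(band[0] <= f <= band[1] for f in freqs))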

  2. Spectral Analysis of Forecast Error Investigated with an Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, N. C.; Errico, Ronald M.

    2015-01-01

    The spectra of analysis and forecast error are examined using the observing system simulation experiment (OSSE) framework developed at the National Aeronautics and Space Administration Global Modeling and Assimilation Office (NASAGMAO). A global numerical weather prediction model, the Global Earth Observing System version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) data assimilation, is cycled for two months with once-daily forecasts to 336 hours to generate a control case. Verification of forecast errors using the Nature Run as truth is compared with verification of forecast errors using self-analysis; significant underestimation of forecast errors is seen using self-analysis verification for up to 48 hours. Likewise, self analysis verification significantly overestimates the error growth rates of the early forecast, as well as mischaracterizing the spatial scales at which the strongest growth occurs. The Nature Run-verified error variances exhibit a complicated progression of growth, particularly for low wave number errors. In a second experiment, cycling of the model and data assimilation over the same period is repeated, but using synthetic observations with different explicitly added observation errors having the same error variances as the control experiment, thus creating a different realization of the control. The forecast errors of the two experiments become more correlated during the early forecast period, with correlations increasing for up to 72 hours before beginning to decrease.

  3. Regionalized PM2.5 Community Multiscale Air Quality model performance evaluation across a continuous spatiotemporal domain.

    PubMed

    Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L

    2017-01-01

    The regulatory Community Multiscale Air Quality (CMAQ) model is a means of understanding the sources, concentrations and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily Particulate Matter ≤ 2.5 micrometers (PM2.5) concentrations across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid cell. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods, as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error, with only a minority being systematic. Areas of high systematic error are collocated with areas of high random error, implying both error types originate from similar sources. Therefore, addressing the underlying causes of systematic error will have the added benefit of also addressing the underlying causes of random error.
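
    A minimal sketch, on simulated data, of splitting model error into systematic and random components at one location and of the kind of constant domain-wide correction that RAMP is compared against: subtracting the mean bias reduces the mean square error, while the residual scatter remains as random error. The simulated observations and model bias are assumptions.

        import numpy as np

        rng = np.random.default_rng(11)
        obs = rng.gamma(shape=4.0, scale=3.0, size=365)          # "observed" daily PM2.5
        model = 1.2 * obs + 2.0 + rng.normal(0, 4.0, obs.size)   # biased, noisy model output

        err = model - obs
        systematic = err.mean()                                  # constant (mean) bias
        random_part = err - systematic                           # what remains after removing it

        mse_raw = np.mean(err ** 2)
        mse_corrected = np.mean((model - systematic - obs) ** 2)
        print(f"systematic error: {systematic:.2f}, random error sd: {random_part.std():.2f}")
        print(f"MSE before/after constant correction: {mse_raw:.1f} / {mse_corrected:.1f}")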

  4. Research of laser echo signal simulator

    NASA Astrophysics Data System (ADS)

    Xu, Rui; Shi, Rui; Wang, Xin; Li, Zhou

    2015-11-01

    The laser echo signal simulator is one of the most significant components of hardware-in-the-loop (HWIL) simulation systems for LADAR. A system model and a time series model of the laser echo signal simulator are established. Influential factors that could induce fixed and random errors in the simulated return signals are analyzed, and these system insertion errors are then quantified. Using this theoretical model, the simulation system is investigated experimentally. The results, corrected by subtracting the fixed error, indicate that the range error of the simulated laser return signal is less than 0.25 m, and that the distance range the system can simulate is from 50 m to 20 km.

  5. Error analysis and correction of lever-type stylus profilometer based on Nelder-Mead Simplex method

    NASA Astrophysics Data System (ADS)

    Hu, Chunbing; Chang, Suping; Li, Bo; Wang, Junwei; Zhang, Zhongyu

    2017-10-01

    Due to its high measurement accuracy and wide range of applications, lever-type stylus profilometry is commonly used in industrial research areas. However, the error caused by the lever structure has a great influence on the profile measurement; this paper therefore analyzes the error of a high-precision, large-range lever-type stylus profilometer. The errors are corrected by the Nelder-Mead Simplex method, and the results are verified by a spherical surface calibration. The results show that this method can effectively reduce the measurement error and improve the accuracy of the stylus profilometer in large-scale measurement.
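
    A minimal sketch of the general idea, with an assumed lever-geometry error model rather than the paper's: scipy's Nelder-Mead simplex searches for correction parameters (an effective arm length and a small angular offset) that minimize residuals against a known calibration sphere.

        import numpy as np
        from scipy.optimize import minimize

        rng = np.random.default_rng(5)
        R_sphere = 10.0                                  # calibration sphere radius (mm)
        x = np.linspace(-6.0, 6.0, 61)                   # lateral positions across the sphere
        z_true = np.sqrt(R_sphere**2 - x**2) - R_sphere  # nominal surface heights (mm)

        # Simulated raw readings distorted by an assumed lever geometry
        # (effective arm length L and a small angular offset b, in radians).
        L_real, b_real = 50.0, 0.002
        z_raw = L_real * np.sin(np.arcsin(z_true / L_real) + b_real)
        z_raw += rng.normal(0.0, 1e-4, x.size)           # measurement noise

        def correct(z, L, b):
            # Undo the assumed lever distortion for candidate parameters (L, b).
            return L * np.sin(np.arcsin(np.clip(z / L, -1.0, 1.0)) - b)

        def residual(params):
            L, b = params
            return np.sum((correct(z_raw, L, b) - z_true) ** 2)

        res = minimize(residual, x0=[45.0, 0.0], method="Nelder-Mead")
        L_fit, b_fit = res.x
        rms_before = np.sqrt(np.mean((z_raw - z_true) ** 2))
        rms_after = np.sqrt(np.mean((correct(z_raw, L_fit, b_fit) - z_true) ** 2))
        print(f"RMS profile error before/after correction: {rms_before:.4f} / {rms_after:.4f} mm")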

  6. Developing a regional scale approach for modelling the impacts of fertiliser regime on N2O emissions in Ireland

    NASA Astrophysics Data System (ADS)

    Zimmermann, Jesko; Jones, Michael

    2016-04-01

    Agriculture can be a significant contributor to greenhouse gas emissions; this is especially true in Ireland, where the agricultural sector accounts for a third of total emissions. The high emissions are linked to both the importance of agriculture in the Irish economy and the focus on dairy and beef production. In order to reduce emissions, three main categories are explored: (1) reduction of methane emissions from cattle, (2) reduction of nitrous oxide emissions from fertilisation, and (3) fostering the carbon sequestration potential of soils. The presented research focuses on the latter two categories, especially changes in fertiliser amount and composition. Soil properties and climate conditions measured at four experimental sites (two silage and two spring barley) were used to parameterise four biogeochemical models (DayCent, ECOSSE, DNDC 9.4, and DNDC 9.5). All sites had a range of different fertiliser regimes applied. This included changes in amount (0 to 500 kg N/ha on grassland and 0 to 200 kg N/ha on arable fields), fertiliser type (calcium ammonium nitrate and urea), and added inhibitors (the nitrification inhibitor DCD, and the urease inhibitor Agrotain). Overall, 20 different treatments were applied to the grassland sites, and 17 to the arable sites. Nitrous oxide emissions, measured in 2013 and 2014 at all sites using closed chambers, were made available to validate model results for these emissions. To assess model performance against the daily measurements, the Root Mean Square Error (RMSE) was compared to the 95% confidence interval of the measured data (RMSE95). Bias was tested by comparing the relative error (RE) to the 95% confidence interval of the relative error (RE95). Preliminary results show mixed model performance, depending on the model, the site, and the fertiliser regime. However, with the exception of urea fertilisation and added inhibitors, all scenarios were reproduced by at least one model with no statistically significant total error (RMSE < RMSE95) or bias (RE < RE95). A general trend observed was that model performance declined with increased fertilisation rates. Overall, DayCent showed the best performance; however, it does not provide the possibility to model the addition of urease inhibitors. The results suggest that modelling changes in fertiliser regime on a large scale may require a multi-model approach to ensure the best performance. Ultimately, the research aims to develop a GIS-based platform to apply such an approach on a regional scale.
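
    A minimal sketch of the daily-flux performance test described above, on simulated data: the model-versus-measurement RMSE is compared with the RMSE implied by the measurement uncertainty alone (here taken as the 95% confidence half-width of each daily mean). The flux values, uncertainty model, and this simplified RMSE95 definition are assumptions.

        import numpy as np

        rng = np.random.default_rng(21)
        days = 60
        measured_mean = rng.gamma(2.0, 5.0, days)            # daily mean N2O flux (arbitrary units)
        ci_halfwidth = 0.4 * measured_mean                    # assumed 95% CI half-width per day
        modelled = measured_mean * rng.normal(1.0, 0.25, days)

        rmse = np.sqrt(np.mean((modelled - measured_mean) ** 2))
        rmse95 = np.sqrt(np.mean(ci_halfwidth ** 2))
        print(f"RMSE = {rmse:.2f}, RMSE95 = {rmse95:.2f}")
        print("no significant total error" if rmse < rmse95
              else "model error exceeds measurement uncertainty")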

  7. Moments and Root-Mean-Square Error of the Bayesian MMSE Estimator of Classification Error in the Gaussian Model.

    PubMed

    Zollanvari, Amin; Dougherty, Edward R

    2014-06-01

    The most important aspect of any classifier is its error rate, because this quantifies its predictive capacity. Thus, the accuracy of error estimation is critical. Error estimation is problematic in small-sample classifier design because the error must be estimated using the same data from which the classifier has been designed. Use of prior knowledge, in the form of a prior distribution on an uncertainty class of feature-label distributions to which the true, but unknown, feature-label distribution belongs, can facilitate accurate error estimation (in the mean-square sense) in circumstances where accurate, completely model-free error estimation is impossible. This paper provides analytic asymptotically exact finite-sample approximations for various performance metrics of the resulting Bayesian Minimum Mean-Square-Error (MMSE) error estimator in the case of linear discriminant analysis (LDA) in the multivariate Gaussian model. These performance metrics include the first, second, and cross moments of the Bayesian MMSE error estimator with the true error of LDA, and therefore, the Root-Mean-Square (RMS) error of the estimator. We lay down the theoretical groundwork for Kolmogorov double-asymptotics in a Bayesian setting, which enables us to derive asymptotic expressions of the desired performance metrics. From these we produce analytic finite-sample approximations and demonstrate their accuracy via numerical examples. Various examples illustrate the behavior of these approximations and their use in determining the necessary sample size to achieve a desired RMS. The Supplementary Material contains derivations for some equations and added figures.

  8. Using the EC decision on case definitions for communicable diseases as a terminology source--lessons learned.

    PubMed

    Balkanyi, Laszlo; Heja, Gergely; Nagy, Attlia

    2014-01-01

    Extracting scientifically accurate terminology from an EU public health regulation is part of the knowledge engineering work at the European Centre for Disease Prevention and Control (ECDC). ECDC operates information systems at the crossroads of many areas, posing a challenge for transparency and consistency. Semantic interoperability is based on the Terminology Server (TS). TS value sets (structured vocabularies) describe shared domains such as "diseases", "organisms", "public health terms", "geo-entities", "organizations", "administrative terms" and others. We extracted information from the relevant EC Implementing Decision on case definitions for reporting communicable diseases, which lists 53 notifiable infectious diseases and contains clinical, diagnostic, laboratory and epidemiological criteria. We performed a consistency check and a simplification-abstraction; we represented laboratory criteria as triplets ('y' procedural result / of 'x' organism-substance / on 'z' specimen) and identified negations. The resulting new case definition value set represents the various formalized criteria, while the existing disease value set has been extended and new signs and symptoms have been added. New organisms enriched the organism value set. Other new categories have been added to the public health value set, such as transmission modes, substances, specimens and procedures. We identified problem areas, such as (a) some classification error(s); (b) inconsistent granularity of conditions; (c) seemingly nonsensical criteria or medical trivialities; (d) possible logical errors; and (e) seemingly factual errors that might be phrasing errors. We think our hypothesis regarding room for possible improvements is valid: there are some open issues, and a further improved legal text might lead to more precise epidemiologic data collection. It has to be noted that formal representation for automatic classification of cases was out of scope; such a task would require other formalisms, e.g. those used by rule-based decision support systems.

  9. Sequencing artifacts in the type A influenza databases and attempts to correct them.

    PubMed

    Suarez, David L; Chester, Nikki; Hatfield, Jason

    2014-07-01

    There are over 276 000 influenza gene sequences in public databases, with the quality of the sequences determined by the contributor. As part of a high school class project, influenza sequences with possible errors were identified in the public databases based on the size of the gene being longer than expected, with the hypothesis that these sequences would contain an error. Students contacted sequence submitters to alert them to the possible sequence issue(s) and requested that the suspect sequence(s) be corrected as appropriate. Type A influenza viruses were screened, and gene segments longer than the accepted size were identified for further analysis. Attention was placed on sequences with additional nucleotides upstream or downstream of the highly conserved non-coding ends of the viral segments. A total of 1081 sequences were identified that met this criterion. Three types of errors were commonly observed: non-influenza primer sequence was not removed from the sequence; the PCR product was cloned and plasmid sequence was included in the sequence; and Taq polymerase added an adenine at the end of the PCR product. Internal insertions of nucleotide sequence were also commonly observed, but in many cases it was unclear whether the sequence was correct or actually contained an error. A total of 215 sequences, or 22.8% of the suspect sequences, were corrected in the public databases in the first year of the student project. Unfortunately, 138 additional sequences with possible errors were added to the databases in the second year. Additional awareness of the need for data integrity of sequences submitted to public databases is needed to fully reap the benefits of these large data sets. © 2014 The Authors. Influenza and Other Respiratory Viruses Published by John Wiley & Sons Ltd.

  10. Mitigating Errors in External Respiratory Surrogate-Based Models of Tumor Position

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Malinowski, Kathleen T.; Fischell Department of Bioengineering, University of Maryland, College Park, MD; McAvoy, Thomas J.

    2012-04-01

    Purpose: To investigate the effect of tumor site, measurement precision, tumor-surrogate correlation, training data selection, model design, and interpatient and interfraction variations on the accuracy of external marker-based models of tumor position. Methods and Materials: Cyberknife Synchrony system log files comprising synchronously acquired positions of external markers and the tumor from 167 treatment fractions were analyzed. The accuracy of Synchrony, ordinary-least-squares regression, and partial-least-squares regression models for predicting the tumor position from the external markers was evaluated. The quantity and timing of the data used to build the predictive model were varied. The effects of tumor-surrogate correlation and the precision in both the tumor and the external surrogate position measurements were explored by adding noise to the data. Results: The tumor position prediction errors increased during the duration of a fraction. Increasing the training data quantities did not always lead to more accurate models. Adding uncorrelated noise to the external marker-based inputs degraded the tumor-surrogate correlation models by 16% for partial-least-squares and 57% for ordinary-least-squares. External marker and tumor position measurement errors led to tumor position prediction changes 0.3-3.6 times the magnitude of the measurement errors, varying widely with model algorithm. The tumor position prediction errors were significantly associated with the patient index but not with the fraction index or tumor site. Partial-least-squares was as accurate as Synchrony and more accurate than ordinary-least-squares. Conclusions: The accuracy of surrogate-based inferential models of tumor position was affected by all the investigated factors, except for the tumor site and fraction index.
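
    A minimal sketch of the inferential-model idea on simulated breathing traces (not the CyberKnife Synchrony log data): an internal tumor coordinate is predicted from external marker positions with partial least squares, and uncorrelated noise added to the marker inputs degrades the prediction accuracy.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(2)
        t = np.linspace(0.0, 60.0, 1500)                     # 60 s of samples
        f = 0.25                                             # assumed breathing frequency (Hz)

        # Three external markers and one tumor coordinate: scaled, lagged copies of one cycle.
        X = np.column_stack([a * np.sin(2 * np.pi * f * (t - lag))
                             for a, lag in [(5.0, 0.0), (3.0, 0.2), (4.0, 0.1)]])
        y = 8.0 * np.sin(2 * np.pi * f * (t - 0.3))          # internal (tumor) motion in mm

        train, test = slice(0, 1000), slice(1000, None)

        def prediction_rmse(noise_sd):
            Xn = X + rng.normal(0.0, noise_sd, X.shape)      # uncorrelated noise on the inputs
            model = PLSRegression(n_components=2).fit(Xn[train], y[train])
            pred = model.predict(Xn[test]).ravel()
            return np.sqrt(np.mean((pred - y[test]) ** 2))

        for sd in (0.0, 0.5, 1.0):
            print(f"input noise sd {sd:.1f} mm -> tumor prediction RMSE {prediction_rmse(sd):.2f} mm")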

  11. Mars approach navigation using Doppler and range measurements to surface beacons and orbiting spacecraft

    NASA Technical Reports Server (NTRS)

    Thurman, Sam W.; Estefan, Jeffrey A.

    1991-01-01

    Approximate analytical models are developed and used to construct an error covariance analysis for investigating the range of orbit determination accuracies which might be achieved for typical Mars approach trajectories. The sensitivity of orbit determination accuracy to beacon/orbiter position errors and to small spacecraft force modeling errors is also investigated. The results indicate that the orbit determination performance obtained from both Doppler and range data is a strong function of the inclination of the approach trajectory to the Martian equator, for surface beacons, and, for orbiters, the inclination relative to the orbital plane. Large variations in performance were also observed for different approach velocity magnitudes; Doppler data in particular were found to perform poorly in determining the downtrack (along the direction of flight) component of spacecraft position. In addition, it was found that small spacecraft acceleration modeling errors can induce large errors in the Doppler-derived downtrack position estimate.

  12. A solar super-flare as cause for the 14C variation in AD 774/5 ?

    NASA Astrophysics Data System (ADS)

    Neuhäuser, R.; Hambaryan, V. V.

    2014-11-01

    We present further considerations regarding the strong 14C variation in AD 774/5. As its cause, either a solar super-flare or a short gamma-ray burst has been suggested. We show that all kinds of stellar or neutron star flares would be too weak for the observed energy input at Earth in AD 774/5. Even though Maehara et al. (2012) present two super-flares with ~10³⁵ erg from presumably solar-type stars, we would like to caution: these two stars are poorly studied and may well be close binaries, and/or have an M-type dwarf companion, and/or be much younger and/or much more magnetic than the Sun - in any such case, they might not be true solar analog stars. From the frequency of large stellar flares averaged over all stellar activity phases (maybe obtained only during grand activity maxima), one can derive (a limit of) the probability for a large solar flare at a random time of normal activity: we find the probability for one flare within 3000 years to be possibly as low as 0.3 to 0.008 considering the full 1σ error range. Given the energy estimate in Miyake et al. (2012) for the AD 774/5 event, it would need to be ~2000 times stronger than the Carrington event as a solar super-flare. If the AD 774/5 event as a solar flare were beamed (to an angle of only ~24°), 100 times lower energy would be needed. A new AD 774/5 energy estimate by Usoskin et al. (2013), with a different carbon cycle model yielding 4 to 6 times lower 14C production, predicts 4-6 times less energy. If both reductions are applied, the AD 774/5 event would need to be only ~4 times stronger than the Carrington event in 1859 (if both had similar spectra). However, neither 14C nor 10Be peaks were found around AD 1859. Hence, the AD 774/5 event (as a solar flare) either was not beamed that strongly, and/or it would have been much more than 4-6 times stronger than Carrington, and/or the lower energy estimate (Usoskin et al. 2013) is not correct, and/or such solar flares cannot form (enough) 14C and 10Be. The 1956 solar energetic particle event was followed by a small decrease in directly observed cosmic rays. We conclude that large solar super-flares remain very unlikely as the cause of the 14C increase in AD 774/5.

  13. Study of an instrument for sensing errors in a telescope wavefront

    NASA Technical Reports Server (NTRS)

    Golden, L. J.; Shack, R. V.; Slater, D. N.

    1973-01-01

    Partial results are presented of theoretical and experimental investigations of different focal plane sensor configurations for determining the error in a telescope wavefront. The coarse range sensor and fine range sensors are used in the experimentation. The design of a wavefront error simulator is presented along with the Hartmann test, the shearing polarization interferometer, the Zernike test, and the Zernike polarization test.

  14. Optimal application of Morrison's iterative noise removal for deconvolution. Appendices

    NASA Technical Reports Server (NTRS)

    Ioup, George E.; Ioup, Juliette W.

    1987-01-01

    Morrison's iterative method of noise removal, or Morrison's smoothing, is applied in a simulation to noise-added data sets of various noise levels to determine its optimum use. Morrison's smoothing is applied for noise removal alone, and for noise removal prior to deconvolution. For the latter, an accurate method is analyzed to provide confidence in the optimization. The method consists of convolving the data with an inverse filter calculated by taking the inverse discrete Fourier transform of the reciprocal of the transform of the response of the system. Various length filters are calculated for the narrow and wide Gaussian response functions used. Deconvolution of non-noisy data is performed, and the error in each deconvolution is calculated. Plots are produced of error versus filter length, and from these plots the most accurate length filters are determined. The statistical methodologies employed in the optimizations of Morrison's method are similar. A typical peak-type input is selected and convolved with the two response functions to produce the data sets to be analyzed. Both constant and ordinate-dependent Gaussian distributed noise is added to the data, where the noise levels of the data are characterized by their signal-to-noise ratios. The error measures employed in the optimizations are the L1 and L2 norms. Results of the optimizations for both Gaussians, both noise types, and both norms include figures of optimum iteration number and error improvement versus signal-to-noise ratio, and tables of results. The statistical variation of all quantities considered is also given.
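
    A minimal sketch of the inverse-filter deconvolution described above, with an assumed narrow Gaussian response, peak-type input, noise level, and a simple boxcar smoother standing in for Morrison's iterations: deconvolution is essentially exact on non-noisy data, noise is strongly amplified by the inverse filter, and smoothing before deconvolution reduces that amplification.

        import numpy as np

        n = 256
        x = np.arange(n)
        true_input = np.exp(-0.5 * ((x - 100) / 4.0) ** 2)        # peak-type input
        response = np.exp(-0.5 * ((x - n // 2) / 1.0) ** 2)
        response /= response.sum()                                 # narrow Gaussian response
        R = np.fft.fft(np.fft.ifftshift(response))                 # response transform

        def deconvolve(signal):
            # Inverse filter: divide by the response transform (equivalent to convolving
            # with the inverse DFT of the reciprocal of the response transform).
            return np.real(np.fft.ifft(np.fft.fft(signal) / R))

        def l1_error(recovered):
            return np.sum(np.abs(recovered - true_input)) / n      # L1 error per point

        clean = np.real(np.fft.ifft(np.fft.fft(true_input) * R))   # noise-free data
        noisy = clean + np.random.default_rng(4).normal(0, 1e-3, n)
        smoothed = np.convolve(noisy, np.ones(5) / 5, mode="same") # stand-in for Morrison smoothing

        for label, signal in [("clean", clean), ("noisy", noisy), ("smoothed noisy", smoothed)]:
            print(f"{label:>14}: L1 error per point = {l1_error(deconvolve(signal)):.5f}")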

  15. Dosimetric Implications of Residual Tracking Errors During Robotic SBRT of Liver Metastases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chan, Mark; Tuen Mun Hospital, Hong Kong; Grehn, Melanie

    Purpose: Although the metric precision of robotic stereotactic body radiation therapy in the presence of breathing motion is widely known, we investigated the dosimetric implications of breathing phase–related residual tracking errors. Methods and Materials: In 24 patients (28 liver metastases) treated with the CyberKnife, we recorded the residual correlation, prediction, and rotational tracking errors from 90 fractions and binned them into 10 breathing phases. The average breathing phase errors were used to shift and rotate the clinical tumor volume (CTV) and planning target volume (PTV) for each phase to calculate a pseudo 4-dimensional error dose distribution for comparison with the original planned dose distribution. Results: The median systematic directional correlation, prediction, and absolute aggregate rotation errors were 0.3 mm (range, 0.1-1.3 mm), 0.01 mm (range, 0.00-0.05 mm), and 1.5° (range, 0.4°-2.7°), respectively. Dosimetrically, 44%, 81%, and 92% of all voxels differed by less than 1%, 3%, and 5% of the planned local dose, respectively. The median coverage reduction for the PTV was 1.1% (range in coverage difference, −7.8% to +0.8%), significantly depending on correlation (P=.026) and rotational (P=.005) error. With a 3-mm PTV margin, the median coverage change for the CTV was 0.0% (range, −1.0% to +5.4%), not significantly depending on any investigated parameter. In 42% of patients, the 3-mm margin did not fully compensate for the residual tracking errors, resulting in a CTV coverage reduction of 0.1% to 1.0%. Conclusions: For liver tumors treated with robotic stereotactic body radiation therapy, a safety margin of 3 mm is not always sufficient to cover all residual tracking errors. Dosimetrically, this translates into only small CTV coverage reductions.

  16. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Y; Macq, B; Bondar, L

    Purpose: To quantify the accuracy in predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of error. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for the nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitudes ranging, respectively, between −3 and 3 mm, −5 and 5 degrees, and −5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of PG profiles. The shifts were calculated between the planned (reference) position and the position under the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for 1 case that combined translation and CT errors (p<0.0001, R ranged between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios the average BP shift ranged between −8±6.5 mm and 3±1.1 mm. Scenarios that considered combinations of the largest treatment errors were associated with large BP shifts. Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide reliable estimation of the Bragg peak position for complex error scenarios. Yafei Xing and Luiza Bondar are funded by BEWARE grants from the Walloon Region. The work presents simulation results for a prompt gamma camera prototype developed by IBA.
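
    As a hedged illustration of the shift-comparison step (made-up numbers, not the study's data), the snippet below computes the Pearson correlation between the prompt gamma profile shift and the Bragg peak shift, and the per-spot prediction error defined above as their absolute difference.

```python
import numpy as np
from scipy.stats import pearsonr

# Minimal sketch with synthetic shifts (hypothetical values, in mm).
rng = np.random.default_rng(0)
bp_shift = rng.normal(0.0, 3.0, size=200)             # Bragg peak shifts per spot
pg_shift = bp_shift + rng.normal(0.0, 0.5, size=200)  # PG profile shifts with estimation noise

r, p = pearsonr(pg_shift, bp_shift)
prediction_error = np.abs(pg_shift - bp_shift)         # per-spot prediction error

print(f"Pearson R = {r:.2f}, p = {p:.1e}")
print(f"mean prediction error = {prediction_error.mean():.2f} "
      f"+/- {prediction_error.std():.2f} mm")
```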

  17. Adding dynamic rules to self-organizing fuzzy systems

    NASA Technical Reports Server (NTRS)

    Buhusi, Catalin V.

    1992-01-01

    This paper develops a Dynamic Self-Organizing Fuzzy System (DSOFS) capable of adding, removing, and/or adapting the fuzzy rules and the fuzzy reference sets. The DSOFS is built on a self-organizing neural structure with neuron relocation features which develops a map of the input-output behavior. The relocation algorithm extends the topological ordering concept. Fuzzy rules (neurons) are dynamically added or released while the neural structure learns the pattern. The advantages of the DSOFS are automatic synthesis and the possibility of parallel implementation. A high adaptation speed and a reduced number of neurons are needed in order to keep errors within acceptable limits. Computer simulation results are presented for a nonlinear systems modelling application.

  18. A two-step A/D conversion and column self-calibration technique for low noise CMOS image sensors.

    PubMed

    Bae, Jaeyoung; Kim, Daeyun; Ham, Seokheon; Chae, Youngcheol; Song, Minkyu

    2014-07-04

    In this paper, a 120 frames per second (fps) low noise CMOS Image Sensor (CIS) based on a Two-Step Single Slope ADC (TS SS ADC) and a column self-calibration technique is proposed. The TS SS ADC is suitable for high speed video systems because its conversion speed is much faster (by more than 10 times) than that of the Single Slope ADC (SS ADC). However, there are mismatch errors between the coarse block and the fine block due to the 2-step operation of the TS SS ADC. In general, this makes it difficult to implement the TS SS ADC beyond a 10-bit resolution. In order to reduce such errors, a new 4-input comparator is discussed and a high resolution TS SS ADC is proposed. Further, a feedback circuit that enables column self-calibration to reduce the Fixed Pattern Noise (FPN) is also described. The proposed chip has been fabricated with 0.13 μm Samsung CIS technology and supports VGA resolution. The pixel is based on the 4-TR Active Pixel Sensor (APS). The high frame rate of 120 fps is achieved at the VGA resolution. The measured FPN is 0.38 LSB, and the measured dynamic range is about 64.6 dB.
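
    The speed advantage of the two-step conversion follows from simple ramp-step arithmetic; the sketch below is an illustrative back-of-the-envelope model only (the 5+5 bit split and the helper names are assumptions, not the sensor's actual design).

```python
# Minimal sketch of why a two-step single-slope (TS SS) conversion is faster
# than a plain single-slope ramp: illustrative arithmetic, not circuit behaviour.

def single_slope_cycles(bits: int) -> int:
    # A single-slope ADC needs up to 2**bits ramp steps per conversion.
    return 2 ** bits

def two_step_cycles(coarse_bits: int, fine_bits: int) -> int:
    # A two-step conversion ramps the coarse range first, then the fine range,
    # so the step counts add instead of multiply.
    return 2 ** coarse_bits + 2 ** fine_bits

def two_step_code(coarse: int, fine: int, fine_bits: int) -> int:
    # The final digital code combines the two sub-conversions; mismatch between
    # the coarse and fine ranges appears as missing or repeated codes here,
    # which is the kind of error the 4-input comparator is meant to suppress.
    return (coarse << fine_bits) + fine

bits = 10
print("single slope:", single_slope_cycles(bits), "steps")       # 1024
print("two step 5+5:", two_step_cycles(5, 5), "steps")           # 64 (~16x fewer)
print("code for coarse=17, fine=9:", two_step_code(17, 9, 5))    # 553
```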

  19. Long Term Mean Local Time of the Ascending Node Prediction

    NASA Technical Reports Server (NTRS)

    McKinley, David P.

    2007-01-01

    Significant error has been observed in the long term prediction of the Mean Local Time of the Ascending Node (MLTAN) on the Aqua spacecraft. This error of approximately 90 seconds over a two-year prediction complicates the planning and timing of maneuvers for all members of the Earth Observing System Afternoon Constellation, which use Aqua's MLTAN as the reference for their inclination maneuvers. It was determined that the source of the prediction error was the lack of a solid Earth tide model in the operational force models. The Love model of the solid Earth tide potential was used to derive analytic corrections to the inclination and right ascension of the ascending node of Aqua's Sun-synchronous orbit. Additionally, it was determined that the resonance between the Sun and the orbit plane of the Sun-synchronous orbit is the primary driver of this error. The analytic corrections have been added to the operational force models for the Aqua spacecraft, reducing the two-year 90-second error to less than 7 seconds.

  20. Interferometric correction system for a numerically controlled machine

    DOEpatents

    Burleson, Robert R.

    1978-01-01

    An interferometric correction system for a numerically controlled machine is provided to improve the positioning accuracy of a machine tool, for example in a high-precision numerically controlled machine. A laser interferometer feedback system is used to monitor the positioning of the machine tool, which is moved by command pulses applied to a positioning system. The correction system compares the commanded position, as indicated by the command pulse train applied to the positioning system, with the actual position of the tool as monitored by the laser interferometer. If the tool position lags the commanded position by a preselected error, additional pulses are added to the pulse train applied to the positioning system to advance the tool closer to the commanded position, thereby reducing the lag error. If the actual tool position leads the commanded position, pulses are deleted from the pulse train when the advance error exceeds the preselected error magnitude, correcting the position error of the tool relative to the commanded position.
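
    A minimal sketch of the compare-and-correct logic described above follows; the step size, threshold, and function name are hypothetical, and a real controller would act on the pulse stream in hardware rather than in software.

```python
# Illustrative model of the pulse add/delete correction (assumed units and names).

STEP = 0.001          # mm of travel per command pulse (assumed)
THRESHOLD = 0.004     # mm, preselected error before a correction is issued

def correction_pulses(commanded_pulses: int, measured_position_mm: float) -> int:
    """Pulses to add (+) when the tool lags, or delete (-) when it leads."""
    error = commanded_pulses * STEP - measured_position_mm
    if abs(error) <= THRESHOLD:
        return 0
    return round(error / STEP)

# Example: tool is 6 um behind a 10.000 mm command -> add 6 pulses.
print(correction_pulses(10_000, 9.994))
```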

  1. UMI-tools: modeling sequencing errors in Unique Molecular Identifiers to improve quantification accuracy

    PubMed Central

    2017-01-01

    Unique Molecular Identifiers (UMIs) are random oligonucleotide barcodes that are increasingly used in high-throughput sequencing experiments. Through a UMI, identical copies arising from distinct molecules can be distinguished from those arising through PCR amplification of the same molecule. However, bioinformatic methods to leverage the information from UMIs have yet to be formalized. In particular, sequencing errors in the UMI sequence are often ignored or else resolved in an ad hoc manner. We show that errors in the UMI sequence are common and introduce network-based methods to account for these errors when identifying PCR duplicates. Using these methods, we demonstrate improved quantification accuracy both under simulated conditions and real iCLIP and single-cell RNA-seq data sets. Reproducibility between iCLIP replicates and single-cell RNA-seq clustering are both improved using our proposed network-based method, demonstrating the value of properly accounting for errors in UMIs. These methods are implemented in the open source UMI-tools software package. PMID:28100584
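
    The sketch below illustrates one network-based grouping strategy in the spirit of the methods described (connecting UMIs that differ by a single base when their counts are consistent with a sequencing error); it is not the UMI-tools implementation, and the count threshold used is an assumption.

```python
from itertools import combinations

# Minimal sketch: UMIs observed at the same mapping position are connected when
# they differ by one base and the more abundant UMI has roughly twice the count
# of the less abundant one; connected groups are then collapsed to one molecule.

def hamming(a: str, b: str) -> int:
    return sum(x != y for x, y in zip(a, b))

def count_molecules(umi_counts: dict) -> int:
    umis = sorted(umi_counts, key=umi_counts.get, reverse=True)
    parent = {u: u for u in umis}

    def find(u):
        while parent[u] != u:
            u = parent[u]
        return u

    for a, b in combinations(umis, 2):
        hi, lo = (a, b) if umi_counts[a] >= umi_counts[b] else (b, a)
        if hamming(a, b) == 1 and umi_counts[hi] >= 2 * umi_counts[lo] - 1:
            parent[find(lo)] = find(hi)   # merge the likely-error UMI into its parent

    return len({find(u) for u in umis})

# "AACT" with 1 read is most plausibly a sequencing error of "AACG".
print(count_molecules({"AACG": 100, "AACT": 1, "GTCA": 50}))  # -> 2 molecules
```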

  2. Multi-GNSS signal-in-space range error assessment - Methodology and results

    NASA Astrophysics Data System (ADS)

    Montenbruck, Oliver; Steigenberger, Peter; Hauschild, André

    2018-06-01

    The positioning accuracy of global and regional navigation satellite systems (GNSS/RNSS) depends on a variety of influence factors. For constellation-specific performance analyses it has become common practice to separate a geometry-related quality factor (the dilution of precision, DOP) from the measurement and modeling errors of the individual ranging measurements (known as user equivalent range error, UERE). The latter is further divided into user equipment errors and contributions related to the space and control segment. The present study reviews the fundamental concepts and underlying assumptions of signal-in-space range error (SISRE) analyses and presents a harmonized framework for multi-GNSS performance monitoring based on the comparison of broadcast and precise ephemerides. The implications of inconsistent geometric reference points, non-common time systems, and signal-specific range biases are analyzed, and strategies for coping with these issues in the definition and computation of SIS range errors are developed. The presented concepts are, furthermore, applied to current navigation satellite systems, and representative results are presented along with a discussion of constellation-specific problems in their determination. Based on data for the January to December 2017 time frame, representative global average root-mean-square (RMS) SISRE values of 0.2 m, 0.6 m, 1 m, and 2 m are obtained for Galileo, GPS, BeiDou-2, and GLONASS, respectively. Roughly two times larger values apply for the corresponding 95th-percentile values. Overall, the study contributes to a better understanding and harmonization of multi-GNSS SISRE analyses and their use as key performance indicators for the various constellations.
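
    For readers who want to reproduce a global-average SISRE number, the sketch below uses a commonly cited combination of radial, along-track, cross-track, and clock differences between broadcast and precise ephemerides; the weights shown are approximate GPS-altitude values and are assumptions here, not the exact coefficients used in the study.

```python
import numpy as np

# Minimal sketch of a global-average SIS range error combination, assuming the
# orbit error is already expressed in radial/along-track/cross-track components
# and the clock error is in metres. Weights depend on constellation altitude.

W_RADIAL = 0.98          # radial orbit error contribution (approx., GPS altitude)
W_ALONG_CROSS = 1.0 / 7  # along/cross-track contribution (approx., GPS altitude)

def sisre(d_radial, d_along, d_cross, d_clock):
    """All inputs in metres; returns the signal-in-space range error in metres."""
    return np.sqrt((W_RADIAL * d_radial - d_clock) ** 2
                   + W_ALONG_CROSS ** 2 * (d_along ** 2 + d_cross ** 2))

# Example: 0.2 m radial, 1.0 m along-track, 0.5 m cross-track, 0.1 m clock error.
print(f"SISRE = {sisre(0.2, 1.0, 0.5, 0.1):.3f} m")
```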

  3. A new model integrating short- and long-term aging of copper added to soils

    PubMed Central

    Zeng, Saiqi; Li, Jumei; Wei, Dongpu

    2017-01-01

    Aging refers to the processes by which the bioavailability/toxicity, isotopic exchangeability, and extractability of metals added to soils decline over time. We studied the characteristics of the aging process in copper (Cu) added to soils and the factors that affect this process. We then developed a semi-mechanistic model to predict the lability of Cu during the aging process, describing the diffusion process with the complementary error function. In previous studies, two semi-mechanistic models to separately predict short-term and long-term aging of Cu added to soils were developed with individual descriptions of the diffusion process. In the short-term model, the diffusion process was linearly related to the square root of incubation time (t1/2), and in the long-term model, the diffusion process was linearly related to the natural logarithm of incubation time (lnt). Each model could predict the short-term or long-term aging process separately, but neither could predict both with a single model. By analyzing and combining the two models, we found that the short- and long-term behaviors of the diffusion process could be described adequately using the complementary error function. The effect of temperature on the diffusion process was incorporated in this model as well. The model can predict the aging process continuously based on four factors: soil pH, incubation time, soil organic matter content, and temperature. PMID:28820888

  4. Applying axiomatic design to a medication distribution system

    NASA Astrophysics Data System (ADS)

    Raguini, Pepito B.

    As the need to minimize medication errors drives many medical facilities to come up with robust solutions to the most common errors affecting patient safety, these hospitals would be wise to put a concerted effort into finding methodologies that can facilitate an optimized medication distribution system. If a hospital's upper management is looking for an optimization method that is an ideal fit, it is just as important that the right tool be selected for the application at hand. In the present work, we propose the application of Axiomatic Design (AD), a process that focuses on the generation and selection of functional requirements to meet the customer needs for product and/or process design. The appeal of the axiomatic approach is that it provides both a formal design process and a set of technical coefficients for meeting the customer's needs. Thus, AD offers a strategy for the effective integration of people, design methods, design tools, and design data. We therefore propose applying the AD methodology to medical applications with the main objective of allowing nurses the opportunity to provide cost-effective delivery of medications to inpatients, thereby improving the quality of patient care. The AD methodology will be implemented through the use of focused stores, where medications can be readily stored and conveniently located near patients, as well as a mobile apparatus commonly used by hospitals that can also store medications, the medication cart. Moreover, a robust methodology called the focused store methodology will be introduced and developed for both the uncapacitated and capacitated case studies, which will set up an appropriate AD framework and design problem for a medication distribution case study.

  5. Neutrinos help reconcile Planck measurements with the local universe.

    PubMed

    Wyman, Mark; Rudd, Douglas H; Vanderveld, R Ali; Hu, Wayne

    2014-02-07

    Current measurements of the low and high redshift Universe are in tension if we restrict ourselves to the standard six-parameter model of flat ΛCDM. This tension has two parts. First, the Planck satellite data suggest a higher normalization of matter perturbations than local measurements of galaxy clusters. Second, the expansion rate of the Universe today, H0, derived from local distance-redshift measurements is significantly higher than that inferred using the acoustic scale in galaxy surveys and the Planck data as a standard ruler. The addition of a sterile neutrino species changes the acoustic scale and brings the two into agreement; meanwhile, adding mass to the active neutrinos or to a sterile neutrino can suppress the growth of structure, bringing the cluster data into better concordance as well. For our fiducial data set combination, with statistical errors for clusters, a model with a massive sterile neutrino shows 3.5σ evidence for a nonzero mass and an even stronger rejection of the minimal model. A model with massive active neutrinos and a massless sterile neutrino is similarly preferred. An eV-scale sterile neutrino mass--of interest for short baseline and reactor anomalies--is well within the allowed range. We caution that (i) unknown astrophysical systematic errors in any of the data sets could weaken this conclusion, but they would need to be several times the known errors to eliminate the tensions entirely; (ii) the results we find are at some variance with analyses that do not include cluster measurements; and (iii) some tension remains among the data sets even when new neutrino physics is included.

  6. Detecting trends in raptor counts: power and type I error rates of various statistical tests

    USGS Publications Warehouse

    Hatfield, J.S.; Gould, W.R.; Hoover, B.A.; Fuller, M.R.; Lindquist, E.L.

    1996-01-01

    We conducted simulations that estimated power and type I error rates of statistical tests for detecting trends in raptor population count data collected from a single monitoring site. Results of the simulations were used to help analyze count data of bald eagles (Haliaeetus leucocephalus) from 7 national forests in Michigan, Minnesota, and Wisconsin during 1980-1989. Seven statistical tests were evaluated, including simple linear regression on the log scale and linear regression with a permutation test. Using 1,000 replications each, we simulated n = 10 and n = 50 years of count data and trends ranging from -5 to 5% change/year. We evaluated the tests at 3 critical levels (alpha = 0.01, 0.05, and 0.10) for both upper- and lower-tailed tests. Exponential count data were simulated by adding sampling error with a coefficient of variation of 40% from either a log-normal or autocorrelated log-normal distribution. Not surprisingly, tests performed with 50 years of data were much more powerful than tests with 10 years of data. Positive autocorrelation inflated alpha-levels upward from their nominal levels, making the tests less conservative and more likely to reject the null hypothesis of no trend. Of the tests studied, Cox and Stuart's test and Pollard's test clearly had lower power than the others. Surprisingly, the linear regression t-test, Collins' linear regression permutation test, and the nonparametric Lehmann's and Mann's tests all had similar power in our simulations. Analyses of the count data suggested that bald eagles had increasing trends on at least 2 of the 7 national forests during 1980-1989.
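
    The following sketch reproduces the flavor of such a power simulation (exponential trend, lognormal sampling error with a 40% coefficient of variation, log-scale linear regression); the replicate count, the handling of autocorrelation, and the specific test are simplifications relative to the original study.

```python
import numpy as np
from scipy import stats

# Minimal sketch of a trend-detection power simulation on simulated count data.

def power(trend_pct=5.0, n_years=10, cv=0.40, alpha=0.05, n_reps=1000, seed=1):
    rng = np.random.default_rng(seed)
    years = np.arange(n_years)
    expected = 100.0 * (1.0 + trend_pct / 100.0) ** years   # exponential trend
    sigma = np.sqrt(np.log(1.0 + cv ** 2))                  # lognormal sigma for the given CV
    rejections = 0
    for _ in range(n_reps):
        counts = expected * rng.lognormal(-0.5 * sigma ** 2, sigma, n_years)
        slope, _, _, p, _ = stats.linregress(years, np.log(counts))
        rejections += (p / 2 < alpha) and (slope > 0)        # upper-tailed test on the slope
    return rejections / n_reps

print("power, +5%/yr over 10 yr:", power(5.0, 10))
print("power, +5%/yr over 50 yr:", power(5.0, 50))
```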

  7. Synthesis and analysis of precise spaceborne laser ranging systems, volume 1. [link analysis

    NASA Technical Reports Server (NTRS)

    Paddon, E. A.

    1977-01-01

    Measurement accuracy goals of 2 cm rms range estimation error and 0.003 cm/sec rms range rate estimation error, with no more than 1 cm (range) static bias error, are requirements for laser measurement systems to be used in planned space-based earth physics investigations. Constraints and parameters were defined for links between a high altitude transmit/receive satellite (HATRS) and one of three targets: a passive low altitude target satellite (LATS), an active low altitude target, and a ground-based target, as well as for operations with a primary transmit/receive terminal intended to be carried as a Shuttle payload in conjunction with the Spacelab program.

  8. On the use of kinetic energy preserving DG-schemes for large eddy simulation

    NASA Astrophysics Data System (ADS)

    Flad, David; Gassner, Gregor

    2017-12-01

    Recently, element based high order methods such as Discontinuous Galerkin (DG) methods and the closely related flux reconstruction (FR) schemes have become popular for compressible large eddy simulation (LES). Element based high order methods with Riemann solver based interface numerical flux functions offer an interesting dispersion-dissipation behavior for multi-scale problems: dispersion errors are very low for a broad range of scales, while dissipation errors are very low for well resolved scales and very high for scales close to the Nyquist cutoff. In some sense, the inherent numerical dissipation caused by the interface Riemann solver acts as a filter of high frequency solution components. This observation motivates the trend that element based high order methods with Riemann solvers are used without an explicit LES model added. Only the high frequency inherent dissipation caused by the Riemann solver at the element interfaces is used to account for the missing sub-grid scale dissipation. Due to under-resolution of vortex-dominated structures typical of LES-type setups, element based high order methods suffer from stability issues caused by aliasing errors of the non-linear flux terms. A very common strategy to fight these aliasing issues (and instabilities) is so-called polynomial de-aliasing, where interpolation is exchanged with projection based on an increased number of quadrature points. In this paper, we start with this common no-model or implicit LES (iLES) DG approach with polynomial de-aliasing and Riemann solver dissipation and review its capabilities and limitations. We find that the strategy gives excellent results, but only when the resolution is such that about 40% of the dissipation is resolved. For the more realistic, coarser resolutions used in classical LES, e.g. of industrial applications, the iLES DG strategy becomes quite inaccurate. We show that there is no obvious fix to this strategy, as adding, for instance, a sub-grid-scale model on top doesn't change much or, in the worst case, decreases the fidelity even more. Finally, the core of this work is a novel LES strategy based on split form DG methods that are kinetic energy preserving. The scheme offers excellent stability with full control over the amount and shape of the added artificial dissipation. This premise is the main idea of the work, and we will assess the LES capabilities of the novel split form DG approach when applied to shock-free, moderate Mach number turbulence. We will demonstrate that the novel DG LES strategy offers accuracy similar to the iLES methodology for well resolved cases, but strongly increases fidelity for more realistic coarse resolutions.

  9. Navigated total knee arthroplasty: is it error-free?

    PubMed

    Chua, Kerk Hsiang Zackary; Chen, Yongsheng; Lingaraj, Krishna

    2014-03-01

    The aim of this study was to determine whether errors do occur in navigated total knee arthroplasty (TKAs) and to study whether errors in bone resection or implantation contribute to these errors. A series of 20 TKAs was studied using computer navigation. The coronal and sagittal alignments of the femoral and tibial cutting guides, the coronal and sagittal alignments of the final tibial implant and the coronal alignment of the final femoral implant were compared with that of the respective bone resections. To determine the post-implantation mechanical alignment of the limb, the coronal alignment of the femoral and tibial implants was combined. The median deviation between the femoral cutting guide and bone resection was 0° (range -0.5° to +0.5°) in the coronal plane and 1.0° (range -2.0° to +1.0°) in the sagittal plane. The median deviation between the tibial cutting guide and bone resection was 0.5° (range -1.0° to +1.5°) in the coronal plane and 1.0° (range -1.0° to +3.5°) in the sagittal plane. The median deviation between the femoral bone resection and the final implant was 0.25° (range -2.0° to 3.0°) in the coronal plane. The median deviation between the tibial bone resection and the final implant was 0.75° (range -3.0° to +1.5°) in the coronal plane and 1.75° (range -4.0° to +2.0°) in the sagittal plane. The median post-implantation mechanical alignment of the limb was 0.25° (range -3.0° to +2.0°). When navigation is used only to guide the positioning of the cutting jig, errors may arise in the manual, non-navigated steps of the procedure. Our study showed increased cutting errors in the sagittal plane for both the femur and the tibia, and following implantation, the greatest error was seen in the sagittal alignment of the tibial component. Computer navigation should be used not only to guide the positioning of the cutting jig, but also to check the bone resection and implant position during TKA. IV.

  10. External cavity diode laser setup with two interference filters

    NASA Astrophysics Data System (ADS)

    Martin, Alexander; Baus, Patrick; Birkl, Gerhard

    2016-12-01

    We present an external cavity diode laser setup using two identical, commercially available interference filters operated in the blue wavelength range around 450 nm. The combination of the two filters decreases the transmission width, while increasing the edge steepness without a significant reduction in peak transmittance. Due to the broad spectral transmission of these interference filters compared to the internal mode spacing of blue laser diodes, an additional locking scheme, based on Hänsch-Couillaud locking to a cavity, has been added to improve the stability. The laser is stabilized to a line in the tellurium spectrum via saturation spectroscopy, and single-frequency operation for a duration of two days is demonstrated by monitoring the error signal of the lock and the piezo drive compensating the length change of the external resonator due to air pressure variations. Additionally, transmission curves of the filters and the spectra of a sample of diodes are given.

  11. Space-based Doppler lidar sampling strategies: Algorithm development and simulated observation experiments

    NASA Technical Reports Server (NTRS)

    Emmitt, G. D.; Wood, S. A.; Morris, M.

    1990-01-01

    Lidar Atmospheric Wind Sounder (LAWS) Simulation Models (LSM) were developed to evaluate the potential impact of global wind observations on the basic understanding of the Earth's atmosphere and on the predictive skills of current forecast models (GCM and regional scale). Fully integrated top to bottom LAWS Simulation Models for global and regional scale simulations were developed. The algorithm development incorporated the effects of aerosols, water vapor, clouds, terrain, and atmospheric turbulence into the models. Other additions include a new satellite orbiter, signal processor, line of sight uncertainty model, new Multi-Paired Algorithm and wind error analysis code. An atmospheric wind field library containing control fields, meteorological fields, phenomena fields, and new European Center for Medium Range Weather Forecasting (ECMWF) data was also added. The LSM was used to address some key LAWS issues and trades such as accuracy and interpretation of LAWS information, data density, signal strength, cloud obscuration, and temporal data resolution.

  12. An engineered design of a diffractive mask for high precision astrometry

    NASA Astrophysics Data System (ADS)

    Dennison, Kaitlin; Ammons, S. Mark; Garrel, Vincent; Marin, Eduardo; Sivo, Gaetano; Bendek, Eduardo; Guyon, Oliver

    2016-07-01

    AutoCAD, Zemax Optic Studio 15, and Interactive Data Language (IDL) with the Proper Library are used to computationally model and test a diffractive mask (DiM) suitable for use in the Gemini Multi-Conjugate Adaptive Optics System (GeMS) on the Gemini South Telescope. Systematic errors in telescope imagery are produced when the light travels through the adaptive optics system of the telescope. DiM is a transparent, flat optic with a pattern of miniscule dots lithographically applied to it. It is added ahead of the adaptive optics system in the telescope in order to produce diffraction spots that will encode systematic errors in the optics after it. Once these errors are encoded, they can be corrected for. DiM will allow for more accurate measurements in astrometry and thus improve exoplanet detection. The mechanics and physical attributes of the DiM are modeled in AutoCAD. Zemax models the ray propagation of point sources of light through the telescope. IDL and Proper simulate the wavefront and image results of the telescope. Aberrations are added to the Zemax and IDL models to test how the diffraction spots from the DiM change in the final images. Based on the Zemax and IDL results, the diffraction spots are able to encode the systematic aberrations.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Dennison, Kaitlin; Ammons, S. Mark; Garrel, Vincent

    AutoCAD, Zemax Optic Studio 15, and Interactive Data Language (IDL) with the Proper Library are used to computationally model and test a diffractive mask (DiM) suitable for use in the Gemini Multi-Conjugate Adaptive Optics System (GeMS) on the Gemini South Telescope. Systematic errors in telescope imagery are produced when the light travels through the adaptive optics system of the telescope. DiM is a transparent, flat optic with a pattern of miniscule dots lithographically applied to it. It is added ahead of the adaptive optics system in the telescope in order to produce diffraction spots that will encode systematic errors in the optics after it. Once these errors are encoded, they can be corrected for. DiM will allow for more accurate measurements in astrometry and thus improve exoplanet detection. Furthermore, the mechanics and physical attributes of the DiM are modeled in AutoCAD. Zemax models the ray propagation of point sources of light through the telescope. IDL and Proper simulate the wavefront and image results of the telescope. Aberrations are added to the Zemax and IDL models to test how the diffraction spots from the DiM change in the final images. Based on the Zemax and IDL results, the diffraction spots are able to encode the systematic aberrations.

  14. Determination of hydroxyurea in human plasma by HPLC-UV using derivatization with xanthydrol.

    PubMed

    Legrand, Tiphaine; Rakotoson, Marie-Georgine; Galactéros, Frédéric; Bartolucci, Pablo; Hulin, Anne

    2017-10-01

    A simple and rapid high performance liquid chromatography (HPLC) method using ultraviolet (UV) detection was developed to determine hydroxyurea (HU) concentration in plasma samples after derivatization with xanthydrol. Two hundred microliter samples were spiked with methylurea (MeU) as internal standard and proteins were precipitated by adding methanol. Derivatization of HU and MeU was performed immediately by adding 0.02 M xanthydrol and 1.5 M HCl in order to obtain xanthyl derivatives of HU and MeU that can be further separated using HPLC and quantified using UV detection at 240 nm. Separation was achieved using a C18 column with a mobile phase composed of 20 mM ammonium acetate and acetonitrile in gradient elution mode at a flow rate of 1 mL/min. The total analysis time did not exceed 18 min. The method was found linear from 5 to 400 μM and all validation parameters fulfilled the international requirements. Between- and within-run accuracy error ranged from -4.7% to 3.2% and precision was lower than 12.8%. This simple method requires small sample volumes and can be easily implemented in most clinical laboratories to conduct pharmacokinetic studies of HU and to promote its therapeutic monitoring. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Frequency-difference MIT imaging of cerebral haemorrhage with a hemispherical coil array: numerical modelling.

    PubMed

    Zolgharni, M; Griffiths, H; Ledger, P D

    2010-08-01

    The feasibility of detecting a cerebral haemorrhage with a hemispherical MIT coil array consisting of 56 exciter/sensor coils of 10 mm radius and operating at 1 and 10 MHz was investigated. A finite difference method combined with an anatomically realistic head model comprising 12 tissue types was used to simulate the strokes. Frequency-difference images were reconstructed from the modelled data with different levels of added phase noise and two types of a priori boundary errors: a displacement of the head and a size scaling error. The results revealed that a noise level of 3 millidegrees (standard deviation) was adequate for obtaining good visualization of a peripheral stroke (volume approximately 49 ml). The simulations further showed that the displacement error had to be within 3-4 mm and the scaling error within 3-4% so as not to cause unacceptably large artefacts on the images.

  16. Tilt Error in Cryospheric Surface Radiation Measurements at High Latitudes: A Model Study

    NASA Astrophysics Data System (ADS)

    Bogren, W.; Kylling, A.; Burkhart, J. F.

    2015-12-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in-situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response foreoptic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 nm to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.6, 7.7, and 12.8% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo.
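
    A minimal sketch of the direct-beam geometry behind these numbers follows; it treats only the direct component with the sensor tilted toward the sun, so it gives an upper bound slightly larger than the totals quoted above, which include the tilt-insensitive diffuse component.

```python
import numpy as np

# Direct-beam tilt error only: a sensor tilted toward the sun by t degrees at
# solar zenith angle sza sees the beam at incidence (sza - t) instead of sza.

def direct_tilt_error(sza_deg: float, tilt_deg: float) -> float:
    sza, tilt = np.radians(sza_deg), np.radians(tilt_deg)
    return np.cos(sza - tilt) / np.cos(sza) - 1.0

for tilt in (1, 3, 5):
    print(f"tilt {tilt} deg at SZA 60 deg: "
          f"{100 * direct_tilt_error(60, tilt):.1f}% error (direct beam only)")
```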

  17. Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca

    NASA Astrophysics Data System (ADS)

    Matteo, N. A.; Morton, Y. T.

    2010-12-01

    The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.

  18. The sensitivity of patient specific IMRT QC to systematic MLC leaf bank offset errors.

    PubMed

    Rangel, Alejandra; Palte, Gesa; Dunscombe, Peter

    2010-07-01

    Patient specific IMRT QC is performed routinely in many clinics as a safeguard against errors and inaccuracies which may be introduced during the complex planning, data transfer, and delivery phases of this type of treatment. The purpose of this work is to evaluate the feasibility of detecting systematic errors in MLC leaf bank position with patient specific checks. 9 head and neck (H&N) and 14 prostate IMRT beams were delivered using MLC files containing systematic offsets (+/- 1 mm in two banks, +/- 0.5 mm in two banks, and 1 mm in one bank of leaves). The beams were measured using both MAPCHECK (Sun Nuclear Corp., Melbourne, FL) and the aS1000 electronic portal imaging device (Varian Medical Systems, Palo Alto, CA). Comparisons with calculated fields, without offsets, were made using commonly adopted criteria including absolute dose (AD) difference, relative dose difference, distance to agreement (DTA), and the gamma index. The criteria most sensitive to systematic leaf bank offsets were the 3% AD, 3 mm DTA for MAPCHECK and the gamma index with 2% AD and 2 mm DTA for the EPID. The criterion based on the relative dose measurements was the least sensitive to MLC offsets. More highly modulated fields, i.e., H&N, showed greater changes in the percentage of passing points due to systematic MLC inaccuracy than prostate fields. None of the techniques or criteria tested is sufficiently sensitive, with the population of IMRT fields, to detect a systematic MLC offset at a clinically significant level on an individual field. Patient specific QC cannot, therefore, substitute for routine QC of the MLC itself.

  19. Proxy-equation paradigm: A strategy for massively parallel asynchronous computations

    NASA Astrophysics Data System (ADS)

    Mittal, Ankita; Girimaji, Sharath

    2017-09-01

    Massively parallel simulations of transport equation systems call for a paradigm change in algorithm development to achieve efficient scalability. Traditional approaches require time synchronization of processing elements (PEs), which severely restricts scalability. Relaxing the synchronization requirement introduces error and slows down convergence. In this paper, we propose and develop a novel "proxy equation" concept for a general transport equation that (i) tolerates asynchrony with minimal added error, (ii) preserves convergence order, and thus (iii) is expected to scale efficiently on massively parallel machines. The central idea is to modify a priori the transport equation at the PE boundaries to offset asynchrony errors. Proof-of-concept computations are performed using a one-dimensional advection (convection) diffusion equation. The results demonstrate the promise and advantages of the present strategy.

  20. Design and experiment of FBG-based icing monitoring on overhead transmission lines with an improvement trial for windy weather.

    PubMed

    Zhang, Min; Xing, Yimeng; Zhang, Zhiguo; Chen, Qiguan

    2014-12-12

    A scheme for monitoring icing on overhead transmission lines with fiber Bragg grating (FBG) strain sensors is designed and evaluated both theoretically and experimentally. The influences of temperature and wind are considered. The results of field experiments using simulated ice loading on windless days indicate that the scheme is capable of monitoring icing thickness within 0-30 mm with an accuracy of ±1 mm, a load cell error of 0.0308v, a repeatability error of 0.3328v, and a hysteresis error of 0.026%. To improve the measurement during windy weather, a correction factor is added to the effective gravity acceleration, and the absolute FBG strain is replaced by its statistical average.

  1. Incidence of speech recognition errors in the emergency department.

    PubMed

    Goss, Foster R; Zhou, Li; Weiner, Scott G

    2016-09-01

    Physician use of computerized speech recognition (SR) technology has risen in recent years due to its ease of use and efficiency at the point of care. However, error rates between 10 and 23% have been observed, raising concern about the number of errors being entered into the permanent medical record, their impact on quality of care, and the medical liability that may arise. Our aim was to determine the incidence and types of SR errors introduced by this technology in the emergency department (ED). Level 1 emergency department with 42,000 visits/year in a tertiary academic teaching hospital. A random sample of 100 notes dictated by attending emergency physicians (EPs) using SR software was collected from the ED electronic health record between January and June 2012. Two board-certified EPs annotated the notes and conducted error analysis independently. An existing classification schema was adopted to classify errors into eight error types. Critical errors deemed to potentially impact patient care were identified. There were 128 errors in total, or 1.3 errors per note, and 14.8% (n=19) of errors were judged to be critical. 71% of notes contained errors, and 15% contained one or more critical errors. Annunciation errors were the highest at 53.9% (n=69), followed by deletions at 18.0% (n=23) and added words at 11.7% (n=15). Nonsense errors, homonyms and spelling errors were present in 10.9% (n=14), 4.7% (n=6), and 0.8% (n=1) of errors, respectively. There were no suffix or dictionary errors. Inter-annotator agreement was 97.8%. This is the first attempt at classifying speech recognition errors in dictated emergency department notes. Speech recognition errors occur commonly, with annunciation errors being the most frequent. Error rates were comparable to, if not lower than, those in previous studies. 15% of errors were deemed critical, potentially leading to miscommunication that could affect patient care. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  2. Adverse Drug Events and Medication Errors in African Hospitals: A Systematic Review.

    PubMed

    Mekonnen, Alemayehu B; Alhawassi, Tariq M; McLachlan, Andrew J; Brien, Jo-Anne E

    2018-03-01

    Medication errors and adverse drug events are universal problems contributing to patient harm but the magnitude of these problems in Africa remains unclear. The objective of this study was to systematically investigate the literature on the extent of medication errors and adverse drug events, and the factors contributing to medication errors in African hospitals. We searched PubMed, MEDLINE, EMBASE, Web of Science and Global Health databases from inception to 31 August, 2017 and hand searched the reference lists of included studies. Original research studies of any design published in English that investigated adverse drug events and/or medication errors in any patient population in the hospital setting in Africa were included. Descriptive statistics including median and interquartile range were presented. Fifty-one studies were included; of these, 33 focused on medication errors, 15 on adverse drug events, and three studies focused on medication errors and adverse drug events. These studies were conducted in nine (of the 54) African countries. In any patient population, the median (interquartile range) percentage of patients reported to have experienced any suspected adverse drug event at hospital admission was 8.4% (4.5-20.1%), while adverse drug events causing admission were reported in 2.8% (0.7-6.4%) of patients but it was reported that a median of 43.5% (20.0-47.0%) of the adverse drug events were deemed preventable. Similarly, the median mortality rate attributed to adverse drug events was reported to be 0.1% (interquartile range 0.0-0.3%). The most commonly reported types of medication errors were prescribing errors, occurring in a median of 57.4% (interquartile range 22.8-72.8%) of all prescriptions and a median of 15.5% (interquartile range 7.5-50.6%) of the prescriptions evaluated had dosing problems. Major contributing factors for medication errors reported in these studies were individual practitioner factors (e.g. fatigue and inadequate knowledge/training) and environmental factors, such as workplace distraction and high workload. Medication errors in the African healthcare setting are relatively common, and the impact of adverse drug events is substantial but many are preventable. This review supports the design and implementation of preventative strategies targeting the most likely contributing factors.

  3. Performance of an implantable impedance spectroscopy monitor using ZigBee

    NASA Astrophysics Data System (ADS)

    Bogónez-Franco, P.; Bayés-Genís, A.; Rosell, J.; Bragós, R.

    2010-04-01

    This paper presents the characterization measurements of an implantable bioimpedance monitor with ZigBee. Such measurements are done over RC networks, performing short- and long-term measurements, with and without mismatch in the electrodes, and varying the temperature and the RF range. The bioimpedance monitor will be used for organ monitoring through electrical impedance spectroscopy in the 100 Hz - 200 kHz range. The specific application is the study of the viability and evolution of engineered tissue in cardiac regeneration in an experimental protocol with pig models. The bioimpedance monitor includes a ZigBee transceiver to transmit the measured data outside the animal chest. The bioimpedance monitor is based on the 12-bit impedance converter and network analyzer AD5933, improved with an analog front-end that implements a 4-electrode measurement structure and allows small impedances to be measured. In the debugging prototype, the system autonomy exceeds 1 month when a 14-frequency impedance spectrum is acquired every 5 minutes. The receiver side consists of a ZigBee transceiver connected to a PC to process the received data. In the current implementation, the effective range of the RF link was a few centimeters, requiring a range extender placed close to the animal. We have increased it by using an antenna with higher gain. Basic errors in the phantom circuit parameter estimation after model fitting are below 1%.

  4. Wind turbine design codes: A comparison of the structural response

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Buhl, M.L. Jr.; Wright, A.D.; Pierce, K.G.

    2000-03-01

    The National Wind Technology Center (NWTC) of the National Renewable Energy Laboratory is continuing a comparison of several computer codes used in the design and analysis of wind turbines. The second part of this comparison determined how well the programs predict the structural response of wind turbines. In this paper, the authors compare the structural response for four programs: ADAMS, BLADED, FAST_AD, and YawDyn. ADAMS is a commercial, multibody-dynamics code from Mechanical Dynamics, Inc. BLADED is a commercial, performance and structural-response code from Garrad Hassan and Partners Limited. FAST_AD is a structural-response code developed by Oregon State University and the University of Utah for the NWTC. YawDyn is a structural-response code developed by the University of Utah for the NWTC. ADAMS, FAST_AD, and YawDyn use the University of Utah's AeroDyn subroutine package for calculating aerodynamic forces. Although errors were found in all the codes during this study, once they were fixed, the codes agreed surprisingly well for most of the cases and configurations that were evaluated. One unresolved discrepancy between BLADED and the AeroDyn-based codes was when there was blade and/or teeter motion in addition to a large yaw error.

  5. Increasing Linear Dynamic Range of a CMOS Image Sensor

    NASA Technical Reports Server (NTRS)

    Pain, Bedabrata

    2007-01-01

    A generic design and a corresponding operating sequence have been developed for increasing the linear-response dynamic range of a complementary metal oxide/semiconductor (CMOS) image sensor. The design provides for linear calibrated dual-gain pixels that operate at high gain at a low signal level and at low gain at a signal level above a preset threshold. Unlike most prior designs for increasing dynamic range of an image sensor, this design does not entail any increase in noise (including fixed-pattern noise), decrease in responsivity or linearity, or degradation of photometric calibration. The figure is a simplified schematic diagram showing the circuit of one pixel and pertinent parts of its column readout circuitry. The conventional part of the pixel circuit includes a photodiode having a small capacitance, CD. The unconventional part includes an additional larger capacitance, CL, that can be connected to the photodiode via a transfer gate controlled in part by a latch. In the high-gain mode, the signal labeled TSR in the figure is held low through the latch, which also helps to adapt the gain on a pixel-by-pixel basis. Light must be coupled to the pixel through a microlens or by back illumination in order to obtain a high effective fill factor; this is necessary to ensure high quantum efficiency, a loss of which would minimize the efficacy of the dynamic-range-enhancement scheme. Once the level of illumination of the pixel exceeds the threshold, TSR is turned on, causing the transfer gate to conduct, thereby adding CL to the pixel capacitance. The added capacitance reduces the conversion gain, and increases the pixel electron-handling capacity, thereby providing an extension of the dynamic range. By use of an array of comparators also at the bottom of the column, photocharge voltages on sampling capacitors in each column are compared with a reference voltage to determine whether it is necessary to switch from the high-gain to the low-gain mode. Depending upon the built-in offset in each pixel and in each comparator, the point at which the gain change occurs will be different, adding gain-dependent fixed pattern noise in each pixel. The offset, and hence the fixed pattern noise, is eliminated by sampling the pixel readout charge four times by use of four capacitors (instead of two such capacitors as in conventional design) connected to the bottom of the column via electronic switches SHS1, SHR1, SHS2, and SHR2, respectively, corresponding to high and low values of the signals TSR and RST. The samples are combined in an appropriate fashion to cancel offset-induced errors, and provide spurious-free imaging with extended dynamic range.

  6. Solar-System Tests of Gravitational Theories

    NASA Technical Reports Server (NTRS)

    Shapiro, Irwin I.

    2005-01-01

    We are engaged in testing gravitational theory, mainly using observations of objects in the solar system and mainly on the interplanetary scale. Our goal is either to detect departures from the standard model (general relativity) - if any exist within the level of sensitivity of our data - or to support this model by placing tighter bounds on any departure from it. For this project, we have analyzed a combination of observational data with our model of the solar system, including planetary radar ranging, lunar laser ranging, and spacecraft tracking, as well as pulsar timing and pulsar VLBI measurements. In the past year, we have added to our data, primarily lunar laser ranging measurements, but also supplementary data concerning the physical properties of solar-system objects, such as the solar quadrupole moment, planetary masses, and asteroid radii. Because the solar quadrupole moment contributes to the classical precession of planetary perihelia, but with a dependence on distance from the Sun that differs from that of the relativistic precession, it is possible to estimate both effects simultaneously. However, our interest is mainly in the relativistic effect, and we find that imposing a constraint on the quadrupole moment from helioseismology studies gives us a dramatic (about ten-fold) decrease in the standard error of our estimate of the relativistic component of the perihelion advance.

  7. Tilt error in cryospheric surface radiation measurements at high latitudes: a model study

    NASA Astrophysics Data System (ADS)

    Bogren, Wiley Steven; Faulkner Burkhart, John; Kylling, Arve

    2016-03-01

    We have evaluated the magnitude and makeup of error in cryospheric radiation observations due to small sensor misalignment in in situ measurements of solar irradiance. This error is examined through simulation of diffuse and direct irradiance arriving at a detector with a cosine-response fore optic. Emphasis is placed on assessing total error over the solar shortwave spectrum from 250 to 4500 nm, as well as supporting investigation over other relevant shortwave spectral ranges. The total measurement error introduced by sensor tilt is dominated by the direct component. For a typical high-latitude albedo measurement with a solar zenith angle of 60°, a sensor tilted by 1, 3, and 5° can, respectively, introduce up to 2.7, 8.1, and 13.5% error into the measured irradiance and similar errors in the derived albedo. Depending on the daily range of solar azimuth and zenith angles, significant measurement error can persist also in integrated daily irradiance and albedo. Simulations including a cloud layer demonstrate decreasing tilt error with increasing cloud optical depth.

  8. Flight calibration of compensated and uncompensated pitot-static airspeed probes and application of the probes to supersonic cruise vehicles

    NASA Technical Reports Server (NTRS)

    Webb, L. D.; Washington, H. P.

    1972-01-01

    Static pressure position error calibrations for a compensated and an uncompensated XB-70 nose boom pitot static probe were obtained in flight. The methods (Pacer, acceleration-deceleration, and total temperature) used to obtain the position errors over a Mach number range from 0.5 to 3.0 and an altitude range from 25,000 feet to 70,000 feet are discussed. The error calibrations are compared with the position error determined from wind tunnel tests, theoretical analysis, and a standard NACA pitot static probe. Factors which influence position errors, such as angle of attack, Reynolds number, probe tip geometry, static orifice location, and probe shape, are discussed. Also included are examples showing how the uncertainties caused by position errors can affect the inlet controls and vertical altitude separation of a supersonic transport.

  9. Comparison of Agar Dilution, Disk Diffusion, MicroScan, and Vitek Antimicrobial Susceptibility Testing Methods to Broth Microdilution for Detection of Fluoroquinolone-Resistant Isolates of the Family Enterobacteriaceae

    PubMed Central

    Steward, Christine D.; Stocker, Sheila A.; Swenson, Jana M.; O’Hara, Caroline M.; Edwards, Jonathan R.; Gaynes, Robert P.; McGowan, John E.; Tenover, Fred C.

    1999-01-01

    Fluoroquinolone resistance appears to be increasing in many species of bacteria, particularly in those causing nosocomial infections. However, the accuracy of some antimicrobial susceptibility testing methods for detecting fluoroquinolone resistance remains uncertain. Therefore, we compared the accuracy of the results of agar dilution, disk diffusion, MicroScan Walk Away Neg Combo 15 conventional panels, and Vitek GNS-F7 cards to the accuracy of the results of the broth microdilution reference method for detection of ciprofloxacin and ofloxacin resistance in 195 clinical isolates of the family Enterobacteriaceae collected from six U.S. hospitals for a national surveillance project (Project ICARE [Intensive Care Antimicrobial Resistance Epidemiology]). For ciprofloxacin, very major error rates were 0% (disk diffusion and MicroScan), 0.9% (agar dilution), and 2.7% (Vitek), while major error rates ranged from 0% (agar dilution) to 3.7% (MicroScan and Vitek). Minor error rates ranged from 12.3% (agar dilution) to 20.5% (MicroScan). For ofloxacin, no very major errors were observed, and major errors were noted only with MicroScan (3.7% major error rate). Minor error rates ranged from 8.2% (agar dilution) to 18.5% (Vitek). Minor errors for all methods were substantially reduced when results with MICs within ±1 dilution of the broth microdilution reference MIC were excluded from analysis. However, the high number of minor errors by all test systems remains a concern. PMID:9986809
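
    For reference, the sketch below shows how the very major, major, and minor error categories used above are typically tallied from paired interpretive results; the denominator convention (all isolates tested) is an assumption, since published studies sometimes use category-specific denominators.

```python
# Minimal sketch of the standard discrepancy categories, assuming interpretive
# results coded as "S" (susceptible), "I" (intermediate), or "R" (resistant).

def error_rates(test_results, reference_results):
    n = len(test_results)
    very_major = major = minor = 0
    for test, ref in zip(test_results, reference_results):
        if test == "S" and ref == "R":
            very_major += 1          # false susceptibility
        elif test == "R" and ref == "S":
            major += 1               # false resistance
        elif test != ref:
            minor += 1               # discrepancy involving an "I" result
    return {label: 100.0 * count / n for label, count in
            (("very major", very_major), ("major", major), ("minor", minor))}

test = ["S", "R", "I", "S", "R", "S"]
ref  = ["R", "S", "S", "S", "R", "I"]
print(error_rates(test, ref))  # -> roughly 16.7% very major, 16.7% major, 33.3% minor
```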

  10. Effectiveness of Occupational Therapy Interventions to Enhance Occupational Performance for Adults With Alzheimer's Disease and Related Major Neurocognitive Disorders: A Systematic Review.

    PubMed

    Smallfield, Stacy; Heckenlaible, Cindy

    The purpose of this systematic review was to describe the evidence for the effectiveness of interventions designed to establish, modify, and maintain occupations for adults with Alzheimer's disease (AD) and related neurocognitive disorders. Titles and abstracts of 2,597 articles were reviewed, of which 256 were retrieved for full review and 52 met inclusion criteria. U.S. Preventive Services Task Force levels of certainty and grade definitions were used to describe the strength of evidence. Articles were categorized into five themes: occupation-based, sleep, cognitive, physical exercise, and multicomponent interventions. Strong evidence supports the benefits of occupation-based interventions, physical exercise, and error-reduction learning. Occupational therapy practitioners should integrate daily occupations, physical exercise, and error-reduction techniques into the daily routine of adults with AD to enhance occupational performance and delay functional decline. Future research should focus on establishing consensus on types and dosage of exercise and cognitive interventions. Copyright © 2017 by the American Occupational Therapy Association, Inc.

  11. Model Based Verification of Cyber Range Event Environments

    DTIC Science & Technology

    2015-12-10

    Model Based Verification of Cyber Range Event Environments Suresh K. Damodaran MIT Lincoln Laboratory 244 Wood St., Lexington, MA, USA...apply model based verification to cyber range event environment configurations, allowing for the early detection of errors in event environment...Environment Representation (CCER) ontology. We also provide an overview of a methodology to specify verification rules and the corresponding error

  12. Quality of Life and Cost of Care at the End of Life: The Role of Advance Directives

    PubMed Central

    Garrido, Melissa M.; Balboni, Tracy A.; Maciejewski, Paul K.; Bao, Yuhua; Prigerson, Holly G.

    2014-01-01

    Context: Advance directives (ADs) are expected to improve patients’ end-of-life outcomes, but retrospective analyses, surrogate recall of patients’ preferences, and selection bias have hampered efforts to determine ADs’ effects on patient outcomes. Objectives: To examine associations among ADs, quality of life, and estimated costs of care in the week before death. Methods: We used prospective data from interviews of 336 patients with advanced cancer and their caregivers, and analyzed patient baseline interview and caregiver and provider post-mortem evaluation data from the Coping with Cancer study. Cost estimates were from the Healthcare Cost and Utilization Project Nationwide Inpatient Sample and published Medicare payment rates and cost estimates. Outcomes were quality of life (range 0-10) and estimated costs of care received in the week before death. Because patient end-of-life care preferences influence both AD completion and care use, analyses were stratified by preferences regarding heroic end-of-life measures (everything possible to remain alive). Results: Most patients did not want heroic measures (76%). Do-not-resuscitate (DNR) orders were associated with higher quality of life (β=0.75, standard error=0.30, P=0.01) across the entire sample. There were no statistically significant relationships between DNR orders and outcomes among patients when we stratified by patient preference, or between living wills/durable powers of attorney and outcomes in any of the patient groups. Conclusion: The associations between DNR orders and better quality of life in the week before death indicate that documenting preferences against resuscitation in medical orders may be beneficial to many patients. PMID:25498855

  13. Effect of phase errors in stepped-frequency radar systems

    NASA Astrophysics Data System (ADS)

    Vanbrundt, H. E.

    1988-04-01

    Stepped-frequency waveforms are being considered for inverse synthetic aperture radar (ISAR) imaging from ship and airborne platforms and for detailed radar cross section (RCS) measurements of ships and aircraft. These waveforms make it possible to achieve resolutions of 1.0 foot by using existing radar designs and processing technology. One problem not yet fully resolved in using stepped-frequency waveforms for ISAR imaging is the deterioration in signal level caused by random frequency error. Random frequency error of the stepped-frequency source results in reduced peak responses and increased null responses. The resulting reduced signal-to-noise ratio is range dependent. Two of the major concerns addressed in this report are radar range limitations for ISAR and the error in calibration for RCS measurements caused by differences in range between a passive reflector used for an RCS reference and the target to be measured. In addressing these concerns, NOSC developed an analysis to assess the tolerable frequency error in terms of the resulting loss in signal power and signal-to-phase noise.
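
    As a rough illustration of the effect described above (a hedged sketch, not the NOSC analysis), the following Python snippet synthesizes a range profile from a stepped-frequency waveform via an inverse FFT and shows how random phase error at each frequency step lowers the peak response and raises the null (sidelobe) level. The step count, step size, target range, and error level are invented for the example.

    ```python
    import numpy as np

    # Assumed stepped-frequency parameters (illustrative only)
    N = 64            # number of frequency steps
    df = 10e6         # frequency step size (Hz)
    c = 3e8           # speed of light (m/s)
    R = 150.0         # target range (m)

    f = np.arange(N) * df                    # stepped frequencies
    phase_ideal = -4 * np.pi * f * R / c     # two-way propagation phase at each step

    rng = np.random.default_rng(0)
    sigma_phi = 0.3                          # rms random phase error per step (rad), assumed
    phase_err = rng.normal(0.0, sigma_phi, N)

    ideal = np.exp(1j * phase_ideal)
    noisy = np.exp(1j * (phase_ideal + phase_err))

    # Synthesize range profiles (IDFT across the frequency steps)
    profile_ideal = np.abs(np.fft.ifft(ideal))
    profile_noisy = np.abs(np.fft.ifft(noisy))

    print("peak response, ideal:           ", profile_ideal.max())
    print("peak response, with phase error:", profile_noisy.max())
    print("mean null level, ideal:         ", np.sort(profile_ideal)[:-1].mean())
    print("mean null level, with error:    ", np.sort(profile_noisy)[:-1].mean())
    ```

    Averaging over many noise draws shows the loss in peak signal power and the rise in the noise floor that the report quantifies analytically.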

  14. Blunted amygdala functional connectivity during a stress task in alcohol dependent individuals: A pilot study.

    PubMed

    Wade, Natasha E; Padula, Claudia B; Anthenelli, Robert M; Nelson, Erik; Eliassen, James; Lisdahl, Krista M

    2017-12-01

    Scant research has been conducted on neural mechanisms underlying stress processing in individuals with alcohol dependence (AD). We examined neural substrates of stress in AD individuals compared with controls using an fMRI task previously shown to induce stress, assessing amygdala functional connectivity to medial prefrontal cortex (mPFC). For this novel pilot study, 10 abstinent AD individuals and 11 controls completed a modified Trier stress task while undergoing fMRI acquisition. The amygdala was used as a seed region for whole-brain seed-based functional connectivity analysis. After controlling for family-wise error (p = 0.05), there was significantly decreased left and right amygdala connectivity with frontal (specifically mPFC), temporal, parietal, and cerebellar regions. Subjective stress, but not craving, increased from pre- to post-task. This study demonstrated decreased connectivity between the amygdala and regions important for stress and emotional processing in long-term abstinent individuals with AD. These results suggest aberrant stress processing in individuals with AD even after lengthy periods of abstinence.

  15. Effluent composition prediction of a two-stage anaerobic digestion process: machine learning and stoichiometry techniques.

    PubMed

    Alejo, Luz; Atkinson, John; Guzmán-Fierro, Víctor; Roeckel, Marlene

    2018-05-16

    Computational self-adapting methods (Support Vector Machines, SVM) are compared with an analytical method in effluent composition prediction of a two-stage anaerobic digestion (AD) process. Experimental data for the AD of poultry manure were used. The analytical method considers the protein as the only source of ammonia production in AD after degradation. Total ammonia nitrogen (TAN), total solids (TS), chemical oxygen demand (COD), and total volatile solids (TVS) were measured in the influent and effluent of the process. The TAN concentration in the effluent was predicted, this being the most inhibiting and polluting compound in AD. Despite the limited data available, the SVM-based model outperformed the analytical method for the TAN prediction, achieving a relative average error of 15.2% against 43% for the analytical method. Moreover, SVM showed higher prediction accuracy in comparison with Artificial Neural Networks. This result reveals the future promise of SVM for prediction in non-linear and dynamic AD processes.
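
    A minimal sketch of the kind of model compared here (support vector regression predicting effluent TAN from influent measurements), written with scikit-learn on synthetic data; the feature set, kernel, hyperparameters, and data are assumptions rather than the authors' configuration.

    ```python
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVR

    # Synthetic stand-in for influent measurements: TAN, TS, COD, TVS (arbitrary units)
    rng = np.random.default_rng(42)
    X = rng.uniform(0.5, 5.0, size=(200, 4))
    # Hypothetical nonlinear relation generating effluent TAN, plus noise
    y = 0.8 * X[:, 0] + 0.1 * X[:, 2] * X[:, 3] + rng.normal(0, 0.1, 200)

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

    model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))
    model.fit(X_tr, y_tr)

    pred = model.predict(X_te)
    rel_err = np.mean(np.abs(pred - y_te) / np.abs(y_te)) * 100
    print(f"relative average error: {rel_err:.1f}%")
    ```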

  16. Performance Errors in Weight Training and Their Correction.

    ERIC Educational Resources Information Center

    Downing, John H.; Lander, Jeffrey E.

    2002-01-01

    Addresses general performance errors in weight training, also discussing each category of error separately. The paper focuses on frequency and intensity, incorrect training velocities, full range of motion, and symmetrical training. It also examines specific errors related to the bench press, squat, military press, and bent-over and seated row…

  17. Code Properties from Holographic Geometries

    NASA Astrophysics Data System (ADS)

    Pastawski, Fernando; Preskill, John

    2017-04-01

    Almheiri, Dong, and Harlow [J. High Energy Phys. 04 (2015) 163., 10.1007/JHEP04(2015)163] proposed a highly illuminating connection between the AdS /CFT holographic correspondence and operator algebra quantum error correction (OAQEC). Here, we explore this connection further. We derive some general results about OAQEC, as well as results that apply specifically to quantum codes that admit a holographic interpretation. We introduce a new quantity called price, which characterizes the support of a protected logical system, and find constraints on the price and the distance for logical subalgebras of quantum codes. We show that holographic codes defined on bulk manifolds with asymptotically negative curvature exhibit uberholography, meaning that a bulk logical algebra can be supported on a boundary region with a fractal structure. We argue that, for holographic codes defined on bulk manifolds with asymptotically flat or positive curvature, the boundary physics must be highly nonlocal, an observation with potential implications for black holes and for quantum gravity in AdS space at distance scales that are small compared to the AdS curvature radius.

  18. Stability of warped AdS3 vacua of topologically massive gravity

    NASA Astrophysics Data System (ADS)

    Anninos, Dionysios; Esole, Mboyo; Guica, Monica

    2009-10-01

    AdS3 vacua of topologically massive gravity (TMG) have been shown to be perturbatively unstable for all values of the coupling constant except the chiral point μl = 1. We study the possibility that the warped vacua of TMG, which exist for all values of μ, are stable under linearized perturbations. In this paper, we show that spacelike warped AdS3 vacua with Compère-Detournay boundary conditions are indeed stable in the range μl>3. This is precisely the range in which black hole solutions arise as discrete identifications of the warped AdS3 vacuum. The situation somewhat resembles chiral gravity: although negative energy modes do exist, they are all excluded by the boundary conditions, and the perturbative spectrum solely consists of boundary (pure large gauge) gravitons.

  19. Suppression of the Nonlinear Zeeman Effect and Heading Error in Earth-Field-Range Alkali-Vapor Magnetometers.

    PubMed

    Bao, Guzhi; Wickenbrock, Arne; Rochester, Simon; Zhang, Weiping; Budker, Dmitry

    2018-01-19

    The nonlinear Zeeman effect can induce splitting and asymmetries of magnetic-resonance lines in the geophysical magnetic-field range. This is a major source of "heading error" for scalar atomic magnetometers. We demonstrate a method to suppress the nonlinear Zeeman effect and heading error based on spin locking. In an all-optical synchronously pumped magnetometer with separate pump and probe beams, we apply a radio-frequency field which is in phase with the precessing magnetization. This results in the collapse of the multicomponent asymmetric magnetic-resonance line with ∼100  Hz width in the Earth-field range into a single peak with a width of 22 Hz, whose position is largely independent of the orientation of the sensor within a range of orientation angles. The technique is expected to be broadly applicable in practical magnetometry, potentially boosting the sensitivity and accuracy of Earth-surveying magnetometers by increasing the magnetic-resonance amplitude, decreasing its width, and removing the important and limiting heading-error systematic.

  20. Analysis of GRACE Range-rate Residuals with Emphasis on Reprocessed Star-Camera Datasets

    NASA Astrophysics Data System (ADS)

    Goswami, S.; Flury, J.; Naeimi, M.; Bandikova, T.; Guerr, T. M.; Klinger, B.

    2015-12-01

    Since March 2002 the two GRACE satellites orbit the Earth at relatively low altitude. Determination of the gravity field of the Earth, including its temporal variations, from the satellites' orbits and the inter-satellite measurements is the goal of the mission. Yet, the time-variable gravity signal has not been fully exploited. This can be seen better in the computed post-fit range-rate residuals. The errors reflected in the range-rate residuals are due to different sources such as systematic errors, mismodelling errors and tone errors. Here, we analyse the effect of three different star-camera data sets on the post-fit range-rate residuals. On the one hand, we consider the available attitude data, and on the other hand we take two different data sets which have been reprocessed at the Institute of Geodesy, Hannover, and the Institute of Theoretical Geodesy and Satellite Geodesy, TU Graz, Austria, respectively. The differences in the range-rate residuals computed from the different attitude datasets are then analyzed in this study. Details will be given and results will be discussed.

  1. Usual intake of added sugars and lipid profiles among the U.S. adolescents: National Health and Nutrition Examination Survey, 2005-2010.

    PubMed

    Zhang, Zefeng; Gillespie, Cathleen; Welsh, Jean A; Hu, Frank B; Yang, Quanhe

    2015-03-01

    Although studies suggest that higher consumption of added sugars is associated with cardiovascular risk factors in adolescents, none have adjusted for measurement errors or examined its association with the risk of dyslipidemia. We analyzed data of 4,047 adolescents aged 12-19 years from the 2005-2010 National Health and Nutrition Examination Survey, a nationally representative, cross-sectional survey. We estimated the usual percentage of calories (%kcal) from added sugars using up to two 24-hour dietary recalls and the National Cancer Institute method to account for measurement error. The average usual %kcal from added sugars was 16.0%. Most adolescents (88.0%) had usual intake of ≥10% of total energy, and 5.5% had usual intake of ≥25% of total energy. After adjustment for potential confounders, usual %kcal from added sugars was inversely associated with high-density lipoprotein (HDL) and positively associated with triglycerides (TGs), TG-to-HDL ratio, and total cholesterol (TC) to HDL ratio. Comparing the lowest and highest quintiles of intake, HDLs were 49.5 (95% confidence interval [CI], 47.4-51.6) and 46.4 mg/dL (95% CI, 45.2-47.6; p = .009), TGs were 85.6 (95% CI, 75.5-95.6) and 101.2 mg/dL (95% CI, 88.7-113.8; p = .037), TG to HDL ratios were 2.28 (95% CI, 1.84-2.70) and 2.73 (95% CI, 2.11-3.32; p = .017), and TC to HDL ratios were 3.41 (95% CI, 3.03-3.79) and 3.70 (95% CI, 3.24-4.15; p = .028), respectively. Comparing the highest and lowest quintiles of intake, adjusted odds ratio of dyslipidemia was 1.41 (95% CI, 1.01-1.95). The patterns were consistent across sex, race/ethnicity, and body mass index subgroups. No association was found for TC, low-density lipoprotein, and non-HDL cholesterol. Most U.S. adolescents consumed more added sugars than recommended for heart health. Usual intake of added sugars was significantly associated with several measures of lipid profiles. Published by Elsevier Inc.

  2. COMPARISON OF LAPAROSCOPIC SKILLS PERFORMANCE USING SINGLE-SITE ACCESS (SSA) DEVICES VS. AN INDEPENDENT-PORT SSA APPROACH

    PubMed Central

    Schill, Matthew R.; Varela, J. Esteban; Frisella, Margaret M.; Brunt, L. Michael

    2015-01-01

    Background We compared performance of validated laparoscopic tasks on four commercially available single site access (SSA) access devices (AD) versus an independent port (IP) SSA set-up. Methods A prospective, randomized comparison of laparoscopic skills performance on four AD (GelPOINT™, SILS™ Port, SSL Access System™, TriPort™) and one IP SSA set-up was conducted. Eighteen medical students (2nd–4th year), four surgical residents, and five attending surgeons were trained to proficiency in multi-port laparoscopy using four laparoscopic drills (peg transfer, bean drop, pattern cutting, extracorporeal suturing) in a laparoscopic trainer box. Drills were then performed in random order on each IP-SSA and AD-SSA set-up using straight laparoscopic instruments. Repetitions were timed and errors recorded. Data are mean ± SD, and statistical analysis was by two-way ANOVA with Tukey HSD post-hoc tests. Results Attending surgeons had significantly faster total task times than residents or students (p< 0.001), but the difference between residents and students was NS. Pair-wise comparisons revealed significantly faster total task times for the IP-SSA set-up compared to all four AD-SSA’s within the student group only (p<0.05). Total task times for residents and attending surgeons showed a similar profile, but the differences were NS. When data for the three groups were combined, the total task time was less for the IP-SSA set-up than for each of the four AD-SSA set-ups (p < 0.001). Similarly, the IP-SSA set-up was significantly faster than 3 of 4 AD-SSA set-ups for peg transfer, 3 of 4 for pattern cutting, and 2 of 4 for suturing. No significant differences in error rates between IP-SSA and AD-SSA set-ups were detected. Conclusions When compared to an IP-SSA laparoscopic set-up, single site access devices are associated with longer task performance times in a trainer box model, independent of level of training. Task performance was similar across different SSA devices. PMID:21993938

  3. Bias Reduction and Filter Convergence for Long Range Stereo

    NASA Technical Reports Server (NTRS)

    Sibley, Gabe; Matthies, Larry; Sukhatme, Gaurav

    2005-01-01

    We are concerned here with improving long range stereo by filtering image sequences. Traditionally, measurement errors from stereo camera systems have been approximated as 3-D Gaussians, where the mean is derived by triangulation and the covariance by linearized error propagation. However, there are two problems that arise when filtering such 3-D measurements. First, stereo triangulation suffers from a range dependent statistical bias; when filtering this leads to over-estimating the true range. Second, filtering 3-D measurements derived via linearized error propagation leads to apparent filter divergence; the estimator is biased to under-estimate range. To address the first issue, we examine the statistical behavior of stereo triangulation and show how to remove the bias by series expansion. The solution to the second problem is to filter with image coordinates as measurements instead of triangulated 3-D coordinates.
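
    The range-dependent triangulation bias mentioned above can be reproduced with a short Monte Carlo experiment: range is recovered as Z = f·B/d, and zero-mean Gaussian noise on the disparity d yields a mean range estimate biased high, increasingly so at long range. The camera parameters and noise level below are illustrative assumptions, not those of the paper.

    ```python
    import numpy as np

    f_px = 800.0      # focal length in pixels (assumed)
    B = 0.3           # stereo baseline in metres (assumed)
    sigma_d = 0.3     # disparity noise, pixels (assumed)
    rng = np.random.default_rng(1)

    for Z_true in (10.0, 30.0, 60.0):
        d_true = f_px * B / Z_true                       # true disparity
        d_meas = d_true + rng.normal(0, sigma_d, 100_000)
        d_meas = d_meas[d_meas > 0]                      # discard non-physical samples
        Z_est = f_px * B / d_meas                        # triangulated ranges
        print(f"Z_true = {Z_true:5.1f} m   mean(Z_est) = {Z_est.mean():6.2f} m   "
              f"bias = {Z_est.mean() - Z_true:+.2f} m")
    ```

    The growing positive bias is the effect the paper removes by series expansion; filtering directly in image coordinates, as the paper proposes for the second problem, sidesteps the linearized 3-D error propagation altogether.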

  4. SU-F-J-206: Systematic Evaluation of the Minimum Detectable Shift Using a Range- Finding Camera

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Platt, M; Platt, M; Lamba, M

    2016-06-15

    Purpose: The robotic table used for patient alignment in proton therapy is calibrated only at commissioning under well-defined conditions and table shifts may vary over time and with differing conditions. The purpose of this study is to systematically investigate minimum detectable shifts using a time-of-flight (TOF) range-finding camera for table position feedback. Methods: A TOF camera was used to acquire one hundred 424 × 512 range images from a flat surface before and after known shifts. Range was assigned by averaging central regions of the image across multiple images. Depth resolution was determined by evaluating the difference between the actual shift of the surface and the measured shift. Depth resolution was evaluated for number of images averaged, area of sensor over which depth was averaged, distance from camera to surface, central versus peripheral image regions, and angle of surface relative to camera. Results: For one to one thousand images with a shift of one millimeter the range in error was 0.852 ± 0.27 mm to 0.004 ± 0.01 mm (95% C.I.). For varying regions of the camera sensor the range in error was 0.02 ± 0.05 mm to 0.47 ± 0.04 mm. The following results are for 10 image averages. For areas ranging from one pixel to 9 × 9 pixels the range in error was 0.15 ± 0.09 to 0.29 ± 0.15 mm (1σ). For distances ranging from two to four meters the range in error was 0.15 ± 0.09 to 0.28 ± 0.15 mm. For an angle of incidence between thirty degrees and ninety degrees the average range in error was 0.11 ± 0.08 to 0.17 ± 0.09 mm. Conclusion: It is feasible to use a TOF camera for measuring shifts in flat surfaces under clinically relevant conditions with submillimeter precision.

  5. Processing arctic eddy-flux data using a simple carbon-exchange model embedded in the ensemble Kalman filter.

    PubMed

    Rastetter, Edward B; Williams, Mathew; Griffin, Kevin L; Kwiatkowski, Bonnie L; Tomasky, Gabrielle; Potosnak, Mark J; Stoy, Paul C; Shaver, Gaius R; Stieglitz, Marc; Hobbie, John E; Kling, George W

    2010-07-01

    Continuous time-series estimates of net ecosystem carbon exchange (NEE) are routinely made using eddy covariance techniques. Identifying and compensating for errors in the NEE time series can be automated using a signal processing filter like the ensemble Kalman filter (EnKF). The EnKF compares each measurement in the time series to a model prediction and updates the NEE estimate by weighting the measurement and model prediction relative to a specified measurement error estimate and an estimate of the model-prediction error that is continuously updated based on model predictions of earlier measurements in the time series. Because of the covariance among model variables, the EnKF can also update estimates of variables for which there is no direct measurement. The resulting estimates evolve through time, enabling the EnKF to be used to estimate dynamic variables like changes in leaf phenology. The evolving estimates can also serve as a means to test the embedded model and reconcile persistent deviations between observations and model predictions. We embedded a simple arctic NEE model into the EnKF and filtered data from an eddy covariance tower located in tussock tundra on the northern foothills of the Brooks Range in northern Alaska, USA. The model predicts NEE based only on leaf area, irradiance, and temperature and has been well corroborated for all the major vegetation types in the Low Arctic using chamber-based data. This is the first application of the model to eddy covariance data. We modified the EnKF by adding an adaptive noise estimator that provides a feedback between persistent model data deviations and the noise added to the ensemble of Monte Carlo simulations in the EnKF. We also ran the EnKF with both a specified leaf-area trajectory and with the EnKF sequentially recalibrating leaf-area estimates to compensate for persistent model-data deviations. When used together, adaptive noise estimation and sequential recalibration substantially improved filter performance, but it did not improve performance when used individually. The EnKF estimates of leaf area followed the expected springtime canopy phenology. However, there were also diel fluctuations in the leaf-area estimates; these are a clear indication of a model deficiency possibly related to vapor pressure effects on canopy conductance.
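
    For orientation, here is a minimal sketch of a single ensemble Kalman filter update of the kind used above, on a toy two-variable state (NEE and leaf area) in which only NEE is observed; the dimensions, prior statistics, and observation error are assumptions, not the authors' arctic NEE model.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_ens = 50                                  # ensemble size (assumed)

    # Toy prior ensemble: leaf area is correlated with NEE so that observing NEE
    # also constrains the unobserved leaf area through the ensemble covariance
    nee = rng.normal(2.0, 0.5, n_ens)
    leaf = 0.5 * nee + rng.normal(0.0, 0.1, n_ens)
    ens = np.column_stack([nee, leaf])

    H = np.array([[1.0, 0.0]])                  # observation operator: NEE only
    R = np.array([[0.2 ** 2]])                  # observation-error variance (assumed)
    y_obs = 2.6                                 # one eddy-covariance NEE measurement

    # Ensemble covariance and Kalman gain
    X = ens - ens.mean(axis=0)
    P = X.T @ X / (n_ens - 1)
    K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

    # Perturbed-observation update of every ensemble member
    y_pert = y_obs + rng.normal(0.0, np.sqrt(R[0, 0]), n_ens)
    ens_upd = ens + (y_pert[:, None] - ens @ H.T) @ K.T

    print("prior mean [NEE, leaf area]:    ", ens.mean(axis=0))
    print("posterior mean [NEE, leaf area]:", ens_upd.mean(axis=0))
    ```

    In the study this cycle is repeated through the time series with the embedded NEE model providing the forecast step, plus adaptive noise estimation and sequential recalibration of leaf area.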

  6. Assessing explicit error reporting in the narrative electronic medical record using keyword searching.

    PubMed

    Cao, Hui; Stetson, Peter; Hripcsak, George

    2003-01-01

    Many types of medical errors occur in and outside of hospitals, some of which have very serious consequences and increase cost. Identifying errors is a critical step for managing and preventing them. In this study, we assessed the explicit reporting of medical errors in the electronic record. We used five search terms "mistake," "error," "incorrect," "inadvertent," and "iatrogenic" to survey several sets of narrative reports including discharge summaries, sign-out notes, and outpatient notes from 1991 to 2000. We manually reviewed all the positive cases and identified them based on the reporting of physicians. We identified 222 explicitly reported medical errors. The positive predictive value varied with different keywords. In general, the positive predictive value for each keyword was low, ranging from 3.4 to 24.4%. Therapeutic-related errors were the most common reported errors and these reported therapeutic-related errors were mainly medication errors. Keyword searches combined with manual review indicated some medical errors that were reported in medical records. It had a low sensitivity and a moderate positive predictive value, which varied by search term. Physicians were most likely to record errors in the Hospital Course and History of Present Illness sections of discharge summaries. The reported errors in medical records covered a broad range and were related to several types of care providers as well as non-health care professionals.
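
    A minimal sketch of this kind of keyword screen (not the authors' system): case-insensitive prefix matching of the five search terms against narrative notes, followed by manual review of the hits to compute a positive predictive value. The notes and review labels below are invented.

    ```python
    import re

    KEYWORDS = ["mistake", "error", "incorrect", "inadvertent", "iatrogenic"]
    # Prefix match so that inflected forms ("errors", "inadvertently") are also caught
    pattern = re.compile(r"\b(" + "|".join(KEYWORDS) + r")", re.IGNORECASE)

    # Invented narrative notes; True means a reviewer confirmed an explicitly reported error
    notes = [
        ("Patient inadvertently received a double dose of heparin.", True),
        ("No error in medication administration was identified.", False),
        ("Standard error of the estimate was 2.3 mmHg.", False),
        ("Incorrect insulin dose given on day 2; corrected same day.", True),
    ]

    hits = [(text, label) for text, label in notes if pattern.search(text)]
    true_pos = sum(1 for _, label in hits if label)
    ppv = true_pos / len(hits) if hits else 0.0
    print(f"flagged {len(hits)} of {len(notes)} notes; PPV = {ppv:.0%}")
    ```

    As in the study, the keyword hits still require manual review, since many matches (e.g., "standard error") are not reports of medical errors at all.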

  7. Added sugars in kids' meals from chain restaurants.

    PubMed

    Scourboutakos, Mary J; Semnani-Azad, Zhila; L'Abbé, Mary R

    2016-06-01

    To analyze the added sugars in kids' meals from Canadian chain restaurants in relation to the World Health Organization's proposed sugar recommendation (less than 5% of total daily calories should come from added sugars) and current recommendation (less than 10% of total daily calories should come from added sugars). Total sugar levels were retrieved from the websites of 10 fast-food and 7 sit-down restaurants in 2010. The added sugar levels in 3178 kids' meals from Canadian chain restaurants were calculated in 2014 (in Toronto, Canada) by subtracting all naturally occurring sugars from the total sugar level. The average amount of added sugars in restaurant kids' meals (25 ± 0.36 g) exceeded the WHO's proposed daily recommendation for sugar intake. There was a wide range of added sugar levels in kids' meals ranging from 0 g to 114 g. 50% of meals exceeded the WHO's proposed daily sugar recommendation, and 19% exceeded the WHO's current daily sugar recommendation. There is a wide range of sugar levels in kids' meals from restaurants, and many contain more than a day's worth of sugar.

  8. Digital Analysis and Sorting of Fluorescence Lifetime by Flow Cytometry

    PubMed Central

    Houston, Jessica P.; Naivar, Mark A.; Freyer, James P.

    2010-01-01

    Frequency-domain flow cytometry techniques are combined with modifications to the digital signal processing capabilities of the Open Reconfigurable Cytometric Acquisition System (ORCAS) to analyze fluorescence decay lifetimes and control sorting. Real-time fluorescence lifetime analysis is accomplished by rapidly digitizing correlated, radiofrequency modulated detector signals, implementing Fourier analysis programming with ORCAS’ digital signal processor (DSP) and converting the processed data into standard cytometric list mode data. To systematically test the capabilities of the ORCAS 50 MS/sec analog-to-digital converter (ADC) and our DSP programming, an error analysis was performed using simulated light scatter and fluorescence waveforms (0.5–25 ns simulated lifetime), pulse widths ranging from 2 to 15 µs, and modulation frequencies from 2.5 to 16.667 MHz. The standard deviations of digitally acquired lifetime values ranged from 0.112 to >2 ns, corresponding to errors in actual phase shifts from 0.0142° to 1.6°. The lowest coefficients of variation (<1%) were found for 10-MHz modulated waveforms having pulse widths of 6 µs and simulated lifetimes of 4 ns. Direct comparison of the digital analysis system to a previous analog phase-sensitive flow cytometer demonstrated similar precision and accuracy on measurements of a range of fluorescent microspheres, unstained cells and cells stained with three common fluorophores. Sorting based on fluorescence lifetime was accomplished by adding analog outputs to ORCAS and interfacing with a commercial cell sorter with a radiofrequency modulated solid-state laser. Two populations of fluorescent microspheres with overlapping fluorescence intensities but different lifetimes (2 and 7 ns) were separated to ~98% purity. Overall, the digital signal acquisition and processing methods we introduce present a simple yet robust approach to phase-sensitive measurements in flow cytometry. The ability to simply and inexpensively implement this system on a commercial flow sorter will both allow better dissemination of this technology and better exploit the traditionally underutilized parameter of fluorescence lifetime. PMID:20662090
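
    For context, a hedged sketch of the frequency-domain relation such digital systems exploit: the fluorescence emission is phase-shifted relative to the modulated excitation, and a single-exponential lifetime follows from tau = tan(phi) / (2*pi*f_mod). The waveform parameters below are illustrative assumptions, not the ORCAS configuration.

    ```python
    import numpy as np

    f_mod = 10e6                    # modulation frequency, 10 MHz (assumed)
    fs = 50e6                       # sampling rate, 50 MS/s
    tau_true = 4e-9                 # simulated lifetime, 4 ns
    t = np.arange(0, 6e-6, 1 / fs)  # 6 us event (assumed), an integer number of periods

    phi_true = np.arctan(2 * np.pi * f_mod * tau_true)  # phase lag caused by the lifetime
    excitation = 1 + 0.9 * np.cos(2 * np.pi * f_mod * t)
    emission = 1 + 0.9 * np.cos(2 * np.pi * f_mod * t - phi_true)

    # Single-bin Fourier (lock-in style) phase estimate of each digitized waveform
    ref = np.exp(-2j * np.pi * f_mod * t)
    phi_exc = np.angle(np.sum(excitation * ref))
    phi_em = np.angle(np.sum(emission * ref))

    phi = phi_exc - phi_em                        # measured phase shift
    tau_est = np.tan(phi) / (2 * np.pi * f_mod)   # recovered lifetime
    print(f"recovered lifetime: {tau_est * 1e9:.2f} ns")
    ```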

  9. Author Correction: Geometric constraints during epithelial jamming

    NASA Astrophysics Data System (ADS)

    Atia, Lior; Bi, Dapeng; Sharma, Yasha; Mitchel, Jennifer A.; Gweon, Bomi; Koehler, Stephan A.; DeCamp, Stephen J.; Lan, Bo; Kim, Jae Hun; Hirsch, Rebecca; Pegoraro, Adrian F.; Lee, Kyu Ha; Starr, Jacqueline R.; Weitz, David A.; Martin, Adam C.; Park, Jin-Ah; Butler, James P.; Fredberg, Jeffrey J.

    2018-06-01

    In the first correction to this Article, the authors added James P. Butler and Jeffrey J. Fredberg as equally contributing authors. However, this was in error; the statement should have remained indicating that Lior Atia, Dapeng Bi and Yasha Sharma contributed equally. This has now been corrected.

  10. Appraisals of Negative Divorce Events and Children's Psychological Adjustment.

    ERIC Educational Resources Information Center

    Mazur, Elizabeth; And Others

    Adding to prior literature on adults' and children's appraisals of stressors, this study examined relationships among children's negative cognitive errors regarding hypothetical negative divorce events, positive illusions about those same events, the actual divorce events, and children's post-divorce psychological adjustment. Subjects were 38…

  11. Multi-Sensor Improved Sea Surface Temperature (MISST) for GODAE

    DTIC Science & Technology

    2008-01-01

    ... its methodology to add retrieval error information to the US Navy operational data stream. Quantitative estimates of reliability are added to ... hycom.rsmas.miami.edu/ "POSITIV: Prototype Operational System – ISAR – Temperature Instrumentation for the VOS fleet", CIRA/CSU Joint Hurricane Testbed project

  12. Visual short-term memory binding deficit in familial Alzheimer's disease.

    PubMed

    Liang, Yuying; Pertzov, Yoni; Nicholas, Jennifer M; Henley, Susie M D; Crutch, Sebastian; Woodward, Felix; Leung, Kelvin; Fox, Nick C; Husain, Masud

    2016-05-01

    Long-term episodic memory deficits in Alzheimer's disease (AD) are well characterised but, until recently, short-term memory (STM) function has attracted far less attention. We employed a recently-developed, delayed reproduction task which requires participants to reproduce precisely the remembered location of items they had seen only seconds previously. This paradigm provides not only a continuous measure of localization error in memory, but also an index of relational binding by determining the frequency with which an object is misplaced to the location of one of the other items held in memory. Such binding errors in STM have previously been found on this task to be sensitive to medial temporal lobe (MTL) damage in focal lesion cases. Twenty individuals with pathological mutations in presenilin 1 or amyloid precursor protein genes for familial Alzheimer's disease (FAD) were tested together with 62 healthy controls. Participants were assessed using the delayed reproduction memory task, a standard neuropsychological battery and structural MRI. Overall, FAD mutation carriers were worse than controls for object identity as well as in gross localization memory performance. Moreover, they showed greater misbinding of object identity and location than healthy controls. Thus they would often mislocalize a correctly-identified item to the location of one of the other items held in memory. Significantly, asymptomatic gene carriers - who performed similarly to healthy controls on standard neuropsychological tests - had a specific impairment in object-location binding, despite intact memory for object identity and location. Consistent with the hypothesis that the hippocampus is critically involved in relational binding regardless of memory duration, decreased hippocampal volume across FAD participants was significantly associated with deficits in object-location binding but not with recall precision for object identity or localization. Object-location binding may therefore provide a sensitive cognitive biomarker for MTL dysfunction in a range of diseases including AD. Copyright © 2016. Published by Elsevier Ltd.

  13. Nonlinear analysis and dynamic compensation of stylus scanning measurement with wide range

    NASA Astrophysics Data System (ADS)

    Hui, Heiyang; Liu, Xiaojun; Lu, Wenlong

    2011-12-01

    Surface topography is an important geometrical feature of a workpiece that influences its quality and functions such as friction, wear, lubrication and sealing. Precision measurement of surface topography is fundamental for characterizing and assuring product quality. The stylus scanning technique is a widely used method for surface topography measurement, and it is also regarded as the international standard method for 2-D surface characterization. Usually surface topography, including the primary profile, waviness and roughness, can be measured precisely and efficiently by this method. However, when the stylus scanning method is used to measure curved surface topography, a nonlinear error is unavoidable, because the horizontal position of the actual measured point differs from that of the given sampling point and because the vertical displacement of the stylus tip is transformed nonlinearly into the angular displacement of the stylus arm; this error increases as the measuring range increases. In this paper, a wide-range stylus scanning measurement system based on the cylindrical grating interference principle is constructed, the origins of the nonlinear error are analyzed, an error model is established, and a solution to decrease the nonlinear error is proposed, through which the error in the collected data is dynamically compensated.

  14. Error Analysis of Indirect Broadband Monitoring of Multilayer Optical Coatings using Computer Simulations

    NASA Astrophysics Data System (ADS)

    Semenov, Z. V.; Labusov, V. A.

    2017-11-01

    Results of studying the errors of indirect monitoring by means of computer simulations are reported. The monitoring method is based on measuring spectra of reflection from additional monitoring substrates in a wide spectral range. Special software (Deposition Control Simulator) is developed, which allows one to estimate the influence of the monitoring system parameters (noise of the photodetector array, operating spectral range of the spectrometer and errors of its calibration in terms of wavelengths, drift of the radiation source intensity, and errors in the refractive index of deposited materials) on the random and systematic errors of deposited layer thickness measurements. The direct and inverse problems of multilayer coatings are solved using the OptiReOpt library. Curves of the random and systematic errors of measurements of the deposited layer thickness as functions of the layer thickness are presented for various values of the system parameters. Recommendations are given on using the indirect monitoring method for the purpose of reducing the layer thickness measurement error.

  15. Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada

    USGS Publications Warehouse

    Hess, G.W.; Bohman, L.R.

    1996-01-01

    Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedence levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedence levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedence, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedence values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.
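
    For illustration, a hedged sketch of the regression step described above, using invented data: monthly flows at gaged sites are regressed (in log space) on drainage area and the fraction of the basin above 10,000 feet, and a standard error of estimate in percent is derived from the log-space residuals. None of the numbers correspond to the report's basins.

    ```python
    import numpy as np

    rng = np.random.default_rng(11)
    n = 6                                        # gaged basins (data invented)
    drainage_area = rng.uniform(20, 500, n)      # mi^2
    frac_above_10k = rng.uniform(0.0, 0.3, n)    # fraction of area above 10,000 ft

    # Invented "true" relation used only to generate example 50%-exceedence flows (cfs)
    q50 = (0.05 * drainage_area ** 0.9 * np.exp(4.0 * frac_above_10k)
           * np.exp(rng.normal(0, 0.3, n)))

    # Fit log(Q) = b0 + b1*log(area) + b2*frac_above_10k by ordinary least squares
    X = np.column_stack([np.ones(n), np.log(drainage_area), frac_above_10k])
    y = np.log(q50)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef

    # Approximate conversion of the log-space standard error to percent
    s = np.sqrt(resid @ resid / (n - X.shape[1]))
    se_pct = 100 * np.sqrt(np.exp(s ** 2) - 1)
    print("coefficients (b0, b1, b2):", coef)
    print(f"standard error of estimate: about {se_pct:.0f}%")
    ```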

  16. Status of the NASA GMAO Observing System Simulation Experiment

    NASA Technical Reports Server (NTRS)

    Prive, Nikki C.; Errico, Ronald M.

    2014-01-01

    An Observing System Simulation Experiment (OSSE) is a pure modeling study used when actual observations are too expensive or difficult to obtain. OSSEs are valuable tools for determining the potential impact of new observing systems on numerical weather forecasts and for evaluation of data assimilation systems (DAS). An OSSE has been developed at the NASA Global Modeling and Assimilation Office (GMAO, Errico et al 2013). The GMAO OSSE uses a 13-month integration of the European Centre for Medium-Range Weather Forecasts 2005 operational model at T511/L91 resolution for the Nature Run (NR). Synthetic observations have been updated so that they are based on real observations during the summer of 2013. The emulated observation types include AMSU-A, MHS, IASI, AIRS, and HIRS4 radiance data, GPS-RO, and conventional types including aircraft, rawinsonde, profiler, surface, and satellite winds. The synthetic satellite wind observations are colocated with the NR cloud fields, and the rawinsondes are advected during ascent using the NR wind fields. Data counts for the synthetic observations are matched as closely as possible to real data counts, as shown in Figure 2. Errors are added to the synthetic observations to emulate representativeness and instrument errors. The synthetic errors are calibrated so that the statistics of observation innovation and analysis increments in the OSSE are similar to the same statistics for assimilation of real observations, in an iterative method described by Errico et al (2013). The standard deviations of observation minus forecast (x_o - H(x_b)) are compared for the OSSE and real data in Figure 3. The synthetic errors include both random, uncorrelated errors, and an additional correlated error component for some observational types. Vertically correlated errors are included for conventional sounding data and GPS-RO, and channel correlated errors are introduced to AIRS and IASI (Figure 4). HIRS, AMSU-A, and MHS have a component of horizontally correlated error. The forecast model used by the GMAO OSSE is the Goddard Earth Observing System Model, Version 5 (GEOS-5) with Gridpoint Statistical Interpolation (GSI) DAS. The model version has been updated to v. 5.13.3, corresponding to the current operational model. Forecasts are run on a cube-sphere grid with 180 points along each edge of the cube (approximately 0.5 degree horizontal resolution) with 72 vertical levels. The DAS is cycled at 6-hour intervals, with 240 hour forecasts launched daily at 0000 UTC. Evaluation of the forecasting skill for July and August is currently underway. Prior versions of the GMAO OSSE have been found to have greater forecasting skill than real world forecasts. It is anticipated that similar forecast skill will be found in the updated OSSE.

  17. Enhanced orbit determination filter sensitivity analysis: Error budget development

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Burkhart, P. D.

    1994-01-01

    An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.

  18. New main reflector, subreflector and dual chamber concepts for compact range applications

    NASA Technical Reports Server (NTRS)

    Pistorius, C. W. I.; Burnside, W. D.

    1987-01-01

    A compact range is a facility used for the measurement of antenna radiation and target scattering problems. Most presently available parabolic reflectors do not produce ideal uniform plane waves in the target zone. Design improvements are suggested to reduce the amplitude taper, ripple and cross polarization errors. The ripple caused by diffractions from the reflector edges can be reduced by adding blended rolled edges and shaping the edge contour. Since the reflected edge continues smoothly from the parabola onto the rolled surface, rather than being abruptly terminated, the discontinuity in the reflected field is reduced which results in weaker diffracted fields. This is done by blending the rolled edges from the parabola into an ellipse. An algorithm which enables one to design optimum blended rolled edges was developed that is based on an analysis of the continuity of the surface radius of curvature and its derivatives across the junction. Furthermore, a concave edge contour results in a divergent diffracted ray pattern and hence less stray energy in the target zone. Design equations for three-dimensional reflectors are given. Various examples were analyzed using a new physical optics method which eliminates the effects of the false scattering centers on the incident shadow boundaries. A Gregorian subreflector system, in which both the subreflector and feed axes are tilted, results in a substantial reduction in the amplitude taper and cross polarization errors. A dual chamber configuration is proposed to eliminate the effects of diffraction from the subreflector and spillover from the feed. A computationally efficient technique, based on ray tracing and aperture integration, was developed to analyze the scattering from a lossy dielectric slab with a wedge termination.

  19. Ultrasound-assisted low-density solvent dispersive liquid-liquid microextraction for the determination of 4 designer benzodiazepines in urine samples by gas chromatography-triple quadrupole mass spectrometry.

    PubMed

    Meng, Liang; Zhu, Binling; Zheng, Kefang; Fu, Shanlin

    2017-05-15

    A novel microextraction technique based on ultrasound-assisted low-density solvent dispersive liquid-liquid microextraction (UA-LDS-DLLME) has been applied for the determination of 4 designer benzodiazepines (phenazepam, diclazepam, flubromazepam and etizolam) in urine samples by gas chromatography-triple quadrupole mass spectrometry (GC-QQQ-MS). Ethyl acetate (168 μL) was added into the urine samples after adjusting the pH to 11.3. The samples were sonicated in an ultrasonic bath for 5.5 min to form a cloudy suspension. After centrifugation at 10,000 rpm for 3 min, the supernatant extractant was withdrawn and injected into the GC-QQQ-MS for analysis. Parameters affecting the extraction efficiency have been investigated and optimized by means of single factor experiments and response surface methodology (Box-Behnken design). Under the optimum extraction conditions, recoveries of 73.8-85.5% were obtained for all analytes. The analytical method was linear for all analytes in the range from 0.003 to 10 μg/mL with correlation coefficients ranging from 0.9978 to 0.9990. The LODs were estimated to be 1-3 ng/mL. The accuracy (expressed as mean relative error, MRE) was within ±5.8% and the precision (expressed as relative standard deviation, RSD) was less than 5.9%. The UA-LDS-DLLME technique has the advantage of a shorter extraction time and is suitable for the simultaneous pretreatment of samples in batches. The combination of UA-LDS-DLLME with GC-QQQ-MS offers an alternative analytical approach for the sensitive detection of these designer benzodiazepines in urine matrix for clinical and medico-legal purposes. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Systematic Poisoning Attacks on and Defenses for Machine Learning in Healthcare.

    PubMed

    Mozaffari-Kermani, Mehran; Sur-Kolay, Susmita; Raghunathan, Anand; Jha, Niraj K

    2015-11-01

    Machine learning is being used in a wide range of application domains to discover patterns in large datasets. Increasingly, the results of machine learning drive critical decisions in applications related to healthcare and biomedicine. Such health-related applications are often sensitive, and thus, any security breach would be catastrophic. Naturally, the integrity of the results computed by machine learning is of great importance. Recent research has shown that some machine-learning algorithms can be compromised by augmenting their training datasets with malicious data, leading to a new class of attacks called poisoning attacks. Hindering a correct diagnosis may have life-threatening consequences and erode trust; conversely, a false positive diagnosis may not only prompt users to distrust the machine-learning algorithm, and even abandon the entire system, but may also cause patient distress. In this paper, we present a systematic, algorithm-independent approach for mounting poisoning attacks across a wide range of machine-learning algorithms and healthcare datasets. The proposed attack procedure generates input data, which, when added to the training set, can either cause the results of machine learning to have targeted errors (e.g., increase the likelihood of classification into a specific class), or simply introduce arbitrary errors (incorrect classification). These attacks may be applied to both fixed and evolving datasets. They can be applied even when only statistics of the training dataset are available or, in some cases, even without access to the training dataset, although at a lower efficacy. We establish the effectiveness of the proposed attacks using a suite of six machine-learning algorithms and five healthcare datasets. Finally, we present countermeasures against the proposed generic attacks that are based on tracking and detecting deviations in various accuracy metrics, and benchmark their effectiveness.

  1. Design and Experiment of FBG-Based Icing Monitoring on Overhead Transmission Lines with an Improvement Trial for Windy Weather

    PubMed Central

    Zhang, Min; Xing, Yimeng; Zhang, Zhiguo; Chen, Qiguan

    2014-01-01

    A scheme for monitoring icing on overhead transmission lines with fiber Bragg grating (FBG) strain sensors is designed and evaluated both theoretically and experimentally. The influences of temperature and wind are considered. The results of field experiments using simulated ice loading on windless days indicate that the scheme is capable of monitoring the icing thickness within 0–30 mm with an accuracy of ±1 mm, a load cell error of 0.0308%, a repeatability error of 0.3328% and a hysteresis error of 0.026%. To improve the measurement during windy weather, a correction factor is added to the effective gravity acceleration, and the absolute FBG strain is replaced by its statistical average. PMID:25615733

  2. Source Memory for Self and Other in Patients With Mild Cognitive Impairment due to Alzheimer’s Disease

    PubMed Central

    Deason, Rebecca G.; Budson, Andrew E.; Gutchess, Angela H.

    2016-01-01

    Objectives. The present study examined the role of enactment in source memory in a cognitively impaired population. As seen in healthy older adults, it was predicted that source memory in people with mild cognitive impairment due to Alzheimer’s disease (MCI-AD) would benefit from the self-reference aspect of enactment. Method. Seventeen participants with MCI-AD and 18 controls worked in small groups to pack a picnic basket and suitcase and were later tested for their source memory for each item. Results. For item memory, self-referencing improved corrected recognition scores for both MCI-AD and control participants. The MCI-AD group did not demonstrate the same benefit as controls in correct source memory for self-related items. However, those with MCI-AD were relatively less likely to misattribute new items to the self and more likely to misattribute new items to others when committing errors, compared with controls. Discussion. The enactment effect and self-referencing did not enhance accurate source memory more than other referencing for patients with MCI-AD. However, people with MCI-AD benefited in item memory and source memory, being less likely to falsely claim new items as their own, indicating some self-reference benefit occurs for people with MCI-AD. PMID:24904049

  3. Source Memory for Self and Other in Patients With Mild Cognitive Impairment due to Alzheimer's Disease.

    PubMed

    Rosa, Nicole M; Deason, Rebecca G; Budson, Andrew E; Gutchess, Angela H

    2016-01-01

    The present study examined the role of enactment in source memory in a cognitively impaired population. As seen in healthy older adults, it was predicted that source memory in people with mild cognitive impairment due to Alzheimer's disease (MCI-AD) would benefit from the self-reference aspect of enactment. Seventeen participants with MCI-AD and 18 controls worked in small groups to pack a picnic basket and suitcase and were later tested for their source memory for each item. For item memory, self-referencing improved corrected recognition scores for both MCI-AD and control participants. The MCI-AD group did not demonstrate the same benefit as controls in correct source memory for self-related items. However, those with MCI-AD were relatively less likely to misattribute new items to the self and more likely to misattribute new items to others when committing errors, compared with controls. The enactment effect and self-referencing did not enhance accurate source memory more than other referencing for patients with MCI-AD. However, people with MCI-AD benefited in item memory and source memory, being less likely to falsely claim new items as their own, indicating some self-reference benefit occurs for people with MCI-AD. Published by Oxford University Press on behalf of the Gerontological Society of America 2014.

  4. Finite-time sliding surface constrained control for a robot manipulator with an unknown deadzone and disturbance.

    PubMed

    Ik Han, Seong; Lee, Jangmyung

    2016-11-01

    This paper presents finite-time sliding mode control (FSMC) with predefined constraints for the tracking error and sliding surface in order to obtain robust positioning of a robot manipulator with input nonlinearity due to an unknown deadzone and external disturbance. An assumed model feedforward FSMC was designed to avoid tedious identification procedures for the manipulator parameters and to obtain a fast response time. Two constraint switching control functions based on the tracking error and finite-time sliding surface were added to the FSMC to guarantee the predefined tracking performance despite the presence of an unknown deadzone and disturbance. The tracking error due to the deadzone and disturbance can be suppressed within the predefined error boundary simply by tuning the gain value of the constraint switching function and without the addition of an extra compensator. Therefore, the designed constraint controller has a simpler structure than conventional transformed error constraint methods and the sliding surface constraint scheme can also indirectly guarantee the tracking error constraint while being more stable than the tracking error constraint control. A simulation and experiment were performed on an articulated robot manipulator to validate the proposed control schemes. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.
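
    A minimal sketch of the core sliding-mode idea behind such controllers (not the paper's full finite-time, constraint-switching design): a surface s = de + lambda*e is driven toward zero by a switching term, here for a single unit-inertia joint with an assumed deadzone and disturbance. All gains and plant parameters are invented.

    ```python
    import numpy as np

    dt, T = 1e-3, 3.0
    lam, k_eq, k_sw = 5.0, 2.0, 8.0   # surface slope, feedback and switching gains (assumed)

    def deadzone(u, width=0.5):
        """Unknown symmetric actuator deadzone (assumed)."""
        return 0.0 if abs(u) <= width else u - np.sign(u) * width

    q, dq = 0.0, 0.0                  # joint position and velocity
    for i in range(int(T / dt)):
        t = i * dt
        q_ref, dq_ref = np.sin(t), np.cos(t)      # desired trajectory
        e, de = q_ref - q, dq_ref - dq
        s = de + lam * e                          # sliding surface
        u = k_eq * s + k_sw * np.tanh(s / 0.05)   # smoothed switching control
        d = 0.3 * np.sin(5 * t)                   # external disturbance (assumed)
        ddq = deadzone(u) + d                     # unit-inertia plant: q'' = u_dz + d
        dq += ddq * dt
        q += dq * dt

    print(f"final tracking error: {abs(np.sin(T) - q):.4f}")
    ```

    The paper's contribution is the additional constraint-switching terms that keep the tracking error and the sliding surface inside predefined bounds with finite-time convergence; the sketch only shows the baseline mechanism they build on.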

  5. Evaluation of seasonal and spatial variations of lumped water balance model sensitivity to precipitation data errors

    NASA Astrophysics Data System (ADS)

    Xu, Chong-yu; Tunemar, Liselotte; Chen, Yongqin David; Singh, V. P.

    2006-06-01

    The sensitivity of hydrological models to input data errors has been reported in the literature for particular models on a single or a few catchments. A more important issue, i.e. how a model's response to input data errors changes as the catchment conditions change, has not been addressed previously. This study investigates the seasonal and spatial effects of precipitation data errors on the performance of conceptual hydrological models. For this study, a monthly conceptual water balance model, NOPEX-6, was applied to 26 catchments in the Mälaren basin in Central Sweden. Both systematic and random errors were considered. For the systematic errors, 5-15% of mean monthly precipitation values were added to the original precipitation to form the corrupted input scenarios. Random values were generated by Monte Carlo simulation and were assumed to be (1) independent between months, and (2) distributed according to a Gaussian law of zero mean and constant standard deviation, taken as 5, 10, 15, 20, and 25% of the mean monthly standard deviation of precipitation. The results show that the response of the model parameters and model performance depends, among others, on the type of the error, the magnitude of the error, physical characteristics of the catchment, and the season of the year. In particular, the model appears less sensitive to the random error than to the systematic error. The catchments with smaller values of runoff coefficients were more influenced by input data errors than were the catchments with higher values. Dry months were more sensitive to precipitation errors than were wet months. Recalibration of the model with erroneous data compensated in part for the data errors by altering the model parameters.
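
    A hedged sketch of how the corrupted input scenarios described above can be generated (the water balance model itself is not reproduced): a systematic bias of 5-15% of the mean monthly precipitation is added, and independent Gaussian perturbations scaled to the monthly standard deviation are drawn by Monte Carlo. The precipitation series is synthetic.

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    months = 12 * 30                                        # 30 years of monthly data
    precip = rng.gamma(shape=2.0, scale=30.0, size=months)  # synthetic monthly P (mm)

    mean_p, std_p = precip.mean(), precip.std()

    # Systematic error scenarios: add 5, 10, 15% of the mean monthly precipitation
    systematic = {f"+{pct}% bias": precip + (pct / 100.0) * mean_p for pct in (5, 10, 15)}

    # Random error scenarios: independent Gaussian noise, sd = 5..25% of the monthly sd
    random_runs = {}
    for pct in (5, 10, 15, 20, 25):
        noise = rng.normal(0.0, (pct / 100.0) * std_p, size=months)
        random_runs[f"sd {pct}%"] = np.clip(precip + noise, 0.0, None)  # no negative rain

    for name, series in {**systematic, **random_runs}.items():
        print(f"{name:10s} mean = {series.mean():6.1f} mm")
    ```

    Each corrupted series would then be fed to the calibrated model (and to a recalibration run) and the resulting changes in parameters and performance recorded.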

  6. Analysis of ionospheric refraction error corrections for GRARR systems

    NASA Technical Reports Server (NTRS)

    Mallinckrodt, A. J.; Parker, H. C.; Berbert, J. H.

    1971-01-01

    A determination is presented of the ionospheric refraction correction requirements for the Goddard range and range rate (GRARR) S-band, modified S-band, very high frequency (VHF), and modified VHF systems. The relationships within these four systems are analyzed to show that the refraction corrections are the same for all four systems and to clarify the group and phase nature of these corrections. The analysis is simplified by recognizing that the range rate is equivalent to a carrier phase range change measurement. The equations for the range errors are given.

  7. Predictability of CFSv2 in the tropical Indo-Pacific region, at daily and subseasonal time scales

    NASA Astrophysics Data System (ADS)

    Krishnamurthy, V.

    2018-06-01

    The predictability of a coupled climate model is evaluated at daily and intraseasonal time scales in the tropical Indo-Pacific region during boreal summer and winter. This study has assessed the daily retrospective forecasts of the Climate Forecast System version 2 from the National Centers for Environmental Prediction for the period 1982-2010. The growth of errors in the forecasts of daily precipitation, monsoon intraseasonal oscillation (MISO) and the Madden-Julian oscillation (MJO) is studied. The seasonal cycle of the daily climatology of precipitation is reasonably well predicted except for the underestimation during the peak of summer. The anomalies follow the typical pattern of error growth in nonlinear systems and show no difference between summer and winter. The initial errors in all the cases are found to be in the nonlinear phase of the error growth. The doubling time of small errors is estimated by applying the Lorenz error formula. For summer and winter, the doubling time of the forecast errors is in the range of 4-7 and 5-14 days while the doubling time of the predictability errors is 6-8 and 8-14 days, respectively. The doubling time in MISO during the summer and MJO during the winter is in the range of 12-14 days, indicating higher predictability and providing optimism for long-range prediction. There is no significant difference in the growth of forecast errors originating from different phases of MISO and MJO, although the prediction of the active phase seems to be slightly better.
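
    For reference, the error-growth relation usually meant by the "Lorenz error formula" (stated here as an assumption about the authors' usage): small errors grow exponentially at rate a until they saturate nonlinearly at E_infinity, so the doubling time of small errors follows directly from a.

    ```latex
    \frac{dE}{dt} = a\,E\left(1 - \frac{E}{E_\infty}\right),
    \qquad
    E(t) \approx E_0\,e^{a t} \quad (E \ll E_\infty),
    \qquad
    t_{\mathrm{double}} = \frac{\ln 2}{a}.
    ```

    Fitting a (and the saturation level) to the forecast and predictability error curves would yield doubling-time estimates of the kind quoted above.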

  8. Reliable estimation of orbit errors in spaceborne SAR interferometry. The network approach

    NASA Astrophysics Data System (ADS)

    Bähr, Hermann; Hanssen, Ramon F.

    2012-12-01

    An approach to improve orbital state vectors by orbit error estimates derived from residual phase patterns in synthetic aperture radar interferograms is presented. For individual interferograms, an error representation by two parameters is motivated: the baseline error in cross-range and the rate of change of the baseline error in range. For their estimation, two alternatives are proposed: a least squares approach that requires prior unwrapping and a less reliable gridsearch method handling the wrapped phase. In both cases, reliability is enhanced by mutual control of error estimates in an overdetermined network of linearly dependent interferometric combinations of images. Thus, systematic biases, e.g., due to unwrapping errors, can be detected and iteratively eliminated. Regularising the solution by a minimum-norm condition results in quasi-absolute orbit errors that refer to particular images. For the 31 images of a sample ENVISAT dataset, orbit corrections with a mutual consistency on the millimetre level have been inferred from 163 interferograms. The method itself qualifies by reliability and rigorous geometric modelling of the orbital error signal but does not consider interfering large scale deformation effects. However, a separation may be feasible in a combined processing with persistent scatterer approaches or by temporal filtering of the estimates.

  9. Error measure comparison of currently employed dose-modulation schemes for e-beam proximity effect control

    NASA Astrophysics Data System (ADS)

    Peckerar, Martin C.; Marrian, Christie R.

    1995-05-01

    Standard matrix inversion methods of e-beam proximity correction are compared with a variety of pseudoinverse approaches based on gradient descent. It is shown that the gradient descent methods can be modified using 'regularizers' (terms added to the cost function minimized during gradient descent). This modification solves the 'negative dose' problem in a mathematically sound way. Different techniques are contrasted using a weighted error measure approach. It is shown that the regularization approach leads to the highest quality images. In some cases, ignoring negative doses yields results which are worse than employing an uncorrected dose file.
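
    A hedged sketch of the underlying dose-modulation problem, assuming a linear model d = A x (A the proximity/point-spread matrix, x the assigned doses): an exact least-squares solve can demand negative doses, whereas gradient descent with a penalty on negative values, one simple form of the "regularizer" idea, pushes the solution toward physically realizable doses at the cost of some residual error. The matrix, target pattern, and penalty weight are invented.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    n = 40                                      # number of exposure shapes (assumed)

    # Toy proximity matrix: each shape deposits a Gaussian blur of dose onto its neighbours
    idx = np.arange(n)
    A = np.exp(-((idx[:, None] - idx[None, :]) ** 2) / (2 * 1.0 ** 2))
    target = np.zeros(n)
    target[15:25] = 1.0                         # desired deposited-dose pattern

    # Unconstrained solve: rings at the feature edges and goes negative
    x_ls = np.linalg.solve(A, target)
    print("minimum unconstrained dose:", x_ls.min())

    # Gradient descent on ||Ax - d||^2 + beta*||min(x, 0)||^2 (penalizes negative doses)
    beta, lr = 50.0, 1e-3
    x = np.full(n, target.mean())
    for _ in range(20000):
        grad = 2 * A.T @ (A @ x - target) + 2 * beta * np.minimum(x, 0.0)
        x -= lr * grad

    print("minimum regularized dose:  ", x.min())
    print("remaining dose error:      ", np.linalg.norm(A @ x - target))
    ```

    The paper compares such regularized descent schemes with direct matrix inversion using a weighted error measure; this toy only illustrates why the regularization term matters.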

  10. Correction to “New maps of California to improve tsunami preparedness”

    NASA Astrophysics Data System (ADS)

    Barberopoulou, Aggeliki; Borrero, Jose C.; Uslu, Burak; Kalligeris, Nikos; Goltz, James D.; Wilson, Rick I.; Synolakis, Costas E.

    2009-05-01

    In the 21 April issue (Eos, 90(16), 2009), the article titled “New maps of California to improve tsunami preparedness” contained an error in its Figure 2 caption. Figure 2 is a map of Goleta, a city in Santa Barbara County. Thus, the first sentence of the caption should read, “Newly created tsunami inundation maps for Goleta, a city in Santa Barbara County, Calif., show the city's ‘wet line’ in black, representing the highest probable tsunami runup modeled for the region added to average water levels at high tide.” Eos deeply regrets this error.

  11. Posture Recognition in Alzheimer's Disease

    ERIC Educational Resources Information Center

    Mozaz, Maria; Garaigordobil, Maite; Rothi, Leslie J. Gonzalez; Anderson, Jeffrey; Crucian, Gregory P.; Heilman, Kenneth M.

    2006-01-01

    Background: Apraxia is a neurologically induced deficit in the ability to perform purposeful skilled movements. One of the most common forms is ideomotor apraxia (IMA), in which spatial and temporal production errors are most prevalent. IMA can be associated with Alzheimer's disease (AD), even early in its course, but is often not identified, possibly because…

  12. Finite Element Analysis of Free-Edge Delamination in Laminated Composite Specimens

    DTIC Science & Technology

    1991-06-18

    for the degree of Doctor of Philosophy at the Ohio State University. Revision by H. R. Chu corrected some errors and added further studies on... Galerkin’s approach, in which interlaminar stresses and displacements of each layer satisfying geometrical boundary conditions were represented as series

  13. Type I Rehearsal and Recognition.

    ERIC Educational Resources Information Center

    Glenberg, Arthur; Adams, Frederick

    1978-01-01

    Rote, repetitive Type I Rehearsal is defined as the continuous maintenance of information in memory using the minimum cognitive capacity necessary for maintenance. An analysis of errors made on a forced-choice recognition test supported the hypothesis that acoustic-phonemic components of the memory trace are added or strengthened by this…

  14. A novel artificial fish swarm algorithm for recalibration of fiber optic gyroscope error parameters.

    PubMed

    Gao, Yanbin; Guan, Lianwu; Wang, Tingjun; Sun, Yunlong

    2015-05-05

    The artificial fish swarm algorithm (AFSA) is one of the state-of-the-art swarm intelligence techniques and is widely used for optimization. Fiber optic gyroscope (FOG) error parameters such as scale factors, biases and misalignment errors are relatively unstable, especially under environmental disturbances and with the aging of fiber coils. These uncalibrated error parameters are the main reason that the precision of a FOG-based strapdown inertial navigation system (SINS) degrades. This research focuses on the application of a novel artificial fish swarm algorithm (NAFSA) to FOG error coefficient recalibration/identification. First, the NAFSA avoids the shortcomings of the standard AFSA during optimization (e.g., failure to use the artificial fishes' previous experience, lack of balance between exploration and exploitation, and high computational cost). To address these weak points, the behavioral functions and overall procedure of AFSA are improved, with some parameters eliminated and several supplementary parameters added. Second, a hybrid FOG error coefficient recalibration algorithm is proposed based on NAFSA and Monte Carlo simulation (MCS) approaches. This combination makes maximum use of both approaches for FOG error coefficient recalibration. The NAFSA is then verified with simulations and experiments, and its performance is compared with that of the conventional calibration method and the optimal AFSA. Results demonstrate the high efficiency of the NAFSA for FOG error coefficient recalibration.

  15. Balancing aggregation and smoothing errors in inverse models

    DOE PAGES

    Turner, A. J.; Jacob, D. J.

    2015-06-30

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.
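
    The Gaussian-mixture reduction described above can be sketched as follows: a native-resolution state vector is projected onto a small set of Gaussian radial basis functions, and the reduced state-vector elements are the projection coefficients. The grid, basis centres and widths here are hypothetical, not those used by the authors.

```python
import numpy as np

# Hypothetical 1-D native-resolution grid and state vector (e.g., emissions).
x = np.linspace(0.0, 100.0, 200)
state_native = (np.exp(-((x - 30.0) ** 2) / 50.0) +
                0.5 * np.exp(-((x - 70.0) ** 2) / 200.0))

# Gaussian radial basis functions used as reduced state-vector elements.
centers = np.linspace(5.0, 95.0, 12)
width = 8.0
Phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2.0 * width ** 2))
Phi /= Phi.sum(axis=1, keepdims=True)   # normalize weights per native grid cell

# Project the native state onto the reduced basis (least-squares coefficients),
# then map back to native resolution to see what the reduction preserves.
coeffs, *_ = np.linalg.lstsq(Phi, state_native, rcond=None)
state_reduced = Phi @ coeffs

rmse = np.sqrt(np.mean((state_reduced - state_native) ** 2))
print(f"reduced from {x.size} to {centers.size} elements, reconstruction RMSE = {rmse:.3f}")
```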

  16. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-01-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  17. Balancing aggregation and smoothing errors in inverse models

    NASA Astrophysics Data System (ADS)

    Turner, A. J.; Jacob, D. J.

    2015-06-01

    Inverse models use observations of a system (observation vector) to quantify the variables driving that system (state vector) by statistical optimization. When the observation vector is large, such as with satellite data, selecting a suitable dimension for the state vector is a challenge. A state vector that is too large cannot be effectively constrained by the observations, leading to smoothing error. However, reducing the dimension of the state vector leads to aggregation error as prior relationships between state vector elements are imposed rather than optimized. Here we present a method for quantifying aggregation and smoothing errors as a function of state vector dimension, so that a suitable dimension can be selected by minimizing the combined error. Reducing the state vector within the aggregation error constraints can have the added advantage of enabling analytical solution to the inverse problem with full error characterization. We compare three methods for reducing the dimension of the state vector from its native resolution: (1) merging adjacent elements (grid coarsening), (2) clustering with principal component analysis (PCA), and (3) applying a Gaussian mixture model (GMM) with Gaussian pdfs as state vector elements on which the native-resolution state vector elements are projected using radial basis functions (RBFs). The GMM method leads to somewhat lower aggregation error than the other methods, but more importantly it retains resolution of major local features in the state vector while smoothing weak and broad features.

  18. Neuromotor Noise Is Malleable by Amplifying Perceived Errors

    PubMed Central

    Zhang, Zhaoran; Abe, Masaki O.; Sternad, Dagmar

    2016-01-01

    Variability in motor performance results from the interplay of error correction and neuromotor noise. This study examined whether visual amplification of error, previously shown to improve performance, affects not only error correction, but also neuromotor noise, typically regarded as inaccessible to intervention. Seven groups of healthy individuals, with six participants in each group, practiced a virtual throwing task for three days until reaching a performance plateau. Over three more days of practice, six of the groups received different magnitudes of visual error amplification; three of these groups also had noise added. An additional control group was not subjected to any manipulations for all six practice days. The results showed that the control group did not improve further after the first three practice days, but the error amplification groups continued to decrease their error under the manipulations. Analysis of the temporal structure of participants’ corrective actions based on stochastic learning models revealed that these performance gains were attained by reducing neuromotor noise and, to a considerably lesser degree, by increasing the size of corrective actions. Based on these results, error amplification presents a promising intervention to improve motor function by decreasing neuromotor noise after performance has reached an asymptote. These results are relevant for patients with neurological disorders and the elderly. More fundamentally, these results suggest that neuromotor noise may be accessible to practice interventions. PMID:27490197

  19. 19 CFR 12.104g - Specific items or categories designated by agreements or emergency actions.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... representing the Byzantine period ranging from approximately the 4th century A.D. through approximately the... Byzantine culture (approximately the 4th century through the 15th century A.D.) CBP Dec. 11-25 Guatemala... periods ranging approximately from the 9th century B.C. to the 4th century A.D. T.D. 01-06 extended by CBP...

  20. Dynamic Calibration and Verification Device of Measurement System for Dynamic Characteristic Coefficients of Sliding Bearing

    PubMed Central

    Chen, Runlin; Wei, Yangyang; Shi, Zhaoyang; Yuan, Xiaoyang

    2016-01-01

    The identification accuracy of dynamic characteristic coefficients is difficult to guarantee because of errors in the measurement system itself. A novel dynamic calibration method for the measurement system of dynamic characteristic coefficients is proposed in this paper to eliminate these errors. This method differs from calibration using a suspended mass in that the verification device is a spring-mass system, which can simulate the dynamic characteristics of a sliding bearing. The verification device is built and the calibration experiment is carried out over a wide frequency range, with the bearing stiffness simulated by disc springs. The experimental results show that the amplitude errors of the measurement system are small in the frequency range of 10 Hz–100 Hz, while the phase errors increase with frequency. A simulated experiment of dynamic characteristic coefficient identification in the 10 Hz–30 Hz range provides preliminary verification that the calibration data in this range can support dynamic characteristics testing of sliding bearings. Bearing experiments over larger frequency ranges require higher manufacturing and installation precision of the calibration device, and the calibration experimental procedures should be improved. PMID:27483283

  1. Experiments and error analysis of laser ranging based on frequency-sweep polarization modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shuyuan; Ji, Rongyi; Li, Yao; Cheng, Zhi; Zhou, Weihu

    2016-11-01

    Frequency-sweep polarization modulation ranging uses a polarization-modulated laser beam to determine the distance to a target: the modulation frequency is swept, the frequency values at which the transmitted and received signals are in phase are measured, and the distance is calculated from these values. This method achieves much higher theoretical measuring accuracy than the phase-difference method because it avoids phase measurement. However, the actual accuracy of the system is limited because additional phase retardation occurs in the measuring optical path when optical elements are imperfectly manufactured or installed. In this paper, the working principle of the frequency-sweep polarization modulation ranging method is analyzed, a transmission model of the polarization state in the light path is built based on Jones matrix theory, and the additional phase retardation of the λ/4 wave plate and the PBS, along with its impact on measuring performance, is analyzed. Theoretical results show that the wave plate's azimuth error dominates the limitation on ranging accuracy. According to the system design index, element tolerances and an error-correcting method for the system are proposed, a ranging system is built, and ranging experiments are performed. Experimental results show that, with the proposed tolerances, the system satisfies the accuracy requirement. The present work offers guidance for further research on system design and error distribution.
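
    In sweep-based ranging schemes of this kind, the distance is commonly recovered from the spacing between consecutive in-phase modulation frequencies: if the round trip contains an integer number of modulation periods at frequencies f_n and f_{n+1}, then 2d = c/(f_{n+1} - f_n). The sketch below illustrates that relation with hypothetical frequency readings; it is not the authors' exact signal-processing chain.

```python
# Minimal sketch: estimate distance from consecutive in-phase modulation
# frequencies, assuming the round-trip path contains an integer number of
# modulation periods at each in-phase frequency (hypothetical readings).
C = 299_792_458.0  # speed of light in vacuum, m/s

# Hypothetical modulation frequencies (Hz) at which transmitted and received
# signals were observed to be in phase during the sweep.
in_phase_freqs = [1.000e9, 1.010e9, 1.020e9, 1.030e9]

# At two consecutive in-phase frequencies the period count differs by one,
# so 2*d*(f_{n+1} - f_n) = c and d = c / (2 * delta_f).
deltas = [f2 - f1 for f1, f2 in zip(in_phase_freqs, in_phase_freqs[1:])]
distances = [C / (2.0 * df) for df in deltas]

print("estimated distance (m):", sum(distances) / len(distances))
```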

  2. A parallel unbalanced digitization architecture to reduce the dynamic range of multiple signals

    NASA Astrophysics Data System (ADS)

    Vallérian, Mathieu; Huţu, Florin; Villemaud, Guillaume; Miscopein, Benoît; Risset, Tanguy

    2016-05-01

    Technologies employed in urban sensor networks are constantly evolving, and thus the gateways used to collect data in such networks have to be very flexible in order to remain compliant with new communication standards. A convenient way to achieve this is to digitize all received signals in one shot and then perform the signal processing digitally, as is done in software-defined radio (SDR). Signals can be emitted with very different characteristics (bandwidth, modulation type, and power level) in response to varying propagation conditions. Their difference in power level is a problem when digitizing them together, as no current commercial analog-to-digital converter (ADC) provides fine enough resolution to digitize the weakest possible signal in the presence of a much stronger one. This paper presents an RF front-end receiver architecture that handles this problem by using two ADCs of lower resolution. The architecture is validated through a set of simulations using Keysight's ADS software, with the bit error rate compared against that of a classical receiver as the main validation criterion.

  3. Rates and patterns of surface deformation from laser scanning following the South Napa earthquake, California

    USGS Publications Warehouse

    DeLong, Stephen B.; Lienkaemper, James J.; Pickering, Alexandra J; Avdievitch, Nikita N.

    2015-01-01

    The A.D. 2014 M6.0 South Napa earthquake, despite its moderate magnitude, caused significant damage to the Napa Valley in northern California (USA). Surface rupture occurred along several mapped and unmapped faults. Field observations following the earthquake indicated that the magnitude of postseismic surface slip was likely to approach or exceed the maximum coseismic surface slip and as such presented ongoing hazard to infrastructure. Using a laser scanner, we monitored postseismic deformation in three dimensions through time along 0.5 km of the main surface rupture. A key component of this study is the demonstration of proper alignment of repeat surveys using point cloud–based methods that minimize error imposed by both local survey errors and global navigation satellite system georeferencing errors. Using solid modeling of natural and cultural features, we quantify dextral postseismic displacement at several hundred points near the main fault trace. We also quantify total dextral displacement of initially straight cultural features. Total dextral displacement from both coseismic displacement and the first 2.5 d of postseismic displacement ranges from 0.22 to 0.29 m. This range increased to 0.33–0.42 m at 59 d post-earthquake. Furthermore, we estimate up to 0.15 m of vertical deformation during the first 2.5 d post-earthquake, which then increased by ∼0.02 m at 59 d post-earthquake. This vertical deformation is not expressed as a distinct step or scarp at the fault trace but rather as a broad up-to-the-west zone of increasing elevation change spanning the fault trace over several tens of meters, challenging common notions about fault scarp development in strike-slip systems. Integrating these analyses provides three-dimensional mapping of surface deformation and identifies spatial variability in slip along the main fault trace that we attribute to distributed slip via subtle block rotation. These results indicate the benefits of laser scanner surveys along active faults and demonstrate that fine-scale variability in fault slip has been missed by traditional earthquake response methods.

  4. Combined influence of CT random noise and HU-RSP calibration curve nonlinearities on proton range systematic errors

    NASA Astrophysics Data System (ADS)

    Brousmiche, S.; Souris, K.; Orban de Xivry, J.; Lee, J. A.; Macq, B.; Seco, J.

    2017-11-01

    Proton range random and systematic uncertainties are the major factors undermining the advantages of proton therapy, namely, a sharp dose falloff and a better dose conformality for lower doses in normal tissues. The influence of CT artifacts such as beam hardening or scatter can easily be understood and estimated due to their large-scale effects on the CT image, like cupping and streaks. In comparison, the effects of weakly correlated stochastic noise are more insidious, and less attention is paid to them, partly due to the common belief that they contribute only to proton range uncertainties and not to systematic errors, thanks to averaging effects. A new source of systematic errors on the range and relative stopping powers (RSP) has been highlighted and proved not to be negligible compared to the 3.5% uncertainty reference value used for safety margin design. Hence, we demonstrate that the angular points in the HU-to-RSP calibration curve are an intrinsic source of proton range systematic error for typical levels of zero-mean stochastic CT noise. Systematic errors on RSP of up to 1% have been computed for these levels. We also show that the range uncertainty does not generally vary linearly with the noise standard deviation. We define a noise-dependent effective calibration curve that better describes, for a given material, the RSP value that is actually used. The statistics of the RSP and the range continuous slowing down approximation (CSDA) have been analytically derived for the general case of a calibration curve obtained by the stoichiometric calibration procedure. These models have been validated against actual CSDA simulations for homogeneous and heterogeneous synthetic objects as well as on actual patient CTs for prostate and head-and-neck treatment planning situations.
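
    The systematic effect of zero-mean CT noise at angular points of the calibration curve can be illustrated with a small Monte Carlo sketch: propagating symmetric HU noise through a kink in a piecewise-linear HU-to-RSP curve biases the mean RSP even though the noise itself has zero mean. The curve shape and noise level below are hypothetical, not the clinical calibration of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical piecewise-linear HU-to-RSP calibration curve with a kink
# (angular point) at HU = 100.
hu_nodes = np.array([-1000.0, 0.0, 100.0, 1500.0])
rsp_nodes = np.array([0.0, 1.0, 1.07, 2.4])

def hu_to_rsp(hu):
    return np.interp(hu, hu_nodes, rsp_nodes)

true_hu = 100.0                 # material sitting exactly at the angular point
sigma_noise = 30.0              # zero-mean stochastic CT noise, in HU
noisy_hu = true_hu + rng.normal(0.0, sigma_noise, size=1_000_000)

rsp_true = hu_to_rsp(true_hu)
rsp_mean = hu_to_rsp(noisy_hu).mean()   # average RSP actually assigned

bias_percent = 100.0 * (rsp_mean - rsp_true) / rsp_true
print(f"true RSP = {rsp_true:.4f}, mean noisy RSP = {rsp_mean:.4f}, "
      f"systematic error = {bias_percent:+.2f}%")
```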

  5. Improved wetland remote sensing in Yellowstone National Park using classification trees to combine TM imagery and ancillary environmental data

    USGS Publications Warehouse

    Wright, C.; Gallant, Alisa L.

    2007-01-01

    The U.S. Fish and Wildlife Service uses the term palustrine wetland to describe vegetated wetlands traditionally identified as marsh, bog, fen, swamp, or wet meadow. Landsat TM imagery was combined with image texture and ancillary environmental data to model probabilities of palustrine wetland occurrence in Yellowstone National Park using classification trees. Model training and test locations were identified from National Wetlands Inventory maps, and classification trees were built for seven years spanning a range of annual precipitation. At a coarse level, palustrine wetland was separated from upland. At a finer level, five palustrine wetland types were discriminated: aquatic bed (PAB), emergent (PEM), forested (PFO), scrub–shrub (PSS), and unconsolidated shore (PUS). TM-derived variables alone were relatively accurate at separating wetland from upland, but model error rates dropped incrementally as image texture, DEM-derived terrain variables, and other ancillary GIS layers were added. For classification trees making use of all available predictors, average overall test error rates were 7.8% for palustrine wetland/upland models and 17.0% for palustrine wetland type models, with consistent accuracies across years. However, models were prone to wetland over-prediction. While the predominant PEM class was classified with omission and commission error rates less than 14%, we had difficulty identifying the PAB and PSS classes. Ancillary vegetation information greatly improved PSS classification and moderately improved PFO discrimination. Association with geothermal areas distinguished PUS wetlands. Wetland over-prediction was exacerbated by class imbalance in likely combination with spatial and spectral limitations of the TM sensor. Wetland probability surfaces may be more informative than hard classification, and appear to respond to climate-driven wetland variability. The developed method is portable, relatively easy to implement, and should be applicable in other settings and over larger extents.
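
    A minimal sketch of the approach described above, combining spectral bands with ancillary terrain variables in a classification tree; the feature names and synthetic training data are hypothetical stand-ins for the TM bands, image texture, and DEM-derived layers used in the study.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical predictors per pixel: TM bands, image texture, and
# DEM-derived terrain variables (stand-ins for the study's GIS layers).
n = 5000
tm_band4 = rng.normal(0.3, 0.1, n)        # near-infrared reflectance
tm_band5 = rng.normal(0.2, 0.08, n)       # shortwave-infrared reflectance
texture = rng.normal(0.5, 0.2, n)         # local image texture
slope = rng.exponential(3.0, n)           # terrain slope (degrees)
twi = rng.normal(8.0, 2.0, n)             # topographic wetness index

# Hypothetical rule generating the "wetland" label for the toy data.
wetland = ((twi > 9.0) & (slope < 2.0) & (tm_band5 < 0.22)).astype(int)

X = np.column_stack([tm_band4, tm_band5, texture, slope, twi])
X_train, X_test, y_train, y_test = train_test_split(X, wetland, test_size=0.3,
                                                    random_state=0)

tree = DecisionTreeClassifier(max_depth=6, min_samples_leaf=20, random_state=0)
tree.fit(X_train, y_train)
print(f"test error rate: {100.0 * (1.0 - tree.score(X_test, y_test)):.1f}%")
```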

  6. Assessment of radar altimetry correction slopes for marine gravity recovery: A case study of Jason-1 GM data

    NASA Astrophysics Data System (ADS)

    Zhang, Shengjun; Li, Jiancheng; Jin, Taoyong; Che, Defu

    2018-04-01

    Marine gravity anomalies derived from satellite altimetry can be computed using either sea surface height or sea surface slope measurements. Here we consider the slope method and evaluate the errors in the slopes of the corrections supplied with the Jason-1 geodetic mission data. The slope corrections are divided into three groups based on whether they are small, comparable, or large with respect to the 1 microradian error in current sea surface slope models. (1) The small and thus negligible corrections include the dry tropospheric correction, inverted barometer correction, solid earth tide and geocentric pole tide. (2) The moderately important corrections include the wet tropospheric correction, dual-frequency ionospheric correction and sea state bias. The radiometer measurements are preferred over model values in the geophysical data records for constraining the wet tropospheric effect, owing to the highly variable water-vapor structure of the atmosphere. The dual-frequency ionospheric correction and sea state bias should not be added directly to the range observations when deriving sea surface slopes, since their inherent errors can produce anomalous slopes; along-track smoothing with a uniform-weight window of suitable width is an effective strategy for avoiding extra noise. The slopes calculated from the radiometer wet tropospheric corrections and from the along-track smoothed dual-frequency ionospheric corrections and sea state bias are generally within ±0.5 microradians and no larger than 1 microradian. (3) The ocean tide has the largest influence on the derived sea surface slopes, although most ocean tide slopes lie within ±3 microradians. Larger ocean tide slopes occur mostly over marginal and island-surrounded seas, and tide models of better precision or with an extending process (e.g. Got-e) are strongly recommended for updating the corrections in the geophysical data records.
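
    A small sketch of the evaluation idea described above: convert an along-track correction series to slopes (change per unit along-track distance, in microradians) and optionally smooth the correction with a uniform-weight window before differencing. The correction values and sample spacing below are hypothetical, not Jason-1 data.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical along-track samples: ~6 km spacing, a correction in metres
# made of a smooth signal plus measurement noise.
spacing_m = 6000.0
n = 200
x = np.arange(n) * spacing_m
correction = 0.05 * np.sin(2 * np.pi * x / 400e3) + rng.normal(0.0, 0.01, n)

def along_track_slope(values, dx):
    """Slope of a correction series in microradians (metres per metre * 1e6)."""
    return np.diff(values) / dx * 1e6

def uniform_smooth(values, window):
    """Along-track smoothing with uniform weights over an odd-length window."""
    kernel = np.ones(window) / window
    return np.convolve(values, kernel, mode="same")

raw_slope = along_track_slope(correction, spacing_m)
smooth_slope = along_track_slope(uniform_smooth(correction, 21), spacing_m)

print(f"raw slope std:      {raw_slope.std():.2f} microradians")
print(f"smoothed slope std: {smooth_slope.std():.2f} microradians")
```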

  7. A broadband variable-temperature test system for complex permittivity measurements of solid and powder materials

    NASA Astrophysics Data System (ADS)

    Zhang, Yunpeng; Li, En; Zhang, Jing; Yu, Chengyong; Zheng, Hu; Guo, Gaofeng

    2018-02-01

    A microwave test system to measure the complex permittivity of solid and powder materials as a function of temperature has been developed. The system is based on a TM0n0 multi-mode cylindrical cavity with a slotting structure, which provides purer test modes compared to a traditional cavity. To ensure safety, effectiveness, and longevity, heating and testing are carried out separately, and the sample moves between the two functional areas through an Alundum tube. Induction heating and a pneumatic platform are employed to shorten, respectively, the heating and cooling times of the sample. The single-trigger function of the vector network analyzer is added to the test software to suppress drift of the resonance peak during testing. Complex permittivity is calculated with a rigorous field-theoretical solution that accounts for multilayer media loading. The variation of the cavity equivalent radius caused by the sample insertion holes is discussed in detail, and its influence on the test result is analyzed. A calibration method for the complex permittivity of the Alundum tube and the quartz vial (for loading powder samples), which vary with temperature, is given. The feasibility of the system has been verified by measuring different samples over a wide range of relative permittivity and loss tangent, and variable-temperature test results for fused quartz and SiO2 powder up to 1500 °C are compared with published data. The results indicate that the presented system is reliable and accurate. The stability of the system is verified by repeated and long-term tests, and an error analysis is presented to estimate the error incurred due to uncertainties in the different error sources.

  8. How much swamp are we talking here?: Propagating uncertainty about the area of coastal wetlands into the U.S. greenhouse gas inventory

    NASA Astrophysics Data System (ADS)

    Holmquist, J. R.; Crooks, S.; Windham-Myers, L.; Megonigal, P.; Weller, D.; Lu, M.; Bernal, B.; Byrd, K. B.; Morris, J. T.; Troxler, T.; McCombs, J.; Herold, N.

    2017-12-01

    Stable coastal wetlands can store substantial amounts of carbon (C) that can be released when they are degraded or eroded. The EPA recently incorporated coastal wetland net-storage and emissions within the Agricultural Forested and Other Land Uses category of the U.S. National Greenhouse Gas Inventory (NGGI). This was a seminal analysis, but its quantification of uncertainty needs improvement. We provide a value-added analysis by estimating that uncertainty, focusing initially on the most basic assumption, the area of coastal wetlands. We considered three sources: uncertainty in the areas of vegetation and salinity subclasses, uncertainty in the areas of changing or stable wetlands, and uncertainty in the inland extent of coastal wetlands. The areas of vegetation and salinity subtypes, as well as stable or changing, were estimated from 2006 and 2010 maps derived from Landsat imagery by the Coastal Change Analysis Program (C-CAP). We generated unbiased area estimates and confidence intervals for C-CAP, taking into account mapped area, proportional areas of commission and omission errors, as well as the number of observations. We defined the inland extent of wetlands as all land below the current elevation of twice monthly highest tides. We generated probabilistic inundation maps integrating wetland-specific bias and random error in light-detection and ranging elevation maps, with the spatially explicit random error in tidal surfaces generated from tide gauges. This initial uncertainty analysis will be extended to calculate total propagated uncertainty in the NGGI by including the uncertainties in the amount of C lost from eroded and degraded wetlands, stored annually in stable wetlands, and emitted in the form of methane by tidal freshwater wetlands.
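
    The unbiased area estimates described above, which account for mapped area, commission and omission error proportions, and the number of reference observations, can be illustrated with a standard stratified (error-matrix based) area estimator; the confusion matrix and mapped areas below are hypothetical, not C-CAP values.

```python
import numpy as np

# Hypothetical accuracy-assessment sample: rows = mapped class, cols = reference class.
# Classes: 0 = coastal wetland, 1 = non-wetland.
confusion = np.array([[180, 20],    # mapped wetland: 180 correct, 20 commission errors
                      [ 15, 285]])  # mapped non-wetland: 15 omission errors, 285 correct

mapped_area_ha = np.array([50_000.0, 450_000.0])   # mapped area of each class (hypothetical)
total_area = mapped_area_ha.sum()
weights = mapped_area_ha / total_area              # area proportion of each mapped class
row_totals = confusion.sum(axis=1)

# Estimated proportion of the landscape in each reference class (stratified estimator):
# p_j = sum_i w_i * n_ij / n_i.
p_hat = (weights[:, None] * confusion / row_totals[:, None]).sum(axis=0)
area_hat = p_hat * total_area

# Standard error of the estimated area of each reference class.
p_ij = confusion / row_totals[:, None]
var_p = ((weights[:, None] ** 2) * p_ij * (1 - p_ij) / (row_totals[:, None] - 1)).sum(axis=0)
se_area = np.sqrt(var_p) * total_area

for cls, (a, se) in enumerate(zip(area_hat, se_area)):
    print(f"reference class {cls}: {a:,.0f} ha  (95% CI ± {1.96 * se:,.0f} ha)")
```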

  9. Behavioural inflexibility in a comorbid rat model of striatal ischemic injury and mutant hAPP overexpression.

    PubMed

    Levit, Alexander; Regis, Aaron M; Garabon, Jessica R; Oh, Seung-Hun; Desai, Sagar J; Rajakumar, Nagalingam; Hachinski, Vladimir; Agca, Yuksel; Agca, Cansu; Whitehead, Shawn N; Allman, Brian L

    2017-08-30

    Alzheimer disease (AD) and stroke coexist and interact; yet how they interact is not sufficiently understood. Both AD and basal ganglia stroke can impair behavioural flexibility, which can be reliably modeled in rats using an established operant based set-shifting test. Transgenic Fischer 344-APP21 rats (TgF344) overexpress pathogenic human amyloid precursor protein (hAPP) but do not spontaneously develop overt pathology, hence TgF344 rats can be used to model the effect of vascular injury in the prodromal stages of Alzheimer disease. We demonstrate that the injection of endothelin-1 (ET1) into the dorsal striatum of TgF344 rats (Tg-ET1) produced an exacerbation of behavioural inflexibility with a behavioural phenotype that was distinct from saline-injected wildtype & TgF344 rats as well as ET1-injected wildtype rats (Wt-ET1). In addition to profiling the types of errors made, interpolative modeling using logistic exposure-response regression provided an informative analysis of the timing and efficiency of behavioural flexibility. During set-shifting, Tg-ET1 committed fewer perseverative errors than Wt-ET1. However, Tg-ET1 committed significantly more regressive errors and had a less efficient strategy change than all other groups. Thus, behavioural flexibility was more vulnerable to striatal ischemic injury in TgF344 rats. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. An engineered design of a diffractive mask for high precision astrometry [Modeling a diffractive mask that calibrates optical distortions

    DOE PAGES

    Dennison, Kaitlin; Ammons, S. Mark; Garrel, Vincent; ...

    2016-06-26

    AutoCAD, Zemax Optic Studio 15, and Interactive Data Language (IDL) with the Proper Library are used to computationally model and test a diffractive mask (DiM) suitable for use in the Gemini Multi-Conjugate Adaptive Optics System (GeMS) on the Gemini South Telescope. Systematic errors in telescope imagery are produced when the light travels through the adaptive optics system of the telescope. DiM is a transparent, flat optic with a pattern of miniscule dots lithographically applied to it. It is added ahead of the adaptive optics system in the telescope in order to produce diffraction spots that will encode systematic errors in the optics after it. Once these errors are encoded, they can be corrected for. DiM will allow for more accurate measurements in astrometry and thus improve exoplanet detection. Furthermore, the mechanics and physical attributes of the DiM are modeled in AutoCAD. Zemax models the ray propagation of point sources of light through the telescope. IDL and Proper simulate the wavefront and image results of the telescope. Aberrations are added to the Zemax and IDL models to test how the diffraction spots from the DiM change in the final images. Based on the Zemax and IDL results, the diffraction spots are able to encode the systematic aberrations.

  11. Evaluation of SMART sensor displays for multidimensional precision control of Space Shuttle remote manipulator

    NASA Technical Reports Server (NTRS)

    Bejczy, A. K.; Brown, J. W.; Lewis, J. L.

    1982-01-01

    An enhanced proximity sensor and display system was developed at the Jet Propulsion Laboratory (JPL) and tested on the full-scale Space Shuttle Remote Manipulator at the Johnson Space Center (JSC) Manipulator Development Facility (MDF). The sensor system, integrated with a four-claw end effector, measures range error up to 6 inches and pitch and yaw alignment errors to within ±15 deg., and displays error data on both graphic and numeric displays. The errors are referenced to the end effector control axes through appropriate data processing by a dedicated microcomputer acting on the sensor data in real time. Both display boxes contain a green lamp which indicates whether the combination of range, pitch and yaw errors will assure a successful grapple. More than 200 test runs were completed in early 1980 by three operators at JSC for grasping static targets and capturing slowly moving targets. The tests indicated that the use of graphic/numeric displays of proximity sensor information improves precision control of grasp/capture range by more than a factor of two for both static and dynamic grapple conditions.

  12. Accuracy and Repeatability of Trajectory Rod Measurement Using Laser Scanners.

    PubMed

    Liscio, Eugene; Guryn, Helen; Stoewner, Daniella

    2017-12-22

    Three-dimensional (3D) technologies contribute greatly to bullet trajectory analysis and shooting reconstruction. There are few papers which address the errors associated with utilizing laser scanning for bullet trajectory documentation. This study examined the accuracy and precision of laser scanning for documenting trajectory rods in drywall for angles between 25° and 90°. The inherent error range of 0.02°-2.10° was noted while the overall error for laser scanning ranged between 0.04° and 1.98°. The inter- and intraobserver errors for trajectory rod placement and virtual trajectory marking showed that the range of variation for rod placement was between 0.1°-1° in drywall and 0.05°-0.5° in plywood. Virtual trajectory marking accuracy tests showed that 75% of data values were below 0.91° and 0.61° on azimuth and vertical angles, respectively. In conclusion, many contributing factors affect bullet trajectory analysis, and the use of 3D technologies can aid in reduction of errors associated with documentation. © 2017 American Academy of Forensic Sciences.

  13. Analysis of surface-water data network in Kansas for effectiveness in providing regional streamflow information; with a section on theory and application of generalized least squares

    USGS Publications Warehouse

    Medina, K.D.; Tasker, Gary D.

    1987-01-01

    This report documents the results of an analysis of the surface-water data network in Kansas for its effectiveness in providing regional streamflow information. The network was analyzed using generalized least squares regression. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-, low-, and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow-gaging-station records for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and (or) adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The State was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for the three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean-square error for each cost level could be obtained by adding new stations and discontinuing some current network stations. Large reductions in sampling mean-square error for low-flow information could be achieved in all three network areas, the reduction in western Kansas being the most dramatic. The addition of new stations would be most beneficial for mean-flow information in western Kansas. The reduction of sampling mean-square error for high-flow information would benefit most from the addition of new stations in western Kansas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas.
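
    The generalized least squares step described above can be sketched generically: regional regression coefficients are estimated with a weighting matrix whose diagonal adds a time-sampling error variance for each station (shrinking with record length) and whose off-diagonals reflect cross-correlation between concurrent records. The design matrix, variances and correlations below are hypothetical, not the Kansas network values.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical network of 10 gaging stations: regress a log flow characteristic
# on log drainage area and log mean annual precipitation.
n_sta = 10
X = np.column_stack([np.ones(n_sta),
                     rng.normal(2.5, 0.5, n_sta),    # log10 drainage area
                     rng.normal(1.4, 0.1, n_sta)])   # log10 precipitation
beta_true = np.array([-1.0, 0.8, 1.5])
y = X @ beta_true + rng.normal(0.0, 0.1, n_sta)

# Covariance of the observed flow characteristics: model error on the diagonal,
# plus time-sampling error that shrinks with record length, plus cross-correlation
# between stations with overlapping records (all values hypothetical).
model_error_var = 0.05 ** 2
record_years = rng.integers(10, 60, n_sta)
sampling_var = (0.3 ** 2) / record_years
cross_corr = 0.3

Omega = np.zeros((n_sta, n_sta))
for i in range(n_sta):
    for j in range(n_sta):
        if i == j:
            Omega[i, j] = model_error_var + sampling_var[i]
        else:
            Omega[i, j] = cross_corr * np.sqrt(sampling_var[i] * sampling_var[j])

# GLS estimator: beta = (X' Omega^-1 X)^-1 X' Omega^-1 y.
Oinv = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)
print("GLS coefficients:", np.round(beta_gls, 3))
```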

  14. Analysis of surface-water data network in Kansas for effectiveness in providing regional streamflow information

    USGS Publications Warehouse

    Medina, K.D.; Tasker, Gary D.

    1985-01-01

    The surface water data network in Kansas was analyzed using generalized least squares regression for its effectiveness in providing regional streamflow information. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-flow, low-flow and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow gaging station records for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and/or adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and for discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The state was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for the three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean square error for each cost level could be obtained by adding new stations and discontinuing some of the present network stations. Large reductions in sampling mean square error for low-flow information could be accomplished in all three network areas, with western Kansas having the most dramatic reduction. The addition of new stations would be most beneficial for mean-flow information in western Kansas, and to lesser degrees in the other two areas. The reduction of sampling mean square error for high-flow information would benefit most from the addition of new stations in western Kansas, and the effect diminishes to lesser degrees in the other two areas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas. (Author's abstract)

  15. Wireless sensor platform for harsh environments

    NASA Technical Reports Server (NTRS)

    Garverick, Steven L. (Inventor); Yu, Xinyu (Inventor); Toygur, Lemi (Inventor); He, Yunli (Inventor)

    2009-01-01

    Reliable and efficient sensing becomes increasingly difficult in harsher environments. A sensing module for high-temperature conditions utilizes a digital, rather than analog, implementation on a wireless platform to achieve good quality data transmission. The module comprises a sensor, integrated circuit, and antenna. The integrated circuit includes an amplifier, A/D converter, decimation filter, and digital transmitter. To operate, an analog signal is received by the sensor, amplified by the amplifier, converted into a digital signal by the A/D converter, filtered by the decimation filter to address the quantization error, and output in digital format by the digital transmitter and antenna.
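
    The decimation step mentioned above (filtering the oversampled, coarsely quantized A/D output before rate reduction to address quantization error) can be sketched generically; the oversampling ratio, quantizer and signal below are hypothetical, not the patented circuit.

```python
import numpy as np

# Hypothetical oversampled, coarsely quantized sensor signal.
fs = 64_000.0                 # oversampled rate, Hz
decimation = 64               # rate-reduction factor
t = np.arange(0, 0.1, 1.0 / fs)
analog = np.sin(2 * np.pi * 50.0 * t)           # 50 Hz sensor signal
quantized = np.round(analog * 8) / 8            # coarse, few-bit quantization

def decimate_average(x, factor):
    """Boxcar decimation filter: average each block of `factor` samples."""
    trimmed = x[: (len(x) // factor) * factor]
    return trimmed.reshape(-1, factor).mean(axis=1)

decimated = decimate_average(quantized, decimation)
reference = decimate_average(analog, decimation)

rms_q_error_in = np.sqrt(np.mean((quantized - analog) ** 2))
rms_q_error_out = np.sqrt(np.mean((decimated - reference) ** 2))
print(f"quantization error RMS before decimation: {rms_q_error_in:.4f}")
print(f"quantization error RMS after decimation:  {rms_q_error_out:.4f}")
```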

  16. The RMI Space Weather and Navigation Systems (SWANS) Project

    NASA Astrophysics Data System (ADS)

    Warnant, Rene; Lejeune, Sandrine; Wautelet, Gilles; Spits, Justine; Stegen, Koen; Stankov, Stan

    The SWANS (Space Weather and Navigation Systems) research and development project (http://swans.meteo.be) is an initiative of the Royal Meteorological Institute (RMI) under the auspices of the Belgian Solar-Terrestrial Centre of Excellence (STCE). The RMI SWANS objectives are: research on space weather and its effects on GNSS applications; permanent monitoring of the local/regional geomagnetic and ionospheric activity; and development/operation of relevant nowcast, forecast, and alert services to help professional GNSS/GALILEO users in mitigating space weather effects. Several SWANS developments have already been implemented and are available for use. The K-LOGIC (Local Operational Geomagnetic Index K Calculation) system is a nowcast system based on a fully automated computer procedure for real-time digital magnetogram data acquisition, data screening, and calculating the local geomagnetic K index. Simultaneously, the planetary Kp index is estimated from solar wind measurements, thus adding to the service reliability and providing forecast capabilities as well. A novel hybrid empirical model, based on these ground- and space-based observations, has been implemented for nowcasting and forecasting the geomagnetic index, issuing also alerts whenever storm-level activity is indicated. A very important feature of the nowcast/forecast system is the strict control on the data input and processing, allowing for an immediate assessment of the output quality. The purpose of the LIEDR (Local Ionospheric Electron Density Reconstruction) system is to acquire and process data from simultaneous ground-based GNSS TEC and digital ionosonde measurements, and subsequently to deduce the vertical electron density distribution. A key module is the real-time estimation of the ionospheric slab thickness, offering additional information on the local ionospheric dynamics. The RTK (Real Time Kinematic) status mapping provides a quick look at the small-scale ionospheric effects on the RTK precision for several GPS stations in Belgium. The service assesses the effect of small-scale ionospheric irregularities by monitoring the high-frequency TEC rate of change at any given station. This assessment results in a (colour) code assigned to each station, code ranging from "quiet" (green) to "extreme" (red) and referring to the local ionospheric conditions. Alerts via e-mail are sent to subscribed users when disturbed conditions are observed. SoDIPE (Software for Determining the Ionospheric Positioning Error) estimates the positioning error due to the ionospheric conditions only (called "ionospheric error") in high-precision positioning applications (RTK in particular). For each of the Belgian Active Geodetic Network (AGN) baselines, SoDIPE computes the ionospheric error and its median value (every 15 minutes). Again, a (colour) code is assigned to each baseline, ranging from "nominal" (green) to "extreme" (red) error level. Finally, all available baselines (drawn in colour corresponding to error level) are displayed on a map of Belgium. The future SWANS work will focus on regional ionospheric monitoring and developing various other nowcast and forecast services.

  17. Development and operations of the astrophysics data system

    NASA Technical Reports Server (NTRS)

    Murray, Stephen S.; Oliversen, Ronald (Technical Monitor)

    2005-01-01

    Abstract service - Continued regular updates of abstracts in the databases, both at SAO and at all mirror sites. - Modified loading scripts to accommodate changes in data format (PhyS) - Discussed data deliveries with providers to clear up problems with format or other errors (EGU) - Continued inclusion of large numbers of historical literature volumes and physics conference volumes xeroxed from the library. - Performed systematic fixes on some data sets in the database to account for changes in article numbering (AGU journals) - Implemented linking of ADS bibliographic records with multimedia files - Debugged and fixed obscure connection problems with the ADS Korean mirror site which were preventing successful updates of the data holdings. - Wrote procedure to parse citation data and characterize an ADS record based on its citation ratios within each database.

  18. Inter-satellite links for satellite autonomous integrity monitoring

    NASA Astrophysics Data System (ADS)

    Rodríguez-Pérez, Irma; García-Serrano, Cristina; Catalán Catalán, Carlos; García, Alvaro Mozo; Tavella, Patrizia; Galleani, Lorenzo; Amarillo, Francisco

    2011-01-01

    A new integrity monitoring mechanism, implemented on board a GNSS satellite and taking advantage of inter-satellite links, is introduced. It is based on accurate range and Doppler measurements affected neither by atmospheric delays nor by local ground degradation (multipath and interference). Through a linear combination of the inter-satellite link observables, appropriate observables for monitoring both satellite orbits and clocks are obtained, and the proposed algorithms make it possible to reduce the time-to-alarm and the probability of undetected satellite anomalies. Several test cases have been run to assess the performance of the new orbit and clock monitoring algorithms against a complete scenario (satellite-to-satellite and satellite-to-ground links) and against a satellite-only scenario. The results of this experimentation campaign demonstrate that the orbit monitoring algorithm is able to detect orbital feared events while the position error at the worst user location is still within acceptable limits. For instance, an unplanned manoeuvre in the along-track direction is detected (with a probability of false alarm equal to 5 × 10⁻⁹) when the position error at the worst user location is 18 cm. The experimentation also reveals that the clock monitoring algorithm is able to detect phase jumps, frequency jumps and instability degradation of the clocks, although the detection latency and detection performance depend strongly on the noise added by the clock measurement system.

  19. Poor interoperability of the Adams-Harbertson method for analysis of anthocyanins: comparison with AOAC pH differential method.

    PubMed

    Brooks, Larry M; Kuhlman, Benjamin J; McKesson, Doug W; McCloskey, Leo

    2013-01-01

    The poor interoperability of anthocyanin glycoside measurements by two pH differential methods is documented. The Adams-Harbertson method, which was proposed for commercial winemaking, was compared to AOAC Official Method 2005.02 for wine. California bottled wines (Pinot Noir, Merlot, and Cabernet Sauvignon) were assayed in a collaborative study (n=105), which found the mean precision of Adams-Harbertson winery versus reference measurements to be 77 ± 20%. From the reproducibility RSD, the maximum error is expected to be 48% for Pinot Noir, 42% for Merlot, and 34% for Cabernet Sauvignon; the range of measurements was actually 30 to 91% for Pinot Noir. An interoperability study (n=30) found that Adams-Harbertson produces measurements that are nominally 150% of the AOAC pH differential method. The major analytical differences are: the AOAC method uses the Beer-Lambert equation and measures absorbance at pH 1.0 and 4.5, as proposed a priori by Fuleki and Francis, whereas Adams-Harbertson uses a "universal" standard curve and measures absorbance ad hoc at pH 1.8 and 4.9 to reduce the effects of so-called co-pigmentation. Errors relative to AOAC arise from using the Adams-Harbertson standard curve instead of Beer-Lambert and pH 1.8 instead of pH 1.0. The study recommends using AOAC Official Method 2005.02 for analysis of wine anthocyanin glycosides.

  20. Closed loop adaptive control of spectrum-producing step using neural networks

    DOEpatents

    Fu, Chi Yung

    1998-01-01

    Characteristics of the plasma in a plasma-based manufacturing process step are monitored directly and in real time by observing the spectrum it produces. An artificial neural network analyzes the plasma spectrum and generates control signals to adjust one or more of the process input parameters in response to any deviation of the spectrum beyond a narrow range. In an embodiment, a plasma reaction chamber forms a plasma in response to input parameters such as gas flow, pressure and power. The chamber includes a window through which the electromagnetic spectrum produced by a plasma in the chamber, just above the subject surface, may be viewed. The spectrum is conducted to an optical spectrometer which measures the intensity of the incoming optical spectrum at different wavelengths. The output of the optical spectrometer is provided to an analyzer which produces a plurality of error signals, each indicating whether a respective one of the input parameters to the chamber is to be increased or decreased. The microcontroller provides signals to the respective controls, but these lines are intercepted and the error signals are first added to them before they are provided to the controls for the chamber. The analyzer can include a neural network and an optional spectrum preprocessor to reduce background noise, as well as a comparator which compares the parameter values predicted by the neural network with a set of desired values provided by the microcontroller.
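
    A minimal sketch of the closed-loop idea described in this patent abstract: a small neural network maps a measured spectrum to per-parameter error signals, which are added to the nominal control commands each cycle. The network weights, spectrum and parameters below are hypothetical, untrained stand-ins, not the patented implementation.

```python
import numpy as np

rng = np.random.default_rng(11)

N_WAVELENGTHS = 128            # spectrometer channels
PARAMS = ["gas_flow", "pressure", "power"]

# Hypothetical, untrained two-layer network mapping a spectrum to one error
# signal per process input parameter (positive = increase, negative = decrease).
W1 = rng.normal(0, 0.05, (32, N_WAVELENGTHS))
W2 = rng.normal(0, 0.05, (len(PARAMS), 32))

def neural_error_signals(spectrum):
    hidden = np.tanh(W1 @ spectrum)
    return W2 @ hidden

def control_cycle(nominal_commands, spectrum, gain=0.1):
    """One closed-loop step: intercept the nominal commands and add
    gain-scaled error signals derived from the measured spectrum."""
    errors = neural_error_signals(spectrum)
    return nominal_commands + gain * errors, errors

nominal = np.array([100.0, 5.0, 300.0])                  # sccm, mTorr, W (hypothetical)
spectrum = np.abs(rng.normal(0.0, 1.0, N_WAVELENGTHS))   # measured optical spectrum

adjusted, errors = control_cycle(nominal, spectrum)
for name, e, cmd in zip(PARAMS, errors, adjusted):
    print(f"{name:9s} error {e:+.3f} -> command {cmd:.2f}")
```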

  1. Closed loop adaptive control of spectrum-producing step using neural networks

    DOEpatents

    Fu, C.Y.

    1998-11-24

    Characteristics of the plasma in a plasma-based manufacturing process step are monitored directly and in real time by observing the spectrum it produces. An artificial neural network analyzes the plasma spectrum and generates control signals to adjust one or more of the process input parameters in response to any deviation of the spectrum beyond a narrow range. In an embodiment, a plasma reaction chamber forms a plasma in response to input parameters such as gas flow, pressure and power. The chamber includes a window through which the electromagnetic spectrum produced by a plasma in the chamber, just above the subject surface, may be viewed. The spectrum is conducted to an optical spectrometer which measures the intensity of the incoming optical spectrum at different wavelengths. The output of the optical spectrometer is provided to an analyzer which produces a plurality of error signals, each indicating whether a respective one of the input parameters to the chamber is to be increased or decreased. The microcontroller provides signals to the respective controls, but these lines are intercepted and the error signals are first added to them before they are provided to the controls for the chamber. The analyzer can include a neural network and an optional spectrum preprocessor to reduce background noise, as well as a comparator which compares the parameter values predicted by the neural network with a set of desired values provided by the microcontroller. 7 figs.

  2. Quality Control of Meteorological Observations

    NASA Technical Reports Server (NTRS)

    Collins, William; Dee, Dick; Rukhovets, Leonid

    1999-01-01

    The problem of meteorological observation quality control (QC) was first formulated by L.S. Gandin at the Main Geophysical Observatory in the 1970s. Later, in 1988, L.S. Gandin began adapting his ideas on complex quality control (CQC) to the operational environment at the National Centers for Environmental Prediction. The CQC was first applied by L.S. Gandin and his colleagues to the detection and correction of errors in rawinsonde heights and temperatures using a complex of hydrostatic residuals. Later, a full complex of residuals, vertical and horizontal optimal interpolations, and baseline checks were added for the checking and correction of a wide range of meteorological variables. Some of Gandin's other ideas were applied and substantially developed at other meteorological centers. A new statistical QC was recently implemented in the Goddard Data Assimilation System. The central component of any quality control is the buddy check, a test of individual suspect observations against available nearby non-suspect observations. A novel feature of this test is that the error variances used for the QC decision are re-estimated on-line. As a result, the allowed tolerances for suspect observations can depend on local atmospheric conditions, and the system is better able to accept extreme values observed in deep cyclones, jet streams, and so on. The basic statements of this adaptive buddy check are described. Some results of the on-line QC, including moisture QC, are presented.
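
    A generic sketch of a buddy check of the kind described above: a suspect observation is compared with a distance-weighted estimate from nearby non-suspect observations, and the rejection tolerance scales with an error variance re-estimated on-line from recent residuals. The weights, variances and threshold below are illustrative assumptions, not the operational algorithm.

```python
import numpy as np

def buddy_check(suspect_value, suspect_xy, buddy_values, buddy_xy,
                error_var_online, length_scale=100.0, k_tolerance=3.0):
    """Return (passed, residual, tolerance) for one suspect observation.

    buddy_values / buddy_xy: nearby non-suspect observations and positions (km).
    error_var_online: error variance re-estimated on-line from recent residuals,
    so the allowed tolerance tracks local atmospheric conditions.
    """
    d = np.linalg.norm(buddy_xy - suspect_xy, axis=1)
    w = np.exp(-(d / length_scale) ** 2)             # distance-based weights
    estimate = np.sum(w * buddy_values) / np.sum(w)  # interpolated buddy estimate
    residual = suspect_value - estimate
    tolerance = k_tolerance * np.sqrt(error_var_online)
    return abs(residual) <= tolerance, residual, tolerance

# Hypothetical example: 500 hPa heights (m) at four nearby stations.
buddies = np.array([5715.0, 5722.0, 5708.0, 5719.0])
positions = np.array([[0.0, 60.0], [80.0, 0.0], [-50.0, -40.0], [30.0, 90.0]])
recent_residuals = np.array([8.0, -12.0, 5.0, -7.0, 10.0])   # m, from earlier cycles
error_var = recent_residuals.var(ddof=1)                     # on-line variance estimate

ok, res, tol = buddy_check(5760.0, np.array([0.0, 0.0]), buddies, positions, error_var)
print(f"residual {res:+.1f} m, tolerance ±{tol:.1f} m -> {'accept' if ok else 'flag as suspect'}")
```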

  3. Implicit time accurate simulation of unsteady flow

    NASA Astrophysics Data System (ADS)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. A reference solution computed with an explicit second-order Runge-Kutta scheme was used for comparison with the implicit second-order Crank-Nicolson scheme. The time step in the explicit scheme is restricted by both temporal accuracy and stability requirements, whereas in the A-stable implicit scheme the time step has to obey temporal resolution requirements and numerical convergence conditions. The nonlinear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted, and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for setting numerical time-integration parameters that yield an efficient, time-accurate capture of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. The focus is on the sensitivity of solution properties to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems may occur that are closely related to the highly complex structure of the basins of attraction of the iterative method.
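
    The pseudo-time iteration described above can be sketched on a scalar model problem: each physical Crank-Nicolson step is converged by marching an added pseudo-time derivative to steady state. The sketch uses simple fixed-point pseudo-time sub-iterations rather than the paper's quasi-Newton/Gauss-Seidel solver, and the model equation and step sizes are hypothetical.

```python
import numpy as np

def f(u):
    """Model nonlinear right-hand side of du/dt = f(u)."""
    return -u ** 3 + np.sin(u)

def crank_nicolson_step(u_n, dt, dtau=0.05, max_iters=500, tol=1e-10):
    """Advance one physical time step with Crank-Nicolson, solving the
    nonlinear equation by adding a pseudo-time derivative and iterating
    du/dtau = -R(u) to steady state (R = 0 recovers the CN update)."""
    u = u_n
    for _ in range(max_iters):
        # Crank-Nicolson residual: R(u) = (u - u_n)/dt - 0.5*(f(u) + f(u_n))
        residual = (u - u_n) / dt - 0.5 * (f(u) + f(u_n))
        u = u - dtau * residual            # explicit pseudo-time sub-iteration
        if abs(residual) < tol:
            break
    return u

u, dt = 1.5, 0.2
for step in range(10):
    u = crank_nicolson_step(u, dt)
print(f"solution after {10 * dt:.1f} time units: {u:.6f}")
```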

  4. Temporal consistent depth map upscaling for 3DTV

    NASA Astrophysics Data System (ADS)

    Schwarz, Sebastian; Sjöström, Mårten; Olsson, Roger

    2014-03-01

    The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibility of a convincing 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight camera (ToF), can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.

  5. Human machine interface by using stereo-based depth extraction

    NASA Astrophysics Data System (ADS)

    Liao, Chao-Kang; Wu, Chi-Hao; Lin, Hsueh-Yi; Chang, Ting-Ting; Lin, Tung-Yang; Huang, Po-Kuan

    2014-03-01

    The ongoing success of three-dimensional (3D) cinema fuels increasing efforts to spread the commercial success of 3D to new markets. The possibility of a convincing 3D experience at home, such as three-dimensional television (3DTV), has generated a great deal of interest within the research and standardization community. A central issue for 3DTV is the creation and representation of 3D content. Acquiring scene depth information is a fundamental task in computer vision, yet complex and error-prone. Dedicated range sensors, such as the Time-of-Flight camera (ToF), can simplify the scene depth capture process and overcome shortcomings of traditional solutions, such as active or passive stereo analysis. Admittedly, currently available ToF sensors deliver only a limited spatial resolution. However, sophisticated depth upscaling approaches use texture information to match depth and video resolution. At Electronic Imaging 2012 we proposed an upscaling routine based on error energy minimization, weighted with edge information from an accompanying video source. In this article we develop our algorithm further. By adding temporal consistency constraints to the upscaling process, we reduce disturbing depth jumps and flickering artifacts in the final 3DTV content. Temporal consistency in depth maps enhances the 3D experience, leading to a wider acceptance of 3D media content. More content in better quality can boost the commercial success of 3DTV.

  6. Defining robustness protocols: a method to include and evaluate robustness in clinical plans

    NASA Astrophysics Data System (ADS)

    McGowan, S. E.; Albertini, F.; Thomas, S. J.; Lomax, A. J.

    2015-04-01

    We aim to define a site-specific robustness protocol to be used during the clinical plan evaluation process. The robustness of 16 skull base IMPT plans to systematic range and random set-up errors has been retrospectively and systematically analysed. This was done by calculating the error-bar dose distribution (ebDD) for all the plans and by defining metrics used to build protocols that aid plan assessment. Additionally, an example of how to clinically use the defined robustness database is given, whereby a plan with sub-optimal brainstem robustness was identified. The advantage of using different beam arrangements to improve plan robustness was analysed. Using the ebDD it was found that range errors had a smaller effect on the dose distribution than the corresponding set-up error in a single fraction, and that organs at risk were most robust to the range errors, whereas the target was more robust to set-up errors. A database was created to aid planners in terms of plan robustness aims in these volumes. This resulted in the definition of site-specific robustness protocols. The use of robustness constraints allowed for the identification of a specific patient who may have benefited from a more individualized treatment. A new beam arrangement was shown to be preferable when balancing conformality and robustness for this case. The ebDD and error-bar volume histogram proved effective in analysing plan robustness. The process of retrospective analysis could be used to establish site-specific robustness planning protocols in proton therapy. These protocols allow the planner to identify plans that, although delivering a dosimetrically adequate dose distribution, have sub-optimal robustness to these uncertainties. For such cases the use of different beam start conditions may improve plan robustness to set-up and range uncertainties.
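
    The error-bar dose distribution idea (recompute the plan under range and set-up perturbations and take the per-voxel spread as the robustness error bar) can be sketched schematically as below. The dose arrays, perturbation scenarios and the spread metric are illustrative assumptions, not the clinical workflow of the study.

```python
import numpy as np

def error_bar_dose(nominal_dose, perturbed_doses):
    """Per-voxel error-bar dose: half the max-min spread over perturbed scenarios."""
    stack = np.stack([nominal_dose] + list(perturbed_doses))
    return 0.5 * (stack.max(axis=0) - stack.min(axis=0))

rng = np.random.default_rng(1)
nominal = rng.uniform(0, 60, size=(20, 20, 10))          # toy dose grid in Gy
# Toy "scenarios": shifted / scaled copies standing in for set-up and range errors.
scenarios = [np.roll(nominal, shift, axis=0) for shift in (-1, 1)] + [1.03 * nominal, 0.97 * nominal]

ebdd = error_bar_dose(nominal, scenarios)
target_mask = np.zeros_like(nominal, dtype=bool)
target_mask[8:12, 8:12, 3:7] = True

# A simple robustness metric: fraction of target voxels whose error bar exceeds 3 Gy.
print("target voxels with error bar > 3 Gy:", round((ebdd[target_mask] > 3.0).mean(), 3))
```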

  7. Moment expansion for ionospheric range error

    NASA Technical Reports Server (NTRS)

    Mallinckrodt, A.; Reich, R.; Parker, H.; Berbert, J.

    1972-01-01

    On a plane earth, the ionospheric or tropospheric range error depends only on the total refractivity content, or zeroth moment, of the refracting layer and on the elevation angle. On a spherical earth, however, the dependence is more complex, so for more accurate results it has been necessary to resort to complex ray-tracing calculations. A simple, high-accuracy alternative to the ray-tracing calculation is presented. By appropriate expansion of the angular dependence in the ray-tracing integral in a power series in height, an expression is obtained for the range error in terms of a simple function of the elevation angle, E, at the expansion height and of the mth moment of the refractivity, N, distribution about the expansion height. The rapidity of convergence depends heavily on the choice of expansion height. For expansion heights in the neighborhood of the centroid of the layer (300-490 km), the expansion to m = 2 (three terms) gives results accurate to about 0.4% at E = 10 deg. As an analytic tool, the expansion affords some insight into the influence of layer shape on range errors in special problems.
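
    The building blocks of the expansion are the moments of the refractivity profile about the chosen expansion height. The sketch below computes the zeroth through second moments for an assumed Chapman-like layer; the profile and expansion height are illustrative, and the elevation-angle functions that map moments to range error are not reproduced here.

```python
import numpy as np

def chapman_layer(h, n_max=1e12, h_peak=350.0, scale=60.0):
    """Toy Chapman-like layer profile (arbitrary units), h in km."""
    z = (h - h_peak) / scale
    return n_max * np.exp(1.0 - z - np.exp(-z))

h = np.linspace(100.0, 1000.0, 2000)          # height grid, km (uniform spacing)
N = chapman_layer(h)                          # refractivity-like profile

def moment(profile, heights, m, h0):
    """m-th moment of the profile about the expansion height h0 (rectangle rule)."""
    dh = heights[1] - heights[0]
    return np.sum(profile * (heights - h0) ** m) * dh

h0 = 350.0                                    # expansion height near the layer centroid
for m in range(3):                            # zeroth, first and second moments
    print(f"moment m={m}: {moment(N, h, m, h0):.3e}")
```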

  8. Development of a simple system for simultaneously measuring 6DOF geometric motion errors of a linear guide.

    PubMed

    Qibo, Feng; Bin, Zhang; Cunxing, Cui; Cuifang, Kuang; Yusheng, Zhai; Fenglin, You

    2013-11-04

    A simple method for simultaneously measuring the 6DOF geometric motion errors of a linear guide was proposed. The mechanisms for measuring straightness and angular errors and for enhancing their resolution are described in detail. A common-path method for measuring laser beam drift was proposed, and it was used to compensate the errors produced by beam drift in the 6DOF geometric error measurements. A compact 6DOF system was built. Calibration experiments against standard measurement instruments showed that our system has a standard deviation of 0.5 µm in a range of ± 100 µm for the straightness measurements, and standard deviations of 0.5", 0.5", and 1.0" in the range of ± 100" for pitch, yaw, and roll measurements, respectively.

  9. Robust keyword retrieval method for OCRed text

    NASA Astrophysics Data System (ADS)

    Fujii, Yusaku; Takebe, Hiroaki; Tanaka, Hiroshi; Hotta, Yoshinobu

    2011-01-01

    Document management systems have become important because of the growing popularity of electronic filing of documents and scanning of books, magazines, manuals, etc., through a scanner or a digital camera, for storage or reading on a PC or an electronic book. Text information acquired by optical character recognition (OCR) is usually added to the electronic documents for document retrieval. Since texts generated by OCR generally include character recognition errors, robust retrieval methods have been introduced to overcome this problem. In this paper, we propose a retrieval method that is robust against both character segmentation and recognition errors. In the proposed method, allowing noise characters to be inserted into, and characters to be dropped from, the keyword during retrieval provides robustness against character segmentation errors, while substituting each keyword character with its OCR recognition candidates or with any other character provides robustness against character recognition errors. The recall rate of the proposed method was 15% higher than that of the conventional method. However, the precision rate was 64% lower.
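
    A generic way to obtain this kind of tolerance is approximate string matching, in which insertions and deletions stand in for segmentation errors and substitutions stand in for recognition errors. The sketch below uses a plain edit distance rather than the candidate-based substitution of the proposed method; the example text, costs and error threshold are assumptions.

```python
def edit_distance(keyword, text_span, sub_cost=1.0, ins_cost=1.0, del_cost=1.0):
    """Levenshtein distance: substitutions model recognition errors,
    insertions/deletions model segmentation errors (split or merged characters)."""
    m, n = len(keyword), len(text_span)
    d = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        d[i][0] = i * del_cost
    for j in range(1, n + 1):
        d[0][j] = j * ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            cost = 0.0 if keyword[i - 1] == text_span[j - 1] else sub_cost
            d[i][j] = min(d[i - 1][j] + del_cost,      # drop a keyword character
                          d[i][j - 1] + ins_cost,      # absorb a noise character
                          d[i - 1][j - 1] + cost)      # match or substitute
    return d[m][n]

def fuzzy_find(keyword, ocr_text, max_errors=2):
    """Return positions where the keyword matches the OCR text within max_errors edits."""
    hits = []
    k = len(keyword)
    for start in range(len(ocr_text) - k + 1):
        for length in (k - 1, k, k + 1):               # allow split/merged characters
            span = ocr_text[start:start + length]
            if edit_distance(keyword, span) <= max_errors:
                hits.append((start, span))
                break
    return hits

ocr_text = "docurnent retneval with recognitlon errors"   # typical OCR confusions
print(fuzzy_find("document", ocr_text))
print(fuzzy_find("retrieval", ocr_text))
```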

  10. Synthesis of hover autopilots for rotary-wing VTOL aircraft

    NASA Technical Reports Server (NTRS)

    Hall, W. E.; Bryson, A. E., Jr.

    1972-01-01

    The practical situation is considered where imperfect information on only a few rotor and fuselage state variables is available. Filters are designed to estimate all the state variables from noisy measurements of fuselage pitch/roll angles and from noisy measurements of both fuselage and rotor pitch/roll angles. The mean square response of the vehicle to a very gusty, random wind is computed using various filter/controllers and is found to be quite satisfactory although, of course, not as good as when one has perfect information (the idealized case). The second part of the report considers precision hover over a point on the ground. A vehicle model without rotor dynamics is used, and feedback signals in position and the integral of position error are added. The mean square response of the vehicle to a very gusty, random wind is computed, assuming perfect information feedback, and is found to be excellent. The integral error feedback gives zero position error for a steady wind, and smaller position error for a random wind.
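
    The effect of the integral-of-position-error feedback can be seen in a one-dimensional toy hover model: proportional-plus-derivative feedback alone leaves a constant offset under a steady wind, while adding the integral term drives the steady-state position error to zero. The gains, unit mass and wind force below are arbitrary assumptions, not the rotor/fuselage model of the report.

```python
def simulate_hover(ki, steady_wind=2.0, kp=4.0, kd=3.0, dt=0.01, t_end=30.0):
    """1-D point-mass hover under a constant wind force, with PD(+I) position feedback."""
    x, v, integ = 0.0, 0.0, 0.0              # position error, velocity, integral of error
    for _ in range(int(t_end / dt)):
        u = -kp * x - kd * v - ki * integ    # control force
        a = u + steady_wind                  # unit mass; wind acts as a constant disturbance
        v += a * dt
        x += v * dt
        integ += x * dt
    return x

print("steady-state position error, PD only :", round(simulate_hover(ki=0.0), 3))
print("steady-state position error, PD + I  :", round(simulate_hover(ki=2.0), 3))
```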

  11. If You're House Is Still Available, Send Me an Email: Personality Influences Reactions to Written Errors in Email Messages.

    PubMed

    Boland, Julie E; Queen, Robin

    2016-01-01

    The increasing prevalence of social media means that we often encounter written language characterized by both stylistic variation and outright errors. How does the personality of the reader modulate reactions to non-standard text? Experimental participants read 'email responses' to an ad for a housemate that either contained no errors or had been altered to include either typos (e.g., teh) or homophonous grammar errors (grammos, e.g., to/too, it's/its). Participants completed a 10-item evaluation scale for each message, which measured their impressions of the writer. In addition participants completed a Big Five personality assessment and answered demographic and language attitude questions. Both typos and grammos had a negative impact on the evaluation scale. This negative impact was not modulated by age, education, electronic communication frequency, or pleasure reading time. In contrast, personality traits did modulate assessments, and did so in distinct ways for grammos and typos.

  12. If You’re House Is Still Available, Send Me an Email: Personality Influences Reactions to Written Errors in Email Messages

    PubMed Central

    2016-01-01

    The increasing prevalence of social media means that we often encounter written language characterized by both stylistic variation and outright errors. How does the personality of the reader modulate reactions to non-standard text? Experimental participants read ‘email responses’ to an ad for a housemate that either contained no errors or had been altered to include either typos (e.g., teh) or homophonous grammar errors (grammos, e.g., to/too, it’s/its). Participants completed a 10-item evaluation scale for each message, which measured their impressions of the writer. In addition participants completed a Big Five personality assessment and answered demographic and language attitude questions. Both typos and grammos had a negative impact on the evaluation scale. This negative impact was not modulated by age, education, electronic communication frequency, or pleasure reading time. In contrast, personality traits did modulate assessments, and did so in distinct ways for grammos and typos. PMID:26959823

  13. Quantification of errors in ordinal outcome scales using shannon entropy: effect on sample size calculations.

    PubMed

    Mandava, Pitchaiah; Krumpelman, Chase S; Shah, Jharna N; White, Donna L; Kent, Thomas A

    2013-01-01

    Clinical trial outcomes often involve an ordinal scale of subjective functional assessments, but the optimal way to quantify results is not clear. In stroke, for the most commonly used scale, the modified Rankin Score (mRS), analysing the full range of scores ("shift" analysis) has been proposed as superior to dichotomization because of greater information transfer. The influence of known uncertainties in mRS assessment has not been quantified. We hypothesized that errors caused by these uncertainties could be quantified by applying information theory. Using Shannon's model, we quantified errors of the "shift" approach compared to dichotomized outcomes using published distributions of mRS uncertainties, and applied this model to clinical trials. We identified 35 randomized stroke trials that met inclusion criteria. Each trial's mRS distribution was multiplied with the noise distribution from published mRS inter-rater variability to generate an error percentage for "shift" and dichotomized cut-points. For the SAINT I neuroprotectant trial, which was considered positive by "shift" mRS analysis while the larger follow-up SAINT II trial was negative, we recalculated the sample size that would have been required had classification uncertainty been taken into account. Considering the full mRS range, the error rate was 26.1%±5.31 (mean±SD). Error rates were lower for all dichotomizations tested using cut-points (e.g. mRS 1; 6.8%±2.89; overall p<0.001). Taking errors into account, SAINT I would have required 24% more subjects than were randomized. We show that when uncertainty in assessments is considered, the lowest error rates occur with dichotomization. While using the full range of mRS is conceptually appealing, the gain of information is counter-balanced by a decrease in reliability. The resulting errors need to be considered, since sample size may otherwise be underestimated. In principle, we have outlined an approach to error estimation for any condition in which there are uncertainties in outcome assessment. We provide the user with programs to calculate and incorporate errors into sample size estimation.
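
    The core bookkeeping (folding a trial's outcome distribution with an inter-rater confusion distribution to obtain the probability of scoring a subject into the wrong category, for the full scale and for a dichotomized cut-point) can be sketched as below. The confusion matrix and trial distribution are made-up illustrations, not the published inter-rater data, and the entropy-based machinery of the paper is not reproduced.

```python
import numpy as np

# Toy distribution of true mRS scores (0-6) in a trial arm.
p_true = np.array([0.10, 0.15, 0.15, 0.20, 0.20, 0.10, 0.10])

# Toy inter-rater confusion matrix: row = true score, column = assigned score.
confusion = np.full((7, 7), 0.02)
np.fill_diagonal(confusion, 0.0)
np.fill_diagonal(confusion, 1.0 - confusion.sum(axis=1))

def misclassification_rate(p_true, confusion, cut=None):
    """Probability that the assigned category differs from the true one.

    If `cut` is given, scores are first dichotomized at mRS <= cut versus > cut,
    so only errors that cross the cut-point count."""
    if cut is None:
        correct = np.sum(p_true * np.diag(confusion))
        return 1.0 - correct
    good = np.arange(7) <= cut
    cross = confusion[np.ix_(good, ~good)].sum(axis=1) * p_true[good]
    cross_back = confusion[np.ix_(~good, good)].sum(axis=1) * p_true[~good]
    return cross.sum() + cross_back.sum()

print("full-scale ('shift') error rate :", round(misclassification_rate(p_true, confusion), 3))
print("dichotomized (mRS <= 1) error   :", round(misclassification_rate(p_true, confusion, cut=1), 3))
```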

  14. Evaluation of real-time data obtained from gravimetric preparation of antineoplastic agents shows medication errors with possible critical therapeutic impact: Results of a large-scale, multicentre, multinational, retrospective study.

    PubMed

    Terkola, R; Czejka, M; Bérubé, J

    2017-08-01

    Medication errors are a significant cause of morbidity and mortality especially with antineoplastic drugs, owing to their narrow therapeutic index. Gravimetric workflow software systems have the potential to reduce volumetric errors during intravenous antineoplastic drug preparation which may occur when verification is reliant on visual inspection. Our aim was to detect medication errors with possible critical therapeutic impact as determined by the rate of prevented medication errors in chemotherapy compounding after implementation of gravimetric measurement. A large-scale, retrospective analysis of data was carried out, related to medication errors identified during preparation of antineoplastic drugs in 10 pharmacy services ("centres") in five European countries following the introduction of an intravenous workflow software gravimetric system. Errors were defined as errors in dose volumes outside tolerance levels, identified during weighing stages of preparation of chemotherapy solutions which would not otherwise have been detected by conventional visual inspection. The gravimetric system detected that 7.89% of the 759 060 doses of antineoplastic drugs prepared at participating centres between July 2011 and October 2015 had error levels outside the accepted tolerance range set by individual centres, and prevented these doses from reaching patients. The proportion of antineoplastic preparations with deviations >10% ranged from 0.49% to 5.04% across sites, with a mean of 2.25%. The proportion of preparations with deviations >20% ranged from 0.21% to 1.27% across sites, with a mean of 0.71%. There was considerable variation in error levels for different antineoplastic agents. Introduction of a gravimetric preparation system for antineoplastic agents detected and prevented dosing errors which would not have been recognized with traditional methods and could have resulted in toxicity or suboptimal therapeutic outcomes for patients undergoing anticancer treatment. © 2017 The Authors. Journal of Clinical Pharmacy and Therapeutics Published by John Wiley & Sons Ltd.

  15. Prescribing errors during hospital inpatient care: factors influencing identification by pharmacists.

    PubMed

    Tully, Mary P; Buchan, Iain E

    2009-12-01

    To investigate the prevalence of prescribing errors identified by pharmacists in hospital inpatients and the factors influencing error identification rates by pharmacists throughout hospital admission. 880-bed university teaching hospital in North-west England. Data about prescribing errors identified by pharmacists (a median of 9 pharmacists, range 4-17, collecting data per day) during routine work were prospectively recorded on 38 randomly selected days over 18 months. Proportion of new medication orders in which an error was identified; predictors of error identification rate, adjusted for workload and seniority of pharmacist, day of week, type of ward or stage of patient admission. 33,012 new medication orders were reviewed for 5,199 patients; 3,455 errors (in 10.5% of orders) were identified for 2,040 patients (39.2%; median 1, range 1-12). Most were problem orders (1,456, 42.1%) or potentially significant errors (1,748, 50.6%); 197 (5.7%) were potentially serious; 1.6% (n = 54) were potentially severe or fatal. Errors were 41% (CI: 28-56%) more likely to be identified at a patient's admission than at other times, independent of confounders. Workload was the strongest predictor of error identification rates, with 40% (33-46%) fewer errors identified on the busiest days than at other times. Errors identified fell by 1.9% (1.5-2.3%) for every additional chart checked, independent of confounders. Pharmacists routinely identify errors, but increasing workload may reduce identification rates. Where resources are limited, they may be better spent on identifying and addressing errors immediately after admission to hospital.

  16. Performance Analysis of Ranging Techniques for the KPLO Mission

    NASA Astrophysics Data System (ADS)

    Park, Sungjoon; Moon, Sangman

    2018-03-01

    In this study, the performance of ranging techniques for the Korea Pathfinder Lunar Orbiter (KPLO) space communication system is investigated. KPLO is the first lunar mission of Korea, and pseudo-noise (PN) ranging will be used to support the mission along with sequential ranging. We compared the performance of both ranging techniques using the criteria of accuracy, acquisition probability, and measurement time. First, we investigated the end-to-end accuracy error of a ranging technique, incorporating all error sources such as those from the ground stations and the spacecraft communication system. This study demonstrates that increasing the clock frequency of the ranging system is not required when the dominant factor of the accuracy error is independent of the thermal noise of the ranging technique being used in the system. Based on this understanding of ranging accuracy, the measurement times of PN and sequential ranging are further investigated and compared, while both techniques satisfied the accuracy and acquisition requirements. We demonstrated that PN ranging performed better than sequential ranging in the signal-to-noise ratio (SNR) regime where KPLO will be operating, and we found that the T2B (weighted-voting balanced Tausworthe, voting v = 2) code is the best choice among the PN codes available for the KPLO mission.

  17. Assessment of the accuracy of global geodetic satellite laser ranging observations and estimated impact on ITRF scale: estimation of systematic errors in LAGEOS observations 1993-2014

    NASA Astrophysics Data System (ADS)

    Appleby, Graham; Rodríguez, José; Altamimi, Zuheir

    2016-12-01

    Satellite laser ranging (SLR) to the geodetic satellites LAGEOS and LAGEOS-2 uniquely determines the origin of the terrestrial reference frame and, jointly with very long baseline interferometry, its scale. Given such a fundamental role in satellite geodesy, it is crucial that any systematic errors in either technique are at an absolute minimum as efforts continue to realise the reference frame at millimetre levels of accuracy to meet the present and future science requirements. Here, we examine the intrinsic accuracy of SLR measurements made by tracking stations of the International Laser Ranging Service using normal point observations of the two LAGEOS satellites in the period 1993 to 2014. The approach we investigate in this paper is to compute weekly reference frame solutions solving for satellite initial state vectors, station coordinates and daily Earth orientation parameters, estimating along with these weekly average range errors for each and every one of the observing stations. Potential issues in any of the large number of SLR stations assumed to have been free of error in previous realisations of the ITRF may have been absorbed in the reference frame, primarily in station height. Likewise, systematic range errors estimated against a fixed frame that may itself suffer from accuracy issues will absorb network-wide problems into station-specific results. Our results suggest that in the past two decades, the scale of the ITRF derived from the SLR technique has been close to 0.7 ppb too small, due to systematic errors in the range measurements, in their treatment, or in both. We discuss these results in the context of preparations for ITRF2014 and additionally consider the impact of this work on the currently adopted value of the geocentric gravitational constant, GM.

  18. Ranging/tracking system for proximity operations

    NASA Technical Reports Server (NTRS)

    Nilsen, P.; Udalov, S.

    1982-01-01

    The hardware development and testing phase of a hand-held radar for ranging and tracking during Shuttle proximity operations is considered. The radar is to measure range to a 3 sigma accuracy of 1 m (3.28 ft) out to a maximum range of 1850 m (6000 ft), and velocity to a 3 sigma accuracy of 0.03 m/s (0.1 ft/s). Size and weight are similar to those of the hand-held radars frequently used by motorcycle police officers. Meeting these goals for a target in free space proved very difficult in the testing program; however, at a range of approximately 700 m, the 3 sigma range error was found to be 0.96 m. It is felt that much of this error is due to clutter in the test environment. As an example of the velocity accuracy, at a range of 450 m, a 3 sigma velocity error of 0.02 m/s was measured. The principles of the radar and recommended changes to its design are given. Analyses performed in support of the design process, the actual circuit diagrams, and the software listing are included.

  19. False recollection of emotional pictures in Alzheimer's disease.

    PubMed

    Gallo, David A; Foster, Katherine T; Wong, Jessica T; Bennett, David A

    2010-10-01

    Alzheimer's Disease (AD) can reduce the effects of emotional content on memory for studied pictures, but less is known about false memory. In healthy adults, emotionally arousing pictures can be more susceptible to false memory effects than neutral pictures, potentially because emotional pictures share conceptual similarities that cause memory confusions. We investigated these effects in AD patients and healthy controls. Participants studied pictures and their verbal labels, and then picture recollection was tested using verbal labels as retrieval cues. Some of the test labels had been associated with a picture at study, whereas others had not. On this picture recollection test, we found that both AD patients and controls incorrectly endorsed some of the test labels that had not been studied with pictures. These errors were associated with medium to high levels of confidence, indicating some degree of false recollection. Critically, these false recollection judgments were greater for emotional compared to neutral items, especially for positively valenced items, in both AD patients and controls. Dysfunction of the amygdala and hippocampus in early AD may impair recollection, but AD did not disrupt the effect of emotion on false recollection judgments. Copyright © 2010 Elsevier Ltd. All rights reserved.

  20. The inference of atmospheric ozone using satellite horizon measurements in the 1042 per cm band.

    NASA Technical Reports Server (NTRS)

    Russell, J. M., III; Drayson, S. R.

    1972-01-01

    Description of a method for inferring atmospheric ozone information using infrared horizon radiance measurements in the 1042 per cm band. An analysis based on this method proves the feasibility of the horizon experiment for determining ozone information and shows that the ozone partial pressure can be determined in the altitude range from 50 down to 25 km. A comprehensive error study is conducted which considers effects of individual errors as well as the effect of all error sources acting simultaneously. The results show that in the absence of a temperature profile bias error, it should be possible to determine the ozone partial pressure to within an rms value of 15 to 20%. It may be possible to reduce this rms error to 5% by smoothing the solution profile. These results would be seriously degraded by an atmospheric temperature bias error of only 3 K; thus, great care should be taken to minimize this source of error in an experiment. It is probable, in view of recent technological developments, that these errors will be much smaller in future flight experiments and the altitude range will widen to include from about 60 km down to the tropopause region.

  1. Clinical experience with image-guided radiotherapy in an accelerated partial breast intensity-modulated radiotherapy protocol.

    PubMed

    Leonard, Charles E; Tallhamer, Michael; Johnson, Tim; Hunter, Kari; Howell, Kathryn; Kercher, Jane; Widener, Jodi; Kaske, Terese; Paul, Devchand; Sedlacek, Scot; Carter, Dennis L

    2010-02-01

    To explore the feasibility of fiducial markers for the use of image-guided radiotherapy (IGRT) in an accelerated partial breast intensity-modulated radiotherapy protocol. Nineteen patients consented to an institutional review board approved protocol of accelerated partial breast intensity-modulated radiotherapy with fiducial marker placement and treatment with IGRT. Patients (1 patient with bilateral breast cancer; 20 total breasts) underwent ultrasound guided implantation of three 1.2- x 3-mm gold markers placed around the surgical cavity. For each patient, table shifts (inferior/superior, right/left lateral, and anterior/posterior) and minimum, maximum, and mean error with standard deviation were recorded for each of the 10 BID treatments. The dose contribution of daily orthogonal films was also examined. All IGRT patients underwent successful marker placement. In all, 200 IGRT treatment sessions were performed. The average vector displacement was 4 mm (range, 2-7 mm). The average superior/inferior shift was 2 mm (range, 0-5 mm), the average lateral shift was 2 mm (range, 1-4 mm), and the average anterior/posterior shift was 3 mm (range, 1-5 mm). This study shows that IGRT can be successfully used in an accelerated partial breast intensity-modulated radiotherapy protocol. The authors believe that this technique has increased daily treatment accuracy and permitted reduction in the margin added to the clinical target volume to form the planning target volume. Copyright 2010 Elsevier Inc. All rights reserved.
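
    As a quick check of how the component shifts relate to the reported vector displacement, the three orthogonal corrections combine as a root-sum-square. The snippet below simply reuses the average shifts quoted above; individual fractions would of course differ.

```python
import math

# Average shifts reported above (mm): superior/inferior, lateral, anterior/posterior.
si, lat, ap = 2.0, 2.0, 3.0
vector = math.sqrt(si**2 + lat**2 + ap**2)
print(f"vector displacement ~ {vector:.1f} mm")   # same order as the reported 4 mm average
```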

  2. Word recognition in Alzheimer's disease: Effects of semantic degeneration.

    PubMed

    Cuetos, Fernando; Arce, Noemí; Martínez, Carmen; Ellis, Andrew W

    2017-03-01

    Impairments of word recognition in Alzheimer's disease (AD) have been less widely investigated than impairments affecting word retrieval and production. In particular, we know little about what makes individual words easier or harder for patients with AD to recognize. We used a lexical selection task in which participants were shown sets of four items, each set consisting of one word and three non-words. The task was simply to point to the word on each trial. Forty patients with mild-to-moderate AD were significantly impaired on this task relative to matched controls who made very few errors. The number of patients with AD able to recognize each word correctly was predicted by the frequency, age of acquisition, and imageability of the words, but not by their length or number of orthographic neighbours. Patient Mini-Mental State Examination and phonological fluency scores also predicted the number of words recognized. We propose that progressive degradation of central semantic representations in AD differentially affects the ability to recognize low-imageability, low-frequency, late-acquired words, with the same factors affecting word recognition as affecting word retrieval. © 2015 The British Psychological Society.

  3. Shared and differentiated motor skill impairments in children with dyslexia and/or attention deficit disorder: From simple to complex sequential coordination

    PubMed Central

    Morin-Moncet, Olivier; Bélanger, Anne-Marie; Beauchamp, Miriam H.; Leonard, Gabriel

    2017-01-01

    Dyslexia and attention deficit disorder (AD) are prevalent neurodevelopmental conditions in children and adolescents. They have high comorbidity rates and have both been associated with motor difficulties. Little is known, however, about what is shared or differentiated in dyslexia and AD in terms of motor abilities. Even when motor skill problems are identified, few studies have used the same measurement tools, resulting in inconsistent findings. The present study assessed increasingly complex gross motor skills in children and adolescents with dyslexia, AD, and with both dyslexia and AD. Our results suggest normal performance on simple motor-speed tests, whereas all three groups share a common impairment on unimanual and bimanual sequential motor tasks. Children in these groups generally improve with practice to the same level as normal subjects, though they make more errors. In addition, children with AD are the most impaired on complex bimanual out-of-phase movements and with manual dexterity. These latter findings are examined in light of the Multiple Deficit Model. PMID:28542319

  4. Posterior archaeomagnetic dating for the early Medieval site Thunau am Kamp, Austria

    NASA Astrophysics Data System (ADS)

    Schnepp, Elisabeth; Lanos, Philippe; Obenaus, Martin

    2014-05-01

    The early medieval site Thunau am Kamp consists of a hill fort and a settlement with a large burial ground on the bank of the river Kamp. All these features have been under archaeological investigation for many years. The settlement comprises many pit houses, some in stratigraphic order. Every pit house was equipped with at least one cupola oven and/or a hearth or fireplace. Sometimes the entire cupola was preserved. The site was occupied during the 9th and 10th centuries AD according to potsherds, which seem to indicate two phases: in the older phase ovens were placed in the corner of the houses, while during the younger phase they are found in the middle of the wall. In order to increase the archaeomagnetic data base, 14 ovens have been sampled. They fill the temporal gap in the data base for Austria around 900 AD. Laboratory treatment included alternating field and thermal demagnetisations as well as rock magnetic experiments. The baked clay, which was formed from a loess sediment, has preserved stable directions. Apart from one exception, the mean characteristic remanent magnetization directions are concentrated around 900 AD on the early medieval part of the directional archaeomagnetic reference curve of Austria (Schnepp & Lanos, GJI, 2006). Using this curve, archaeomagnetic dating with RenDate provides ages between 800 and 1100 AD which are in agreement with archaeological dating. In one case archaeomagnetic dating is even more precise. Together with the archaeological age estimates and stratigraphic information, the new data have been included in the database of the Austrian curve. It has been recalculated using a new version of RenCurve. The new data confine the curve and its error band considerably in the time interval 800 to 1100 AD. The curve calibration process also provides a probability density distribution for each structure, which allows for posterior dating. This refines temporal errors considerably. The usefulness of such an approach and the archaeological implications will be discussed.

  5. Effects of tropospheric and ionospheric refraction errors in the utilization of GEOS-C altimeter data

    NASA Technical Reports Server (NTRS)

    Goad, C. C.

    1977-01-01

    The effects of tropospheric and ionospheric refraction errors are analyzed for the GEOS-C altimeter project in terms of their resultant effects on C-band orbits and the altimeter measurement itself. Operational procedures using surface meteorological measurements at ground stations and monthly means for ocean surface conditions are assumed, with no corrections made for ionospheric effects. Effects on the orbit height due to tropospheric errors are approximately 15 cm for single pass short arcs (such as for calibration) and 10 cm for global orbits of one revolution. Orbit height errors due to neglect of the ionosphere have an amplitude of approximately 40 cm when the orbits are determined from C-band range data with predominantly daylight tracking. Altimeter measurement errors are approximately 10 cm due to residual tropospheric refraction correction errors. Ionospheric effects on the altimeter range measurement are also on the order of 10 cm during the GEOS-C launch and early operation period.

  6. A closer look at visually guided saccades in autism and Asperger’s disorder

    PubMed Central

    Johnson, Beth P.; Rinehart, Nicole J.; Papadopoulos, Nicole; Tonge, Bruce; Millist, Lynette; White, Owen; Fielding, Joanne

    2012-01-01

    Motor impairments have been found to be a significant clinical feature associated with autism and Asperger’s disorder (AD) in addition to core symptoms of communication and social cognition deficits. Motor deficits in high-functioning autism (HFA) and AD may differentiate these disorders, particularly with respect to the role of the cerebellum in motor functioning. Current neuroimaging and behavioral evidence suggests greater disruption of the cerebellum in HFA than AD. Investigations of ocular motor functioning have previously been used in clinical populations to assess the integrity of the cerebellar networks, through examination of saccade accuracy and the integrity of saccade dynamics. Previous investigations of visually guided saccades in HFA and AD have only assessed basic saccade metrics, such as latency, amplitude, and gain, as well as peak velocity. We used a simple visually guided saccade paradigm to further characterize the profile of visually guided saccade metrics and dynamics in HFA and AD. It was found that children with HFA, but not AD, were more inaccurate across both small (5°) and large (10°) target amplitudes, and final eye position was hypometric at 10°. These findings suggest greater functional disturbance of the cerebellum in HFA than AD, and suggest fundamental difficulties with visual error monitoring in HFA. PMID:23162442

  7. Piecewise compensation for the nonlinear error of fiber-optic gyroscope scale factor

    NASA Astrophysics Data System (ADS)

    Zhang, Yonggang; Wu, Xunfeng; Yuan, Shun; Wu, Lei

    2013-08-01

    Fiber-Optic Gyroscope (FOG) scale factor nonlinear error will result in errors in a Strapdown Inertial Navigation System (SINS). In order to reduce the nonlinear error of the FOG scale factor in SINS, a compensation method is proposed in this paper based on piecewise curve fitting of the FOG output. Firstly, the causes of FOG scale factor error are introduced and the definition of nonlinear degree is provided. Then the output range of the FOG is divided into several small pieces, and curve fitting is performed within each piece to obtain the scale factor parameters. Different scale factor parameters are used in different pieces to improve FOG output precision. These parameters are identified by using a three-axis turntable, and the nonlinear error of the FOG scale factor can be reduced. Finally, a three-axis swing experiment of SINS verifies that the proposed method can reduce attitude output errors of SINS by compensating the nonlinear error of the FOG scale factor and improve the precision of navigation. The results of the experiments also demonstrate that the compensation scheme is easy to implement. It can effectively compensate the nonlinear error of the FOG scale factor with only slightly increased computational complexity. This method can be used in FOG-based inertial systems to improve precision.
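
    The piecewise idea (split the FOG output range into segments and fit a separate scale-factor curve in each) can be sketched on simulated calibration data as below. The nonlinearity model, number of pieces and fit order are assumptions for illustration, not the calibration of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated calibration data from a turntable: true angular rate vs. FOG raw output.
rate_true = np.linspace(-200.0, 200.0, 401)                     # deg/s
scale_true = 1.0 + 2e-4 * np.abs(rate_true)                     # mildly nonlinear scale factor
fog_output = rate_true * scale_true + 0.05 * rng.standard_normal(rate_true.size)

# Piecewise compensation: split the output range into segments and fit each separately.
edges = np.linspace(fog_output.min(), fog_output.max(), 9)      # 8 pieces
coeffs = []
for lo, hi in zip(edges[:-1], edges[1:]):
    m = (fog_output >= lo) & (fog_output <= hi)
    coeffs.append(np.polyfit(fog_output[m], rate_true[m], 1))   # linear fit per piece

def compensate(x):
    """Map raw FOG output back to angular rate using the per-piece fit."""
    i = int(np.clip(np.searchsorted(edges, x) - 1, 0, len(coeffs) - 1))
    return np.polyval(coeffs[i], x)

# Compare a single global linear fit with the piecewise fit.
global_fit = np.polyfit(fog_output, rate_true, 1)
err_global = np.abs(np.polyval(global_fit, fog_output) - rate_true).max()
err_piece = np.abs(np.array([compensate(x) for x in fog_output]) - rate_true).max()
print(f"max residual, global fit: {err_global:.3f} deg/s, piecewise: {err_piece:.3f} deg/s")
```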

  8. Radiographic absorptiometry method in measurement of localized alveolar bone density changes.

    PubMed

    Kuhl, E D; Nummikoski, P V

    2000-03-01

    The objective of this study was to measure the accuracy and precision of a radiographic absorptiometry method by using an occlusal density reference wedge in quantification of localized alveolar bone density changes. Twenty-two volunteer subjects had baseline and follow-up radiographs taken of mandibular premolar-molar regions with an occlusal density reference wedge in both films and added bone chips in the baseline films. The absolute bone equivalent densities were calculated in the areas that contained bone chips from the baseline and follow-up radiographs. The differences in densities described the masses of the added bone chips that were then compared with the true masses by using regression analysis. The correlation between the estimated and true bone-chip masses ranged from R = 0.82 to 0.94, depending on the background bone density. There was an average 22% overestimation of the mass of the bone chips when they were in low-density background, and up to 69% overestimation when in high-density background. The precision error of the method, which was calculated from duplicate bone density measurements of non-changing areas in both films, was 4.5%. The accuracy of the intraoral radiographic absorptiometry method is low when used for absolute quantification of bone density. However, the precision of the method is good and the correlation is linear, indicating that the method can be used for serial assessment of bone density changes at individual sites.

  9. Space-Borne Laser Altimeter Geolocation Error Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Fang, J.; Ai, Y.

    2018-05-01

    This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESat satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot is analysed in simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and, to satisfy the accuracy requirement for laser control points, a design index for each error source is put forward.
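
    A simplified version of such an error budget can be obtained by perturbing a toy geolocation model (spot position = platform position + range times an attitude-rotated pointing vector) with assumed 1-sigma errors and reading off the spread of the computed spot. The geometry, error magnitudes and rotation convention below are illustrative assumptions, not the rigorous model of the paper.

```python
import numpy as np

def geolocate(platform_pos, roll, pitch, yaw, point_angle, range_m):
    """Simplified geolocation: laser spot = platform position + range * pointing direction.

    The pointing direction is a near-nadir vector deflected by `point_angle` and then
    rotated by the platform attitude (roll, pitch, yaw)."""
    d = np.array([0.0, np.sin(point_angle), -np.cos(point_angle)])   # beam in body frame
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    Ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    Rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    return platform_pos + range_m * (Rz @ Ry @ Rx @ d)

# Nominal geometry (illustrative): 500 km altitude, near-nadir pointing.
pos0 = np.array([0.0, 0.0, 500e3])
nominal = geolocate(pos0, 0.0, 0.0, 0.0, np.radians(0.3), 500e3)

# Assumed 1-sigma error sources: 5 cm position, 1.5 arcsec attitude/pointing, 10 cm range.
arcsec = np.radians(1.0 / 3600.0)
rng = np.random.default_rng(2)
n = 5000
spots = np.array([
    geolocate(pos0 + 0.05 * rng.standard_normal(3),
              1.5 * arcsec * rng.standard_normal(),
              1.5 * arcsec * rng.standard_normal(),
              1.5 * arcsec * rng.standard_normal(),
              np.radians(0.3) + 1.5 * arcsec * rng.standard_normal(),
              500e3 + 0.10 * rng.standard_normal())
    for _ in range(n)
])
horiz = np.linalg.norm(spots[:, :2] - nominal[:2], axis=1)
print(f"RMS horizontal geolocation error ~ {np.sqrt((horiz**2).mean()):.2f} m")
```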

  10. Estimating predictive hydrological uncertainty by dressing deterministic and ensemble forecasts; a comparison, with application to Meuse and Rhine

    NASA Astrophysics Data System (ADS)

    Verkade, J. S.; Brown, J. D.; Davids, F.; Reggiani, P.; Weerts, A. H.

    2017-12-01

    Two statistical post-processing approaches for estimation of predictive hydrological uncertainty are compared: (i) 'dressing' of a deterministic forecast by adding a single, combined estimate of both hydrological and meteorological uncertainty and (ii) 'dressing' of an ensemble streamflow forecast by adding an estimate of hydrological uncertainty to each individual streamflow ensemble member. Both approaches aim to produce an estimate of the 'total uncertainty' that captures both the meteorological and hydrological uncertainties. They differ in the degree to which they make use of statistical post-processing techniques. In the 'lumped' approach, both sources of uncertainty are lumped by post-processing deterministic forecasts using their verifying observations. In the 'source-specific' approach, the meteorological uncertainties are estimated by an ensemble of weather forecasts. These ensemble members are routed through a hydrological model and a realization of the probability distribution of hydrological uncertainties (only) is then added to each ensemble member to arrive at an estimate of the total uncertainty. The techniques are applied to one location in the Meuse basin and three locations in the Rhine basin. Resulting forecasts are assessed for their reliability and sharpness, as well as compared in terms of multiple verification scores including the relative mean error, Brier Skill Score, Mean Continuous Ranked Probability Skill Score, Relative Operating Characteristic Score and Relative Economic Value. The dressed deterministic forecasts are generally more reliable than the dressed ensemble forecasts, but the latter are sharper. On balance, however, they show similar quality across a range of verification metrics, with the dressed ensembles coming out slightly better. Some additional analyses are suggested. Notably, these include statistical post-processing of the meteorological forecasts in order to increase their reliability, thus increasing the reliability of the streamflow forecasts produced with ensemble meteorological forcings.
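
    The 'source-specific' dressing step (adding a realization of the hydrological-error distribution to each streamflow ensemble member to represent the total uncertainty) can be sketched with made-up numbers as below; the error distribution, ensemble size and units are assumptions, not the Meuse/Rhine configuration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Raw ensemble streamflow forecast (m^3/s): meteorological uncertainty only.
raw_ensemble = rng.normal(loc=850.0, scale=60.0, size=50)

# Hydrological-uncertainty distribution, e.g. estimated from past model errors (assumed here).
def sample_hydro_error(size):
    return rng.normal(loc=0.0, scale=40.0, size=size)

# 'Source-specific' dressing: add independent hydrological-error realizations to each member.
n_dress = 20                                   # realizations per member
dressed = (raw_ensemble[:, None] + sample_hydro_error((raw_ensemble.size, n_dress))).ravel()

print(f"raw ensemble spread                        : {raw_ensemble.std():.1f} m^3/s")
print(f"dressed ensemble spread (total uncertainty): {dressed.std():.1f} m^3/s")
```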

  11. [Clinical results of the aspheric intraocular lens FY-60AD (Hoya) with particular respect to decentration and tilt].

    PubMed

    Mester, U; Heinen, S; Kaymak, H

    2010-09-01

    Aspheric intraocular lenses (IOLs) aim to improve visual function and particularly contrast vision by neutralizing spherical aberration. One drawback of such IOLs is the enhanced sensitivity to decentration and tilt, which can deteriorate image quality. A total of 30 patients who received bilateral phacoemulsification before implantation of the aspheric lens FY-60AD (Hoya) were included in a prospective study. In 25 of the patients (50 eyes) the following parameters could be assessed 3 months after surgery: visual acuity, refraction, contrast sensitivity, pupil size, wavefront errors and decentration and tilt using a newly developed device. The functional results were very satisfying and comparable to results gained with other aspheric IOLs. The mean refraction was sph + 0.1 D (±0.7 D) and cyl 0.6 D (±0.8 D). The spherical equivalent was −0.2 D (±0.6 D). Wavefront measurements revealed a good compensation of the corneal spherical aberration but vertical and horizontal coma also showed opposing values in the cornea and IOL. The assessment of the lens position using the Purkinje meter demonstrated uncritical amounts of decentration and tilt. The mean amount of decentration was 0.2 mm±0.2 mm in the horizontal and vertical directions. The mean amount of tilt was 4.0±2.1° in horizontal and 3.0±2.5° in vertical directions. In a normal dioptric power range the aspheric IOL FY-60AD compensates the corneal spherical aberration very well with only minimal decentration. The slight tilt is symmetrical in both eyes and corresponds to the position of the crystalline lens in young eyes. This may contribute to our findings of compensated corneal coma.

  12. Effect of noise and filtering on largest Lyapunov exponent of time series associated with human walking.

    PubMed

    Mehdizadeh, Sina; Sanjari, Mohammad Ali

    2017-11-07

    This study aimed to determine the effect of added noise, filtering and time series length on the largest Lyapunov exponent (LyE) value calculated for time series obtained from a passive dynamic walker. The simplest passive dynamic walker model, comprising two massless legs connected by a frictionless hinge joint at the hip, was adopted to generate walking time series. The generated time series was used to construct a state space with an embedding dimension of 3 and a time delay of 100 samples. The LyE was calculated as the exponential rate of divergence of neighboring trajectories of the state space using Rosenstein's algorithm. To determine the effect of noise on LyE values, seven levels of Gaussian white noise (SNR = 55-25 dB in 5 dB steps) were added to the time series. In addition, filtering was performed using a range of cutoff frequencies from 3 Hz to 19 Hz in 2 Hz steps. The LyE was calculated for both noise-free and noisy time series with different lengths of 6, 50, 100 and 150 strides. Results demonstrated a high percentage error in LyE in the presence of noise. These observations therefore suggest that Rosenstein's algorithm might not perform well in the presence of added experimental noise. Furthermore, the findings indicated that at least 50 walking strides are required to calculate LyE so as to account for the effect of noise. Finally, the observations support that a conservative filtering of the time series with a high cutoff frequency might be more appropriate prior to calculating LyE. Copyright © 2017 Elsevier Ltd. All rights reserved.
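
    A compact re-implementation of a Rosenstein-style LyE estimate (delay embedding, nearest neighbours outside a temporal exclusion window, average log divergence, slope of a fitted line) is sketched below. The embedding dimension (3) and delay (100 samples) follow the abstract; the test signal, exclusion window and fit range are assumptions, and the code is a simplification rather than the exact published algorithm.

```python
import numpy as np
from scipy.spatial.distance import cdist

def rosenstein_lye(x, dim=3, delay=100, fit_len=300, min_sep=None, dt=1.0):
    """Largest Lyapunov exponent via a simplified Rosenstein-style estimate."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    if min_sep is None:
        min_sep = delay                      # exclude temporally close points
    # Nearest neighbour of each embedded point, outside the exclusion window.
    dists = cdist(emb, emb)
    idx = np.arange(n)
    dists[np.abs(idx[:, None] - idx[None, :]) < min_sep] = np.inf
    nn = dists.argmin(axis=1)
    # Average log divergence of neighbour pairs over time.
    div = np.full(fit_len, np.nan)
    for t in range(fit_len):
        valid = (idx + t < n) & (nn + t < n)
        if valid.sum() == 0:
            break
        sep = np.linalg.norm(emb[idx[valid] + t] - emb[nn[valid] + t], axis=1)
        div[t] = np.mean(np.log(sep + 1e-12))
    good = ~np.isnan(div)
    slope, _ = np.polyfit(np.arange(fit_len)[good] * dt, div[good], 1)
    return slope

# Toy test signal: a noisy periodic "gait-like" series (LyE of a pure sinusoid is ~0).
t = np.arange(2000)
signal = np.sin(2 * np.pi * t / 120) + 0.02 * np.random.default_rng(3).standard_normal(t.size)
print(f"estimated LyE: {rosenstein_lye(signal):.4f} per sample")
```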

  13. Parallel computers - Estimate errors caused by imprecise data

    NASA Technical Reports Server (NTRS)

    Kreinovich, Vladik; Bernat, Andrew; Villa, Elsa; Mariscal, Yvonne

    1991-01-01

    A new approach to the problem of estimating errors caused by imprecise data is proposed in the context of software engineering. A software device is used to produce an ideal solution to the problem, when the computer is capable of computing errors of arbitrary programs. The software engineering aspect of this problem is to describe a device for computing the error estimates in software terms and then to provide precise numbers with error estimates to the user. The feasibility of the program capable of computing both some quantity and its error estimate in the range of possible measurement errors is demonstrated.

  14. Does Unit Analysis Help Students Construct Equations?

    ERIC Educational Resources Information Center

    Reed, Stephen K.

    2006-01-01

    Previous research has shown that students construct equations for word problems in which many of the terms have no referents. Experiment 1 attempted to eliminate some of these errors by providing instruction on canceling units. The failure of this method was attributed to the cognitive overload (Sweller, 2003) imposed by adding units to the…

  15. Measuring Effect Sizes: The Effect of Measurement Error. Working Paper 19

    ERIC Educational Resources Information Center

    Boyd, Donald; Grossman, Pamela; Lankford, Hamilton; Loeb, Susanna; Wyckoff, James

    2008-01-01

    Value-added models in education research allow researchers to explore how a wide variety of policies and measured school inputs affect the academic performance of students. Researchers typically quantify the impacts of such interventions in terms of "effect sizes", i.e., the estimated effect of a one standard deviation change in the…

  16. Strengthening Scientific Verbal Behavior: An Experimental Comparison of Progressively Prompted and Unprompted Programmed Instruction and Prose Tutorials

    ERIC Educational Resources Information Center

    Davis, Darrel R.; Bostow, Darrel E.; Heimisson, Gudmundur T.

    2007-01-01

    Web-based software was used to deliver and record the effects of programmed instruction that progressively added formal prompts until attempts were successful, programmed instruction with one attempt, and prose tutorials. Error-contingent progressive prompting took significantly longer than programmed instruction and prose. Both forms of…

  17. 75 FR 31419 - Submission for OMB Review; Comment Request

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-03

    ... Person Interview (PI) and CCM PFU have been added. These are to be conducted as part of a CCM evaluation... CCM PI OMB package. The CCM program will provide estimates of net coverage error and components of... (PI) and CCM PFU will be conducted. The purpose of the respondent debriefings is to obtain a...

  18. A Compilation of Laws Pertaining to Indians. State of Maine, July 1976.

    ERIC Educational Resources Information Center

    Maine State Dept. of Indian Affairs, Augusta.

    Compiled from the Maine Revised Statutes of 1964, the Constitution of Maine, and the current Resolves and Private and Special Laws, this document constitutes an update to a previous publication (January 1974), correcting errors and adding amendments through 1976. This compilation of laws pertaining to American Indians includes statutes on the…

  19. Author Correction to: Pooled Analyses of Phase III Studies of ADS-5102 (Amantadine) Extended-Release Capsules for Dyskinesia in Parkinson's Disease.

    PubMed

    Elmer, Lawrence W; Juncos, Jorge L; Singer, Carlos; Truong, Daniel D; Criswell, Susan R; Parashos, Sotirios; Felt, Larissa; Johnson, Reed; Patni, Rajiv

    2018-04-01

    An Online First version of this article was made available online at http://link.springer.com/journal/40263/onlineFirst/page/1 on 12 March 2018. An error was subsequently identified in the article, and the following correction should be noted.

  20. Robust Connectivity in Sensory and Ad Hoc Network

    DTIC Science & Technology

    2011-02-01

    as the prior probability is π0 = 0.8, the error probability should be capped at 0.2. This seemingly pathological result is due to the fact that the...publications and is the author of the book Multirate and Wavelet Signal Processing (Academic Press, 1998). His research interests include multiscale signal and

  1. Multi-template tensor-based morphometry: Application to analysis of Alzheimer's disease

    PubMed Central

    Koikkalainen, Juha; Lötjönen, Jyrki; Thurfjell, Lennart; Rueckert, Daniel; Waldemar, Gunhild; Soininen, Hilkka

    2012-01-01

    In this paper methods for using multiple templates in tensor-based morphometry (TBM) are presented and compared to the conventional single-template approach. TBM analysis requires non-rigid registrations which are often subject to registration errors. When using multiple templates and, therefore, multiple registrations, it can be assumed that the registration errors are averaged and eventually compensated. Four different methods are proposed for multi-template TBM. The methods were evaluated using magnetic resonance (MR) images of healthy controls, patients with stable or progressive mild cognitive impairment (MCI), and patients with Alzheimer's disease (AD) from the ADNI database (N=772). The performance of TBM features in classifying images was evaluated both quantitatively and qualitatively. Classification results show that the multi-template methods are statistically significantly better than the single-template method. The overall classification accuracy was 86.0% for the classification of control and AD subjects, and 72.1% for the classification of stable and progressive MCI subjects. The statistical group-level difference maps produced using multi-template TBM were smoother, formed larger continuous regions, and had larger t-values than the maps obtained with single-template TBM. PMID:21419228

  2. Proximal Versus Distal Control of Two-Joint Planar Reaching Movements in the Presence of Neuromuscular Noise

    PubMed Central

    Nguyen, Hung P.; Dingwell, Jonathan B.

    2012-01-01

    Determining how the human nervous system contends with neuro-motor noise is vital to understanding how humans achieve accurate goal-directed movements. Experimentally, people learning skilled tasks tend to reduce variability in distal joint movements more than in proximal joint movements. This suggests that they might be imposing greater control over distal joints than proximal joints. However, the reasons for this remain unclear, largely because it is not experimentally possible to directly manipulate either the noise or the control at each joint independently. Therefore, this study used a 2 degree-of-freedom torque driven arm model to determine how different combinations of noise and/or control independently applied at each joint affected the reaching accuracy and the total work required to make the movement. Signal-dependent noise was simultaneously and independently added to the shoulder and elbow torques to induce endpoint errors during planar reaching. Feedback control was then applied, independently and jointly, at each joint to reduce endpoint error due to the added neuromuscular noise. Movement direction and the inertia distribution along the arm were varied to quantify how these biomechanical variations affected the system performance. Endpoint error and total net work were computed as dependent measures. When each joint was independently subjected to noise in the absence of control, endpoint errors were more sensitive to distal (elbow) noise than to proximal (shoulder) noise for nearly all combinations of reaching direction and inertia ratio. The effects of distal noise on endpoint errors were more pronounced when inertia was distributed more toward the forearm. In contrast, the total net work decreased as mass was shifted to the upper arm for reaching movements in all directions. When noise was present at both joints and joint control was implemented, controlling the distal joint alone reduced endpoint errors more than controlling the proximal joint alone for nearly all combinations of reaching direction and inertia ratio. Applying control only at the distal joint was more effective at reducing endpoint errors when more of the mass was more proximally distributed. Likewise, controlling the distal joint alone required less total net work than controlling the proximal joint alone for nearly all combinations of reaching distance and inertia ratio. It is more efficient to reduce endpoint error and energetic cost by selectively applying control to reduce variability in the distal joint than the proximal joint. The reasons for this arise from the biomechanical configuration of the arm itself. PMID:22757504

  3. Proximal versus distal control of two-joint planar reaching movements in the presence of neuromuscular noise.

    PubMed

    Nguyen, Hung P; Dingwell, Jonathan B

    2012-06-01

    Determining how the human nervous system contends with neuro-motor noise is vital to understanding how humans achieve accurate goal-directed movements. Experimentally, people learning skilled tasks tend to reduce variability in distal joint movements more than in proximal joint movements. This suggests that they might be imposing greater control over distal joints than proximal joints. However, the reasons for this remain unclear, largely because it is not experimentally possible to directly manipulate either the noise or the control at each joint independently. Therefore, this study used a 2 degree-of-freedom torque driven arm model to determine how different combinations of noise and/or control independently applied at each joint affected the reaching accuracy and the total work required to make the movement. Signal-dependent noise was simultaneously and independently added to the shoulder and elbow torques to induce endpoint errors during planar reaching. Feedback control was then applied, independently and jointly, at each joint to reduce endpoint error due to the added neuromuscular noise. Movement direction and the inertia distribution along the arm were varied to quantify how these biomechanical variations affected the system performance. Endpoint error and total net work were computed as dependent measures. When each joint was independently subjected to noise in the absence of control, endpoint errors were more sensitive to distal (elbow) noise than to proximal (shoulder) noise for nearly all combinations of reaching direction and inertia ratio. The effects of distal noise on endpoint errors were more pronounced when inertia was distributed more toward the forearm. In contrast, the total net work decreased as mass was shifted to the upper arm for reaching movements in all directions. When noise was present at both joints and joint control was implemented, controlling the distal joint alone reduced endpoint errors more than controlling the proximal joint alone for nearly all combinations of reaching direction and inertia ratio. Applying control only at the distal joint was more effective at reducing endpoint errors when more of the mass was more proximally distributed. Likewise, controlling the distal joint alone required less total net work than controlling the proximal joint alone for nearly all combinations of reaching distance and inertia ratio. It is more efficient to reduce endpoint error and energetic cost by selectively applying control to reduce variability in the distal joint than the proximal joint. The reasons for this arise from the biomechanical configuration of the arm itself.
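
    The signal-dependent noise used in these simulations refers to noise whose standard deviation grows with the size of the commanded torque. A minimal sketch, with illustrative torque profiles and an assumed proportionality constant, is given below.

```python
import numpy as np

def add_signal_dependent_noise(torque, k=0.1, rng=None):
    """Add zero-mean noise whose standard deviation scales with the torque magnitude."""
    rng = rng if rng is not None else np.random.default_rng()
    return torque + k * np.abs(torque) * rng.standard_normal(np.shape(torque))

rng = np.random.default_rng(6)
t = np.linspace(0.0, 0.5, 500)                         # 0.5 s reach
shoulder_torque = 8.0 * np.sin(np.pi * t / 0.5)        # Nm, illustrative command profiles
elbow_torque = 3.0 * np.sin(np.pi * t / 0.5)
noisy_shoulder = add_signal_dependent_noise(shoulder_torque, k=0.1, rng=rng)
noisy_elbow = add_signal_dependent_noise(elbow_torque, k=0.1, rng=rng)
print("noise SD, shoulder vs elbow:",
      round(np.std(noisy_shoulder - shoulder_torque), 3),
      round(np.std(noisy_elbow - elbow_torque), 3))
```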

  4. How proton pulse characteristics influence protoacoustic determination of proton-beam range: simulation studies.

    PubMed

    Jones, Kevin C; Seghal, Chandra M; Avery, Stephen

    2016-03-21

    The unique dose deposition of proton beams generates a distinctive thermoacoustic (protoacoustic) signal, which can be used to calculate the proton range. To identify the expected protoacoustic amplitude, frequency, and arrival time for different proton pulse characteristics encountered at hospital-based proton sources, the protoacoustic pressure emissions generated by 150 MeV, pencil-beam proton pulses were simulated in a homogeneous water medium. Proton pulses with Gaussian widths ranging up to 200 μs were considered. The protoacoustic amplitude, frequency, and time-of-flight (TOF) range accuracy were assessed. For TOF calculations, the acoustic pulse arrival time was determined based on multiple features of the wave. Based on the simulations, Gaussian proton pulses can be categorized as Dirac-delta-function-like (FWHM < 4 μs) or longer. For the δ-function-like irradiation, the protoacoustic spectrum peaks at 44.5 kHz and the systematic error in determining the Bragg peak range is <2.6 mm. For longer proton pulses, the spectrum shifts to lower frequencies, and the range calculation systematic error increases (⩽23 mm for a FWHM of 56 μs). By mapping the protoacoustic peak arrival time to range with simulations, the residual error can be reduced. Using a proton pulse with FWHM = 2 μs results in the maximum signal-to-noise ratio per total dose. Simulations predict that a 300 nA, 150 MeV, FWHM = 4 μs Gaussian proton pulse (8.0 × 10^6 protons, 3.1 cGy dose at the Bragg peak) will generate a 146 mPa pressure wave at 5 cm beyond the Bragg peak. There is an angle dependent systematic error in the protoacoustic TOF range calculations. Placing detectors along the proton beam axis and beyond the Bragg peak minimizes this error. For clinical proton beams, protoacoustic detectors should be sensitive to <400 kHz (for -20 dB). Hospital-based synchrocyclotrons and cyclotrons are promising sources of proton pulses for generating clinically measurable protoacoustic emissions.
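
    The essence of the TOF range calculation is that the acoustic pulse generated near the Bragg peak travels at the speed of sound in water, so the detector-to-source distance is the sound speed times the arrival time of a chosen waveform feature; for long proton pulses the feature shifts, which is where the systematic error discussed above enters. The sketch below uses a synthetic Gaussian pressure trace and assumed sampling parameters, not the simulated protoacoustic waveforms of the study.

```python
import numpy as np

C_WATER = 1500.0      # speed of sound in water, m/s

def tof_range(pressure, fs, c=C_WATER, feature="peak"):
    """Detector-to-source distance (near the Bragg peak) from the acoustic time of flight."""
    times = np.arange(pressure.size) / fs
    if feature == "peak":
        t_arrival = times[np.argmax(pressure)]
    else:                                   # simple threshold-crossing alternative
        t_arrival = times[np.argmax(pressure > 0.5 * pressure.max())]
    return c * t_arrival

# Synthetic trace: a detector 5 cm beyond the Bragg peak -> expected arrival ~33 us.
fs = 10e6                                   # 10 MHz sampling
t = np.arange(0, 100e-6, 1 / fs)
true_dist = 0.05                            # m
arrival = true_dist / C_WATER
pulse = np.exp(-0.5 * ((t - arrival) / 2e-6) ** 2)      # Gaussian stand-in for the pressure pulse
print(f"estimated source distance: {1e3 * tof_range(pulse, fs):.1f} mm (true 50.0 mm)")
```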

  5. Transfer of uncertainty of space-borne high resolution rainfall products at ungauged regions

    NASA Astrophysics Data System (ADS)

    Tang, Ling

    Hydrologically relevant characteristics of high resolution (~0.25 degree, 3 hourly) satellite rainfall uncertainty were derived as a function of season and location using a six year (2002-2007) archive of National Aeronautics and Space Administration (NASA)'s Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) precipitation data. The Next Generation Radar (NEXRAD) Stage IV rainfall data over the continental United States was used as ground validation (GV) data. A geostatistical mapping scheme was developed and tested for transfer (i.e., spatial interpolation) of uncertainty information from GV regions to the vast non-GV regions by leveraging the error characterization work carried out in the earlier step. The open question explored here was, "If 'error' is defined on the basis of independent ground validation (GV) data, how are error metrics estimated for a satellite rainfall data product without the need for extensive GV data?" After a quantitative analysis of the spatial and temporal structure of the satellite rainfall uncertainty, a proof-of-concept geostatistical mapping scheme (based on the kriging method) was evaluated. The aim was to understand how realistic the idea of 'transfer' is for the GPM era. It was found that it was indeed technically possible to transfer error metrics from a gauged to an ungauged location for certain error metrics and that a regionalized error metric scheme for GPM may be possible. The uncertainty transfer scheme based on a commonly used kriging method (ordinary kriging) was then assessed further at various timescales (climatologic, seasonal, monthly and weekly), and as a function of the density of GV coverage. The results indicated that if a transfer scheme estimated uncertainty metrics at timescales finer than seasonal (ranging from 3-6 hourly to weekly-monthly), the effectiveness of the uncertainty transfer worsened significantly. Next, a comprehensive assessment of different kriging methods for spatial transfer (interpolation) of error metrics was performed. Three kriging methods for spatial interpolation were compared: ordinary kriging (OK), indicator kriging (IK) and disjunctive kriging (DK). An additional comparison with the simple inverse distance weighting (IDW) method was also performed to quantify the added benefit (if any) of using geostatistical methods. The overall performance ranking of the kriging methods was found to be as follows: OK=DK > IDW > IK. Lastly, various metrics of satellite rainfall uncertainty were identified for two large continental landmasses that share many similar Koppen climate zones, the United States and Australia. The dependence of uncertainty as a function of gauge density was then investigated. The investigation revealed that only the first- and second-order moments of error are readily amenable to a Koppen-type climate classification across different continental landmasses.
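
    To make the "transfer by ordinary kriging" step concrete, here is a minimal, self-contained sketch that interpolates an error metric from gauged grid cells to an ungauged point. The exponential variogram form, its parameters, and the toy data are assumptions, not the dissertation's fitted model.

```python
"""
Minimal ordinary-kriging sketch for transferring an error metric (e.g., a
seasonal bias of satellite rainfall) from gauged cells to an ungauged point.
Variogram form/parameters and the sample data are assumptions.
"""
import numpy as np

def variogram(h, nugget=0.0, sill=1.0, rng=2.0):
    """Exponential variogram model (assumed form and parameters)."""
    return nugget + sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(xy_obs, z_obs, xy_new):
    n = len(z_obs)
    d = np.linalg.norm(xy_obs[:, None, :] - xy_obs[None, :, :], axis=-1)
    # Kriging system: [Gamma 1; 1^T 0] [w; mu] = [gamma0; 1]
    A = np.zeros((n + 1, n + 1))
    A[:n, :n] = variogram(d)
    A[:n, n] = A[n, :n] = 1.0
    b = np.zeros(n + 1)
    b[:n] = variogram(np.linalg.norm(xy_obs - xy_new, axis=1))
    b[n] = 1.0
    w = np.linalg.solve(A, b)[:n]        # kriging weights
    return float(w @ z_obs)              # interpolated error metric

# Toy example: error metric (e.g., bias in mm/day) at four gauged cells
xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.5, 1.5]])
bias = np.array([0.8, 1.1, 0.6, 1.4])
print("Transferred bias at (0.7, 0.7):",
      round(ordinary_kriging(xy, bias, np.array([0.7, 0.7])), 3))
```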

  6. A simulation study to quantify the impacts of exposure ...

    EPA Pesticide Factsheets

    Background: Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. Methods: ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Results: Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3–85% for population error, and 31–85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate.

  7. Longitudinal Neuroimaging Hippocampal Markers for Diagnosing Alzheimer's Disease.

    PubMed

    Platero, Carlos; Lin, Lin; Tobar, M Carmen

    2018-05-21

    Hippocampal atrophy measures from magnetic resonance imaging (MRI) are powerful tools for monitoring Alzheimer's disease (AD) progression. In this paper, we introduce a longitudinal image analysis framework based on robust registration and simultaneous hippocampal segmentation and longitudinal marker classification of brain MRI of an arbitrary number of time points. The framework comprises two innovative parts: a longitudinal segmentation and a longitudinal classification step. The results show that both steps of the longitudinal pipeline improved the reliability and the accuracy of the discrimination between clinical groups. We introduce a novel approach to the joint segmentation of the hippocampus across multiple time points; this approach is based on graph cuts of longitudinal MRI scans with constraints on hippocampal atrophy and supported by atlases. Furthermore, we use linear mixed effect (LME) modeling for differential diagnosis between clinical groups. The classifiers are trained from the average residue between the longitudinal marker of the subjects and the LME model. In our experiments, we analyzed MRI-derived longitudinal hippocampal markers from two publicly available datasets (Alzheimer's Disease Neuroimaging Initiative, ADNI and Minimal Interval Resonance Imaging in Alzheimer's Disease, MIRIAD). In test/retest reliability experiments, the proposed method yielded lower volume errors and significantly higher dice overlaps than the cross-sectional approach (volume errors: 1.55% vs 0.8%; dice overlaps: 0.945 vs 0.975, cross-sectional vs longitudinal). To diagnose AD, the discrimination ability of our proposal gave an area under the receiver operating characteristic (ROC) curve (AUC) = 0.947 for the control vs AD, AUC = 0.720 for mild cognitive impairment (MCI) vs AD, and AUC = 0.805 for the control vs MCI.
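
    A minimal sketch of the LME-plus-residual idea described above, on synthetic data rather than ADNI/MIRIAD: fit a linear mixed-effects model to longitudinal hippocampal volumes and use each subject's average residual from the fixed-effect (group) trajectory as a scalar longitudinal marker for a downstream classifier. The formula, data, and feature choice are illustrative assumptions.

```python
"""
Minimal sketch (synthetic data, not the authors' pipeline): fit a linear
mixed-effects (LME) model to longitudinal hippocampal volumes and use each
subject's average residual from the group trajectory as a simple marker.
"""
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
rows = []
for subject in range(30):
    base = rng.normal(3.0, 0.3)        # baseline volume (cm^3), synthetic
    slope = rng.normal(-0.05, 0.02)    # annual atrophy rate, synthetic
    for t in (0.0, 0.5, 1.0, 1.5):     # years from baseline scan
        rows.append({"subject": subject, "time": t,
                     "volume": base + slope * t + rng.normal(0.0, 0.02)})
data = pd.DataFrame(rows)

# Random intercept per subject, fixed effect of time (population atrophy rate)
result = smf.mixedlm("volume ~ time", data, groups=data["subject"]).fit()
fe = result.fe_params
data["residual"] = data["volume"] - (fe["Intercept"] + fe["time"] * data["time"])

# One scalar marker per subject: mean residual from the group trajectory
marker = data.groupby("subject")["residual"].mean()
print(fe)
print(marker.head())
```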

  8. Best-estimate coupled RELAP/CONTAIN analysis of inadvertent BWR ADS valve opening transient

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Feltus, M.A.; Muftuoglu, A.K.

    1993-01-01

    Noncondensible gases may become dissolved in boiling water reactor (BWR) water-level instrumentation during normal operations. Any dissolved noncondensible gases inside these water columns may come out of solution during rapid depressurization events and displace water from the reference leg piping, resulting in a false high level. Significant errors in water-level indication are not expected to occur until the reactor pressure vessel (RPV) pressure has dropped below approximately 450 psig. These water level errors may cause a delay or failure in emergency core cooling system (ECCS) actuation. The RPV water level is monitored using the pressure of a water column having a varying height (reactor water level) that is compared to the pressure of a water column maintained at a constant height (reference level). The reference legs have small-diameter pipes with varying lengths that provide a constant head of water and are located outside the drywell. The amount of noncondensible gases dissolved in each reference leg is very dependent on the amount of leakage from the reference leg and its geometry and interaction of the reactor coolant system with the containment, i.e., torus or suppression pool, and reactor building. If a rapid depressurization causes an erroneously high water level, preventing automatic ECCS actuation, it becomes important to determine if there would be other adequate indications for operator response. In the postulated inadvertent opening of all seven automatic depressurization system (ADS) valves, the ECCS signal on high drywell pressure would be circumvented because the ADS valves discharge directly into the suppression pool. A best-estimate analysis of such an inadvertent opening of all ADS valves would have to consider the thermal-hydraulic coupling between the pool, drywell, reactor building, and RPV.

  9. Receiver design, performance analysis, and evaluation for space-borne laser altimeters and space-to-space laser ranging systems

    NASA Technical Reports Server (NTRS)

    Davidson, Frederic M.; Sun, Xiaoli; Field, Christopher T.

    1995-01-01

    Laser altimeters measure the time of flight of the laser pulses to determine the range of the target. The simplest altimeter receiver consists of a photodetector followed by a leading edge detector. A time interval unit (TIU) measures the time from the transmitted laser pulse to the leading edge of the received pulse as it crosses a preset threshold. However, the ranging error of this simple detection scheme depends on the received pulse amplitude, pulse shape, and the threshold. In practice, the pulse shape and the amplitude are determined by the target characteristics, which have to be assumed unknown prior to the measurement. The ranging error can be reduced if one also measures the pulse width and uses the average of the leading and trailing edge times (half pulse width) as the pulse arrival time. The ranging error then becomes independent of the received pulse amplitude and the pulse width as long as the pulse shape is symmetric. The pulse width also gives the slope of the target. The ultimate detection scheme is to digitize the received waveform and calculate the centroid as the pulse arrival time. Centroid detection always gives an unbiased measurement, even for asymmetric pulses. In this report, we analyze the laser altimeter ranging errors for these three detection schemes using the Mars Orbiter Laser Altimeter (MOLA) as an example.
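
    A small numerical illustration of the three timing schemes just described, using a synthetic symmetric return pulse; the pulse width, amplitudes, and preset threshold are arbitrary assumptions. With a fixed threshold, the leading-edge time shifts as the (unknown) amplitude changes, whereas the half-pulse-width midpoint and the waveform centroid do not.

```python
"""
Three pulse-arrival-time schemes on a synthetic symmetric return pulse:
leading edge, half pulse width, and digitised-waveform centroid.
Pulse width, amplitudes, and threshold are illustrative assumptions.
"""
import numpy as np

t = np.arange(0.0, 200.0, 0.1)   # time samples (ns), assumed
true_arrival = 100.0             # pulse centre (ns)
threshold = 0.3                  # preset detection threshold (arbitrary units)

for amplitude in (0.5, 1.0, 2.0):   # unknown target reflectivity changes amplitude
    pulse = amplitude * np.exp(-0.5 * ((t - true_arrival) / 8.0) ** 2)
    above = np.nonzero(pulse >= threshold)[0]
    t_leading = t[above[0]]                               # scheme 1: leading edge only
    t_halfwidth = 0.5 * (t[above[0]] + t[above[-1]])      # scheme 2: mid-point of both edges
    t_centroid = np.sum(t * pulse) / np.sum(pulse)        # scheme 3: waveform centroid
    print(f"amplitude {amplitude}: leading {t_leading:6.2f}  "
          f"half-width {t_halfwidth:6.2f}  centroid {t_centroid:6.2f} ns")
```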

  10. A design method for high performance seismic data acquisition based on oversampling delta-sigma modulation

    NASA Astrophysics Data System (ADS)

    Gao, Shanghua; Xue, Bing

    2017-04-01

    The dynamic range of the currently most widely used 24-bit seismic data acquisition devices is 10-20 dB lower than that of broadband seismometers, and this can affect the completeness of seismic waveform recordings under certain conditions. However, this problem is not easy to solve because of the lack of analog-to-digital converter (ADC) chips with more than 24 bits on the market. The key difficulty for higher-resolution data acquisition devices therefore lies in building an ADC circuit with more than 24 bits. In this paper, we propose a method in which an adder, an integrator, a digital-to-analog converter chip, a field-programmable gate array, and an existing low-resolution ADC chip are used to build a third-order 16-bit oversampling delta-sigma modulator. This modulator is equipped with a digital decimation filter, thus forming a complete analog-to-digital conversion circuit. Experimental results show that, within the 0.1-40 Hz frequency range, the circuit board's dynamic range reaches 158.2 dB, its resolution reaches 25.99 bits, and its linearity error is below 2.5 ppm, which is better than what is achieved by the commercial 24-bit ADC chips ADS1281 and CS5371. This demonstrates that the proposed method may alleviate or even solve the amplitude-limitation problem that broadband observation systems so commonly have to face during strong earthquakes.
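
    As a rough illustration of the oversampling delta-sigma principle behind this design (the actual circuit is a third-order, 16-bit modulator built from discrete parts), here is a first-order, 1-bit software model with a crude boxcar decimation filter; the sample rate, oversampling ratio, and test tone are assumptions.

```python
"""
First-order, 1-bit delta-sigma modulator toy model with a boxcar decimation
filter. It only illustrates the oversampling/noise-shaping principle; the
device in the paper is a third-order, 16-bit modulator. Parameters assumed.
"""
import numpy as np

FS = 512_000                 # oversampled rate (Hz), assumed
OSR = 128                    # oversampling ratio -> 4 kHz output rate
N = 1 << 16                  # number of oversampled points
t = np.arange(N) / FS
x = 0.5 * np.sin(2 * np.pi * 31.25 * t)   # slow in-band test tone (amplitude < 1)

integrator, feedback = 0.0, 0.0
bits = np.empty(N)
for i, sample in enumerate(x):
    integrator += sample - feedback        # integrate the quantisation error
    feedback = 1.0 if integrator >= 0 else -1.0
    bits[i] = feedback                     # 1-bit output stream

decimated = bits.reshape(-1, OSR).mean(axis=1)   # crude decimation filter
reference = x.reshape(-1, OSR).mean(axis=1)
print("max reconstruction error after decimation:",
      float(np.max(np.abs(decimated - reference))))
```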

  11. Design Optimization for the Measurement Accuracy Improvement of a Large Range Nanopositioning Stage

    PubMed Central

    Torralba, Marta; Yagüe-Fabra, José Antonio; Albajez, José Antonio; Aguilar, Juan José

    2016-01-01

    Both an accurate machine design and an adequate metrology loop definition are critical factors when precision positioning represents a key issue for the final system performance. This article discusses the error budget methodology as an advantageous technique for improving the measurement accuracy of a 2D long-range stage during its design phase. The nanopositioning platform NanoPla is presented here. Its specifications (e.g., an XY travel range of 50 mm × 50 mm and sub-micrometric accuracy) and some novel design solutions (e.g., a three-layer, two-stage architecture) are described. Once the prototype is defined, an error analysis is performed to propose design improvements. Then, the metrology loop of the system is mathematically modelled to define the propagation of the different error sources. Several simplifications and design hypotheses are justified and validated, including the assumption of rigid-body behavior, which is verified by finite element analysis. The different error sources and their estimated contributions are enumerated in order to conclude with the final error values obtained from the error budget. The measurement deviations obtained demonstrate the important influence of the working environmental conditions, the flatness error of the plane mirror reflectors, and the accurate manufacture and assembly of the components forming the metrological loop. Thus, a temperature control of ±0.1 °C results in an acceptable maximum positioning error for the developed NanoPla stage, i.e., 41 nm, 36 nm and 48 nm in the X-, Y- and Z-axes, respectively. PMID:26761014
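
    A bare-bones example of how an error budget of this kind is typically assembled: individual 1-sigma contributions are listed per axis and combined, here by root-sum-of-squares, which is a common convention but an assumption on our part; the contributor names and values below are hypothetical, not the paper's.

```python
"""
Minimal error-budget sketch: hypothetical 1-sigma contributions along one axis
combined by root-sum-of-squares (an assumed, commonly used convention).
"""
import math

contributions = {                             # nanometres, hypothetical values
    "interferometer/encoder resolution": 5.0,
    "plane-mirror flatness": 20.0,
    "thermal drift (with +/-0.1 C control)": 25.0,
    "Abbe/alignment offsets": 15.0,
    "electronics noise": 5.0,
}

total_rss = math.sqrt(sum(v ** 2 for v in contributions.values()))
for name, value in contributions.items():
    print(f"{name:40s} {value:6.1f} nm")
print(f"{'combined (root-sum-of-squares)':40s} {total_rss:6.1f} nm")
```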

  12. Estimating extreme stream temperatures by the standard deviate method

    NASA Astrophysics Data System (ADS)

    Bogan, Travis; Othmer, Jonathan; Mohseni, Omid; Stefan, Heinz

    2006-02-01

    It is now widely accepted that global climate warming is taking place on the earth. Among many other effects, a rise in air temperatures is expected to increase stream temperatures. However, due to evaporative cooling, stream temperatures do not increase linearly with increasing air temperatures indefinitely. Within the anticipated bounds of climate warming, extreme stream temperatures may therefore not rise substantially. With this concept in mind, past extreme temperatures measured at 720 USGS stream gauging stations were analyzed by the standard deviate method. In this method the highest stream temperatures are expressed as the mean temperature of a measured partial maximum stream temperature series plus its standard deviation multiplied by a factor KE (standard deviate). Various KE-values were explored; values of KE larger than 8 were found physically unreasonable. It is concluded that the value of KE should be in the range from 7 to 8. A unit error in estimating KE translates into a typical stream temperature error of about 0.5 °C. Using a logistic model for the stream temperature/air temperature relationship, a one degree error in air temperature gives a typical error of 0.16 °C in stream temperature. With a projected error in the enveloping standard deviate of dKE = 1.0 (range 0.5-1.5) and an error in projected high air temperature of dTa = 2 °C (range 0-4 °C), the total projected stream temperature error is estimated as dTs = 0.8 °C.
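
    A worked toy example of the standard deviate estimate just described, T_extreme = mean + KE * std of the partial maximum series; the temperature series is synthetic, and combining the two quoted error terms by simple addition is our assumption (it roughly reproduces the quoted 0.8 °C).

```python
"""
Toy example of the standard deviate method: the extreme stream temperature is
the mean of a partial maximum series plus K_E times its standard deviation.
The series is synthetic; combining the error terms by addition is assumed.
"""
import numpy as np

annual_maxima = np.array([28.1, 28.9, 28.4, 29.2, 28.6, 29.0, 28.3, 29.3])  # degC, synthetic
mean, std = annual_maxima.mean(), annual_maxima.std(ddof=1)

for k_e in (7.0, 7.5, 8.0):       # enveloping standard deviate range recommended above
    print(f"K_E = {k_e}: extreme stream temperature ~ {mean + k_e * std:.1f} degC")

# Error terms quoted in the abstract, combined here by simple addition
dT_from_KE = 1.0 * 0.5            # dKE = 1.0 at ~0.5 degC per unit K_E
dT_from_air = 2.0 * 0.16          # dTa = 2 degC at ~0.16 degC per degC of air temperature
print(f"projected total stream temperature error ~ {dT_from_KE + dT_from_air:.2f} degC")
```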

  13. Error analysis on spinal motion measurement using skin mounted sensors.

    PubMed

    Yang, Zhengyi; Ma, Heather Ting; Wang, Deming; Lee, Raymond

    2008-01-01

    Measurement errors of skin-mounted sensors in measuring the forward bending movement of the lumbar spine are investigated. In this investigation, radiographic images capturing the position of the entire lumbar spine were acquired and used as a 'gold' standard. Seventeen young male volunteers (21 (SD 1) years old) agreed to participate in the study. Lightweight miniature sensors of an electromagnetic tracking system (Fastrak) were attached to the skin overlying the spinous processes of the lumbar spine. With the sensors attached, the subjects were requested to take lateral radiographs in two postures: neutral upright and full flexion. The ranges of motion of the lumbar spine were calculated from two sets of digitized data (the bony markers of the vertebral bodies and the sensors) and compared. The differences between the two sets of results were then analyzed. The relative movement between sensor and vertebra was decomposed into sensor sliding and tilting, from which a sliding error and a tilting error were introduced. The gross motion range of forward bending of the lumbar spine measured from the bony markers of the vertebrae is 67.8 degrees (SD 10.6 degrees) and that from the sensors is 62.8 degrees (SD 12.8 degrees). The error and absolute error for the gross motion range were 5.0 degrees (SD 7.2 degrees) and 7.7 degrees (SD 3.9 degrees). The contributions of the sensors placed on S1 and L1 to the absolute error were 3.9 degrees (SD 2.9 degrees) and 4.4 degrees (SD 2.8 degrees), respectively.

  14. Rotational wind indicator enhances control of rotated displays

    NASA Technical Reports Server (NTRS)

    Cunningham, H. A.; Pavel, Misha

    1991-01-01

    Rotation by 108 deg of the spatial mapping between a visual display and a manual input device produces large spatial errors in a discrete aiming task. These errors are not easily corrected by voluntary mental effort, but the central nervous system does adapt gradually to the new mapping. Bernotat (1970) showed that adding true hand position to a 90 deg rotated display improved performance of a compensatory tracking task, but tracking error rose again upon removal of the explicit cue. This suggests that the explicit error signal did not induce changes in the neural mapping, but rather allowed the operator to reduce tracking error using a higher mental strategy. In this report, we describe an explicit visual display enhancement applied to a 108 deg rotated discrete aiming task. A 'wind indicator' corresponding to the effect of the mapping rotation is displayed on the operator-controlled cursor. The human operator is instructed to oppose the virtual force represented by the indicator, as one would do if flying an airplane in a crosswind. This enhancement reduces spatial aiming error in the first 10 minutes of practice by an average of 70 percent when compared to a no enhancement control condition. Moreover, it produces adaptation aftereffect, which is evidence of learning by neural adaptation rather than by mental strategy. Finally, aiming error does not rise upon removal of the explicit cue.

  15. Ranging error analysis of single photon satellite laser altimetry under different terrain conditions

    NASA Astrophysics Data System (ADS)

    Huang, Jiapeng; Li, Guoyuan; Gao, Xiaoming; Wang, Jianmin; Fan, Wenfeng; Zhou, Shihong

    2018-02-01

    Single photon satellite laser altimetry is based on the Geiger mode of detection and is characterized by a small footprint, a high repetition rate, etc. In this paper, the ranging error formula for sloped terrain is derived and evaluated numerically. The Monte Carlo method is used to simulate measurements over different terrain. The results show that ranging accuracy is not affected by the spot size over flat terrain, but inclined terrain can influence the ranging error dramatically: when the satellite pointing angle is 0.001° and the terrain slope is about 12°, the ranging error can reach 0.5 m, and the accuracy cannot meet the requirement when the slope exceeds 70°. Monte Carlo simulation results show that a single photon laser altimetry satellite with a high repetition rate can improve the ranging accuracy over complex terrain. To ensure that the same point is observed repeatedly 25 times, we deduce, based on the parameters of ICESat-2, the quantitative relation between the footprint size, the footprint, and the repetition frequency. The related conclusions can provide a reference for the design and demonstration of a domestic single photon laser altimetry satellite.
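
    A minimal geometric sketch of the slope effect discussed above: on a planar slope, photons returned from different positions within the laser footprint come from different surface heights, so the spread of possible single-photon ranges grows with both footprint size and slope. The 10 m footprint diameter and the slope values are illustrative assumptions, not the paper's parameters.

```python
"""
Geometric sketch: elevation spread across a laser footprint on a planar slope.
Footprint diameter and slope values are illustrative assumptions.
"""
import numpy as np

def elevation_spread(footprint_diameter_m, slope_deg):
    """Height difference between the two edges of the footprint on a slope."""
    return footprint_diameter_m * np.tan(np.radians(slope_deg))

for slope in (0, 12, 30, 70):
    spread = elevation_spread(10.0, slope)
    print(f"slope {slope:2d} deg: elevation spread across a 10 m footprint ~ {spread:6.1f} m")
```

    Averaging many repeated single-photon returns over the same point, as a high-repetition-rate system allows, reduces the random part of this spread roughly in proportion to the square root of the number of returns.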

  16. APOLLO clock performance and normal point corrections

    NASA Astrophysics Data System (ADS)

    Liang, Y.; Murphy, T. W., Jr.; Colmenares, N. R.; Battat, J. B. R.

    2017-12-01

    The Apache Point Observatory Lunar Laser-ranging Operation (APOLLO) has produced a large volume of high-quality lunar laser ranging (LLR) data since it began operating in 2006. For most of this period, APOLLO has relied on a GPS-disciplined, high-stability quartz oscillator as its frequency and time standard. The recent addition of a cesium clock as part of a timing calibration system initiated a comparison campaign between the two clocks. This has allowed correction of APOLLO range measurements—called normal points—during the overlap period, but also revealed a mechanism to correct for systematic range offsets due to clock errors in historical APOLLO data. Drift of the GPS clock on ∼1000 s timescales contributed typically 2.5 mm of range error to APOLLO measurements, and we find that this may be reduced to ∼1.6 mm on average. We present here a characterization of APOLLO clock errors, the method by which we correct historical data, and the resulting statistics.

  17. Is visual short-term memory depthful?

    PubMed

    Reeves, Adam; Lei, Quan

    2014-03-01

    Does visual short-term memory (VSTM) depend on depth, as it might be if information was stored in more than one depth layer? Depth is critical in natural viewing and might be expected to affect retention, but whether this is so is currently unknown. Cued partial reports of letter arrays (Sperling, 1960) were measured up to 700 ms after display termination. Adding stereoscopic depth hardly affected VSTM capacity or decay inferred from total errors. The pattern of transposition errors (letters reported from an uncued row) was almost independent of depth and cue delay. We conclude that VSTM is effectively two-dimensional. Copyright © 2014 Elsevier Ltd. All rights reserved.

  18. ``The errors were the results of errors'': Promoting Good Writing by Bad Example

    NASA Astrophysics Data System (ADS)

    Korsunsky, Boris

    2010-01-01

    We learn best by example—this adage is probably as old as teaching itself. In my own classroom, I have found that very often the students learn best from the "negative" examples. Perhaps, this shouldn't come as a surprise at all. After all, we don't react strongly to the norm—but an obvious deviation from the norm may attract our attention and make for a great teachable moment. And, if the deviation happens to be either scary or funny, the added emotional impact can create a truly powerful and lasting memory in the minds of the students.

  19. Gallium Compounds: A Possible Problem for the G2 Approaches

    NASA Technical Reports Server (NTRS)

    Bauschlicher, Charles W., Jr.; Melius, Carl F.; Allendorf, Mark D.; Arnold, James (Technical Monitor)

    1998-01-01

    The G2 atomization energies of fluorine and oxygen containing Ga compounds are greatly in error. This arises from an inversion of the Ga 3d core orbital and the F 2s or O 2s valence orbitals. Adding the Ga 3d orbital to the correlation treatment or removing the F 2s orbitals from the correlation treatment are shown to eliminate the problem. Removing the O 2s orbital from the correlation treatment reduces the error, but it can still be more than 6 kcal/mol. It is concluded that the experimental atomization energy of GaF2 is too large.

  20. Optimal quantum error correcting codes from absolutely maximally entangled states

    NASA Astrophysics Data System (ADS)

    Raissi, Zahra; Gogolin, Christian; Riera, Arnau; Acín, Antonio

    2018-02-01

    Absolutely maximally entangled (AME) states are pure multi-partite generalizations of the bipartite maximally entangled states with the property that all reduced states of at most half the system size are in the maximally mixed state. AME states are of interest for multipartite teleportation and quantum secret sharing and have recently found new applications in the context of high-energy physics in toy models realizing the AdS/CFT-correspondence. We work out in detail the connection between AME states of minimal support and classical maximum distance separable (MDS) error correcting codes and, in particular, provide explicit closed form expressions for AME states of n parties with local dimension \

  1. Aleutian Disease: An Emerging Disease in Free-Ranging Striped Skunks (Mephitis mephitis) From California.

    PubMed

    LaDouceur, E E B; Anderson, M; Ritchie, B W; Ciembor, P; Rimoldi, G; Piazza, M; Pesti, D; Clifford, D L; Giannitti, F

    2015-11-01

    Aleutian disease virus (ADV, Amdovirus, Parvoviridae) primarily infects farmed mustelids (mink and ferrets) but also other fur-bearing animals and humans. Three Aleutian disease (AD) cases have been described in captive striped skunks; however, little is known about the relevance of AD in free-ranging carnivores. This work describes the pathological findings and temporospatial distribution in 7 cases of AD in free-ranging striped skunks. All cases showed neurologic disease and were found in a 46-month period (2010-2013) within a localized geographical region in California. Lesions included multisystemic plasmacytic and lymphocytic inflammation (ie, interstitial nephritis, myocarditis, hepatitis, meningoencephalitis, pneumonia, and splenitis), glomerulonephritis, arteritis with or without fibrinoid necrosis in several organs (ie, kidney, heart, brain, and spleen), splenomegaly, ascites/hydrothorax, and/or encephalomalacia with cerebral microangiopathy. ADV infection was confirmed in all cases by specific polymerase chain reaction and/or in situ hybridization. The results suggest that AD is an emerging disease in free-ranging striped skunks in California. © The Author(s) 2014.

  2. A simulation study to quantify the impacts of exposure measurement error on air pollution health risk estimates in copollutant time-series models.

    PubMed

    Dionisio, Kathie L; Chang, Howard H; Baxter, Lisa K

    2016-11-25

    Exposure measurement error in copollutant epidemiologic models has the potential to introduce bias in relative risk (RR) estimates. A simulation study was conducted using empirical data to quantify the impact of correlated measurement errors in time-series analyses of air pollution and health. ZIP-code level estimates of exposure for six pollutants (CO, NOx, EC, PM2.5, SO4, O3) from 1999 to 2002 in the Atlanta metropolitan area were used to calculate spatial, population (i.e. ambient versus personal), and total exposure measurement error. Empirically determined covariance of pollutant concentration pairs and the associated measurement errors were used to simulate true exposure (exposure without error) from observed exposure. Daily emergency department visits for respiratory diseases were simulated using a Poisson time-series model with a main pollutant RR = 1.05 per interquartile range, and a null association for the copollutant (RR = 1). Monte Carlo experiments were used to evaluate the impacts of correlated exposure errors of different copollutant pairs. Substantial attenuation of RRs due to exposure error was evident in nearly all copollutant pairs studied, ranging from 10 to 40% attenuation for spatial error, 3-85% for population error, and 31-85% for total error. When CO, NOx or EC is the main pollutant, we demonstrated the possibility of false positives, specifically identifying significant, positive associations for copollutants based on the estimated type I error rate. The impact of exposure error must be considered when interpreting results of copollutant epidemiologic models, due to the possibility of attenuation of main pollutant RRs and the increased probability of false positives when measurement error is present.
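
    A stripped-down sketch of the attenuation phenomenon the study quantifies, for a single pollutant rather than the copollutant design: daily counts are simulated from a Poisson model with a known RR per IQR, classical additive error is added to the exposure, and the RR is re-estimated. The error variances, baseline rate, and sample size are assumptions.

```python
"""
Monte Carlo sketch of RR attenuation from classical exposure measurement error
in a single-pollutant Poisson time series (not the study's copollutant design).
"""
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(3)
n_days, true_rr_per_iqr = 1500, 1.05

x_true = rng.gamma(shape=4.0, scale=1.0, size=n_days)    # true daily exposure
iqr = np.subtract(*np.percentile(x_true, [75, 25]))
beta = np.log(true_rr_per_iqr) / iqr                      # log-RR per unit exposure
y = rng.poisson(np.exp(np.log(20.0) + beta * x_true))     # daily ED visit counts

for error_sd in (0.0, 0.5, 1.0, 2.0):                     # classical additive error
    x_obs = x_true + rng.normal(0.0, error_sd, n_days)
    fit = sm.GLM(y, sm.add_constant(x_obs), family=sm.families.Poisson()).fit()
    rr_hat = np.exp(fit.params[1] * iqr)                   # estimated RR per IQR
    print(f"error SD {error_sd:3.1f}: estimated RR per IQR = {rr_hat:.3f}")
```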

  3. Retrieval monitoring and anosognosia in Alzheimer's disease.

    PubMed

    Gallo, David A; Chen, Jennifer M; Wiseman, Amy L; Schacter, Daniel L; Budson, Andrew E

    2007-09-01

    This study explored the relationship between episodic memory and anosognosia (a lack of deficit awareness) among patients with mild Alzheimer's disease (AD). Participants studied words and pictures for subsequent memory tests. Healthy older adults made fewer false recognition errors when trying to remember pictures compared with words, suggesting that the perceptual distinctiveness of picture memories enhanced retrieval monitoring (the distinctiveness heuristic). In contrast, although participants with AD could discriminate between studied and nonstudied items, they had difficulty recollecting the specific presentation formats (words or pictures), and they had limited use of the distinctiveness heuristic. Critically, the demands of the memory test modulated the relationship between memory accuracy and anosognosia. Greater anosognosia was associated with impaired memory accuracy when participants with AD tried to remember words but not when they tried to remember pictures. These data further delineate the retrieval monitoring difficulties among individuals with AD and suggest that anosognosia measures are most likely to correlate with memory tests that require the effortful retrieval of nondistinctive information. (PsycINFO Database Record (c) 2007 APA, all rights reserved).

  4. adLIMS: a customized open source software that allows bridging clinical and basic molecular research studies.

    PubMed

    Calabria, Andrea; Spinozzi, Giulio; Benedicenti, Fabrizio; Tenderini, Erika; Montini, Eugenio

    2015-01-01

    Many biological laboratories that deal with genomic samples are facing the problem of sample tracking, both for pure laboratory management and for efficiency. Our laboratory exploits PCR techniques and Next Generation Sequencing (NGS) methods to perform high-throughput integration site monitoring in different clinical trials and scientific projects. Because of the huge number of samples that we process every year, which result in hundreds of millions of sequencing reads, we need to standardize data management and tracking, building a scalable and flexible system with web-based interfaces, usually called a Laboratory Information Management System (LIMS). We started by collecting end-users' requirements, composed of the desired functionalities of the system and Graphical User Interfaces (GUI), and then we evaluated available tools that could address our requirements, spanning from pure LIMS to Content Management Systems (CMS) up to enterprise information systems. Our analysis identified ADempiere ERP, an open source Enterprise Resource Planning system written in Java J2EE, as the best software; it also natively implements some highly desirable technological features, such as the usability and modularity that grant high use-case flexibility and software scalability for custom solutions. We extended and customized ADempiere ERP to fulfil LIMS requirements and developed adLIMS. It has been validated by our end-users, who verified functionalities and GUIs through test cases for PCR samples and pre-sequencing data, and it is currently in use in our laboratories. adLIMS implements authorization and authentication policies, allowing management of multiple users and definition of roles that grant specific permissions, operations and data views to each user. For example, adLIMS allows sample sheets to be created from stored data using the available export operations. This simplicity and process standardization may avoid the manual errors and loss of traceability that occur when records are kept in files or spreadsheets. adLIMS aims to combine sample tracking and data reporting features with highly accessible and usable GUIs, thus saving time on repetitive laboratory tasks and reducing errors with respect to manual data collection methods. Moreover, adLIMS implements automated data entry, exploiting sample data multiplexing and parallel/transactional processing. adLIMS is natively extensible to cope with laboratory automation through platform-dependent API interfaces, and could be extended to other genomic facilities thanks to its ERP functionalities.

  5. Alternative methods of refraction: a comparison of three techniques.

    PubMed

    Smith, Kyla; Weissberg, Erik; Travison, Thomas G

    2010-03-01

    In the developing world, refractive error is a common untreated cause of visual impairment. Lay people may use portable tools to overcome this issue. This study compares three methods of measuring spherical refractive error (SE) performed by a lay technician to a subjective refraction (SR) in a controlled clinical setting and a field trial. Fifty subjects from Boston, MA (mean age, 24.3 y ± 1.5) and 50 from Nicaragua (mean age, 40 y ± 13.7) were recruited. Measures (performed on the right eye only) included (1) AdSpecs, adjustable spectacles; (2) Focometer, a focusable telescope; (3) Predetermined Lens Refraction (PLR), prescripted lens choices; and (4) SR. Examiners were masked and techniques randomized. Student's t-test compared the mean SE determined by each method (95% confidence intervals). AdSpecs repeatability was evaluated by repeating measures of SE and visual acuity (VA). Mean (SD) SE for Boston subjects determined by SR was -2.46 D (3.2); for AdSpecs and the Focometer it was -2.41 D (2.69) and -2.80 D (2.82), respectively. Among the 30 Boston subjects considered in analyses of PLR data (see Methods), PLR and SR obtained mean (SD) values of -0.65 D (1.36) and -0.41 D (1.67), respectively, a statistically significant difference of -0.24 D (p = 0.046, t = 2.09). Mean PLR SE had the greatest deviation from SR, 0.67 D. 20/20 VA was achieved by SR, AdSpecs, Focometer, and PLR in 98, 88, 84, and 96% of subjects, respectively. Mean (SD) SE for Nicaragua subjects determined by SR was +0.51 D (0.71). Mean (SD) SE for AdSpecs, Focometer, and PLR was +0.68 D (0.83), +0.42 D (1.13), and +0.27 D (0.79), respectively. Mean PLR SE had the greatest deviation from SR, by 0.24 D, which was a statistically significant difference. 20/20 VA was achieved by SR, AdSpecs, Focometer, and PLR in 78, 66, 66, and 88% of subjects, respectively. Repeated measures by AdSpecs were highly correlated. Although the mean value obtained by each technique may be similar to that obtained by SR, substantial and clinically meaningful differences may exist in some individuals; however, where SR is unavailable these techniques could be a feasible alternative.

  6. Usual Intake of Added Sugars and Lipid Profiles Among the U.S. Adolescents: National Health and Nutrition Examination Survey, 2005–2010

    PubMed Central

    Zhang, Zefeng; Gillespie, Cathleen; Welsh, Jean A.; Hu, Frank B.; Yang, Quanhe

    2015-01-01

    Purpose: Although studies suggest that higher consumption of added sugars is associated with cardiovascular risk factors in adolescents, none have adjusted for measurement errors or examined its association with the risk of dyslipidemia. Methods: We analyzed data of 4,047 adolescents aged 12–19 years from the 2005–2010 National Health and Nutrition Examination Survey, a nationally representative, cross-sectional survey. We estimated the usual percentage of calories (%kcal) from added sugars using up to two 24-hour dietary recalls and the National Cancer Institute method to account for measurement error. Results: The average usual %kcal from added sugars was 16.0%. Most adolescents (88.0%) had usual intake of ≥10% of total energy, and 5.5% had usual intake of ≥25% of total energy. After adjustment for potential confounders, usual %kcal from added sugars was inversely associated with high-density lipoprotein (HDL) and positively associated with triglycerides (TGs), TG-to-HDL ratio, and total cholesterol (TC) to HDL ratio. Comparing the lowest and highest quintiles of intake, HDLs were 49.5 (95% confidence interval [CI], 47.4–51.6) and 46.4 mg/dL (95% CI, 45.2–47.6; p = .009), TGs were 85.6 (95% CI, 75.5–95.6) and 101.2 mg/dL (95% CI, 88.7–113.8; p = .037), TG to HDL ratios were 2.28 (95% CI, 1.84–2.70) and 2.73 (95% CI, 2.11–3.32; p = .017), and TC to HDL ratios were 3.41 (95% CI, 3.03–3.79) and 3.70 (95% CI, 3.24–4.15; p = .028), respectively. Comparing the highest and lowest quintiles of intake, adjusted odds ratio of dyslipidemia was 1.41 (95% CI, 1.01–1.95). The patterns were consistent across sex, race/ethnicity, and body mass index subgroups. No association was found for TC, low-density lipoprotein, and non-HDL cholesterol. Conclusions: Most U.S. adolescents consumed more added sugars than recommended for heart health. Usual intake of added sugars was significantly associated with several measures of lipid profiles. PMID:25703323

  7. An analysis of temperature-induced errors for an ultrasound distance measuring system. M. S. Thesis

    NASA Technical Reports Server (NTRS)

    Wenger, David Paul

    1991-01-01

    The presentation of research is provided in the following five chapters. Chapter 2 presents the necessary background information and definitions for general work with ultrasound and acoustics. It also discusses the basis for errors in the slant range measurements. Chapter 3 presents a method of problem solution and an analysis of the sensitivity of the equations to slant range measurement errors. It also presents various methods by which the error in the slant range measurements can be reduced to improve overall measurement accuracy. Chapter 4 provides a description of a type of experiment used to test the analytical solution and provides a discussion of its results. Chapter 5 discusses the setup of a prototype collision avoidance system, discusses its accuracy, and demonstrates various methods of improving the accuracy along with the improvements' ramifications. Finally, Chapter 6 provides a summary of the work and a discussion of conclusions drawn from it. Additionally, suggestions for further research are made to improve upon what has been presented here.

  8. IMRT QA: Selecting gamma criteria based on error detection sensitivity.

    PubMed

    Steers, Jennifer M; Fraass, Benedick A

    2016-04-01

    The gamma comparison is widely used to evaluate the agreement between measurements and treatment planning system calculations in patient-specific intensity modulated radiation therapy (IMRT) quality assurance (QA). However, recent publications have raised concerns about the lack of sensitivity when employing commonly used gamma criteria. Understanding the actual sensitivity of a wide range of different gamma criteria may allow the definition of more meaningful gamma criteria and tolerance limits in IMRT QA. We present a method that allows the quantitative determination of gamma criteria sensitivity to induced errors which can be applied to any unique combination of device, delivery technique, and software utilized in a specific clinic. A total of 21 DMLC IMRT QA measurements (ArcCHECK®, Sun Nuclear) were compared to QA plan calculations with induced errors. Three scenarios were studied: MU errors, multi-leaf collimator (MLC) errors, and the sensitivity of the gamma comparison to changes in penumbra width. Gamma comparisons were performed between measurements and error-induced calculations using a wide range of gamma criteria, resulting in a total of over 20 000 gamma comparisons. Gamma passing rates for each error class and case were graphed against error magnitude to create error curves in order to represent the range of missed errors in routine IMRT QA using 36 different gamma criteria. This study demonstrates that systematic errors and case-specific errors can be detected by the error curve analysis. Depending on the location of the error curve peak (e.g., not centered about zero), 3%/3 mm threshold = 10% at 90% pixels passing may miss errors as large as 15% MU errors and ±1 cm random MLC errors for some cases. As the dose threshold parameter was increased for a given %Diff/distance-to-agreement (DTA) setting, error sensitivity was increased by up to a factor of two for select cases. This increased sensitivity with increasing dose threshold was consistent across all studied combinations of %Diff/DTA. Criteria such as 2%/3 mm and 3%/2 mm with a 50% threshold at 90% pixels passing are shown to be more appropriately sensitive without being overly strict. However, a broadening of the penumbra by as much as 5 mm in the beam configuration was difficult to detect with commonly used criteria, as well as with the previously mentioned criteria utilizing a threshold of 50%. We have introduced the error curve method, an analysis technique which allows the quantitative determination of gamma criteria sensitivity to induced errors. The application of the error curve method using DMLC IMRT plans measured on the ArcCHECK® device demonstrated that large errors can potentially be missed in IMRT QA with commonly used gamma criteria (e.g., 3%/3 mm, threshold = 10%, 90% pixels passing). Additionally, increasing the dose threshold value can offer dramatic increases in error sensitivity. This approach may allow the selection of more meaningful gamma criteria for IMRT QA and is straightforward to apply to other combinations of devices and treatment techniques.
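
    To make the gamma comparison itself concrete, here is a simplified 1-D global-gamma sketch on synthetic dose profiles; it is not the ArcCHECK workflow or the error-curve method of the paper, and the profiles, criteria, and threshold values are assumptions. It does illustrate how the %Diff/DTA criteria and the low-dose threshold enter the passing rate.

```python
"""
Simplified 1-D global gamma comparison between a measured and a calculated
dose profile. Profiles, criteria, and threshold values are assumptions.
"""
import numpy as np

x = np.linspace(0, 100, 501)                               # position (mm)
calc = 100 * np.exp(-0.5 * ((x - 50) / 15) ** 2)           # calculated dose (%)
meas = 1.03 * 100 * np.exp(-0.5 * ((x - 51) / 15) ** 2)    # measured: 3% high, 1 mm shift

def gamma_pass_rate(meas, calc, x, dose_crit_pct, dta_mm, threshold_pct):
    """Global gamma: dose difference normalised to dose_crit % of max dose."""
    d_norm = dose_crit_pct / 100.0 * calc.max()
    keep = meas >= threshold_pct / 100.0 * calc.max()       # low-dose threshold
    gammas = []
    for xm, dm in zip(x[keep], meas[keep]):
        g2 = ((dm - calc) / d_norm) ** 2 + ((xm - x) / dta_mm) ** 2
        gammas.append(np.sqrt(g2.min()))                    # best match over calc points
    return 100.0 * np.mean(np.asarray(gammas) <= 1.0)

for crit in ((3, 3, 10), (3, 3, 50), (2, 3, 50), (3, 2, 50)):
    rate = gamma_pass_rate(meas, calc, x, *crit)
    print(f"{crit[0]}%/{crit[1]} mm, threshold {crit[2]}%: passing rate {rate:5.1f}%")
```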

  9. A Novel Methodology to Validate the Accuracy of Extraoral Dental Scanners and Digital Articulation Systems.

    PubMed

    Ellakwa, A; Elnajar, S; Littlefair, D; Sara, G

    2018-05-03

    The aim of the current study is to develop a novel method to investigate the accuracy of 3D scanners and digital articulation systems. Upper and lower poured stone models were created by taking impressions of a fully dentate male participant (fifty years old). Titanium spheres were added to the models to provide an easily recognisable geometric shape for measurement after scanning and digital articulation. Measurements were obtained using a Coordinate Measuring Machine (CMM) to record volumetric error, articulation error and clinical effect error. Three scanners were compared, including the Imetric 3D iScan d104i, Shining 3D AutoScan-DS100 and 3Shape D800, as well as their respective digital articulation software packages. The Stoneglass Industries PDC digital articulation system was also applied to the Imetric scans for comparison with the CMM measurements. All the scans displayed low volumetric error (p > 0.05), indicating that the scanners themselves made only a minor contribution to the articulation and clinical effect errors. The PDC digital articulation system was found to deliver the lowest average errors, with good repeatability of results. The new measuring technique in the current study was able to assess the scanning and articulation accuracy of the four systems investigated. The PDC digital articulation system using Imetric scans was recommended as it displayed the lowest articulation error and clinical effect error with good repeatability. The low errors from the PDC system may have been due to its use of a 3D axis for alignment rather than the use of a best fit. Copyright© 2018 Dennis Barber Ltd.

  10. Impaired semantic knowledge underlies the reduced verbal short-term storage capacity in Alzheimer's disease.

    PubMed

    Peters, Frédéric; Majerus, Steve; De Baerdemaeker, Julie; Salmon, Eric; Collette, Fabienne

    2009-12-01

    A decrease in verbal short-term memory (STM) capacity is consistently observed in patients with Alzheimer's disease (AD). Although this impairment has been mainly attributed to attentional deficits during encoding and maintenance, the progressive deterioration of semantic knowledge in early stages of AD may also be an important determinant of poor STM performance. The aim of this study was to examine the influence of semantic knowledge on verbal short-term memory storage capacity in normal aging and in AD by exploring the impact of word imageability on STM performance. Sixteen patients suffering from mild AD, 16 healthy elderly subjects and 16 young subjects performed an immediate serial recall task using word lists containing high or low imageability words. All participant groups recalled more high imageability words than low imageability words, but the effect of word imageability on verbal STM was greater in AD patients than in both the young and the elderly control groups. More precisely, AD patients showed a marked decrease in STM performance when presented with lists of low imageability words, whereas recall of high imageability words was relatively well preserved. Furthermore, AD patients displayed an abnormal proportion of phonological errors in the low imageability condition. Overall, these results indicate that the support of semantic knowledge on STM performance was impaired for lists of low imageability words in AD patients. More generally, these findings suggest that the deterioration of semantic knowledge is partly responsible for the poor verbal short-term storage capacity observed in AD.

  11. Cluster-Continuum Calculations of Hydration Free Energies of Anions and Group 12 Divalent Cations.

    PubMed

    Riccardi, Demian; Guo, Hao-Bo; Parks, Jerry M; Gu, Baohua; Liang, Liyuan; Smith, Jeremy C

    2013-01-08

    Understanding aqueous phase processes involving group 12 metal cations is relevant to both environmental and biological sciences. Here, quantum chemical methods and polarizable continuum models are used to compute the hydration free energies of a series of divalent group 12 metal cations (Zn^2+, Cd^2+, and Hg^2+) together with Cu^2+ and the anions OH^-, SH^-, Cl^-, and F^-. A cluster-continuum method is employed, in which gas-phase clusters of the ion and explicit solvent molecules are immersed in a dielectric continuum. Two approaches to define the size of the solute-water cluster are compared, in which the number of explicit waters used is either held constant or determined variationally as that of the most favorable hydration free energy. Results obtained with various polarizable continuum models are also presented. Each leg of the relevant thermodynamic cycle is analyzed in detail to determine how different terms contribute to the observed mean signed error (MSE) and the standard deviation of the error (STDEV) between theory and experiment. The use of a constant number of water molecules for each set of ions is found to lead to predicted relative trends that benefit from error cancellation. Overall, the best results are obtained with MP2 and the Solvent Model D polarizable continuum model (SMD), with eight explicit water molecules for anions and 10 for the metal cations, yielding a STDEV of 2.3 kcal mol^-1 and MSE of 0.9 kcal mol^-1 between theoretical and experimental hydration free energies, which range from -72.4 kcal mol^-1 for SH^- to -505.9 kcal mol^-1 for Cu^2+. Using B3PW91 with DFT-D3 dispersion corrections (B3PW91-D) and SMD yields a STDEV of 3.3 kcal mol^-1 and MSE of 1.6 kcal mol^-1, to which adding MP2 corrections from smaller divalent metal cation water molecule clusters yields very good agreement with the full MP2 results. Using B3PW91-D and SMD, with two explicit water molecules for anions and six for divalent metal cations, also yields reasonable agreement with experimental values, due in part to fortuitous error cancellation associated with the metal cations. Overall, the results indicate that the careful application of quantum chemical cluster-continuum methods provides valuable insight into aqueous ionic processes that depend on both local and long-range electrostatic interactions with the solvent.

  12. Evaluation of Acoustic Doppler Current Profiler measurements of river discharge

    USGS Publications Warehouse

    Morlock, S.E.

    1996-01-01

    The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.

  13. High-resolution frequency measurement method with a wide-frequency range based on a quantized phase step law.

    PubMed

    Du, Baoqiang; Dong, Shaofeng; Wang, Yanfeng; Guo, Shuting; Cao, Lingzhi; Zhou, Wei; Zuo, Yandi; Liu, Dan

    2013-11-01

    A wide-frequency and high-resolution frequency measurement method based on the quantized phase step law is presented in this paper. Utilizing a variation law of the phase differences, the direct different frequency phase processing, and the phase group synchronization phenomenon, combining an A/D converter and the adaptive phase shifting principle, a counter gate is established in the phase coincidences at one-group intervals, which eliminates the ±1 counter error in the traditional frequency measurement method. More importantly, the direct phase comparison, the measurement, and the control between any periodic signals have been realized without frequency normalization in this method. Experimental results show that sub-picosecond resolution can be easily obtained in the frequency measurement, the frequency standard comparison, and the phase-locked control based on the phase quantization processing technique. The method may be widely used in navigation positioning, space techniques, communication, radar, astronomy, atomic frequency standards, and other high-tech fields.

  14. Children's difficulties handling dual identity.

    PubMed

    Apperly, I A; Robinson, E J

    2001-04-01

    Thirty-nine 6-year-old children participated in a longitudinal study using tasks that required handling of dual identity. Pre- and posttest sessions employed tasks involving a protagonist who was partially informed about an object or person; for example, he knew an item as a ball but not as a present. Children who judged correctly that the protagonist did not know the ball was a present (thereby demonstrating some understanding of the consequences of limited information access), often judged incorrectly (1) that he knew that there was a present in the box, and (2) that he would search as if fully informed. Intervening sessions added contextual support and tried to clarify the experimenter's communicative intentions in a range of ways. Despite signs of general improvement, the distinctive pattern of errors persisted in every case. These findings go beyond previous studies of children's handling of limited information access, and are hard to accommodate within existing accounts of developing understanding of the mind. Copyright 2001 Academic Press.

  15. Biophysical Mechanistic Modelling Quantifies the Effects of Plant Traits on Fire Severity: Species, Not Surface Fuel Loads, Determine Flame Dimensions in Eucalypt Forests

    PubMed Central

    Bedward, Michael; Penman, Trent D.; Doherty, Michael D.; Weber, Rodney O.; Gill, A. Malcolm; Cary, Geoffrey J.

    2016-01-01

    The influence of plant traits on forest fire behaviour has evolutionary, ecological and management implications, but is poorly understood and frequently discounted. We use a process model to quantify that influence and provide validation in a diverse range of eucalypt forests burnt under varying conditions. Measured height of consumption was compared to heights predicted using a surface fuel fire behaviour model, then key aspects of our model were sequentially added to this with and without species-specific information. Our fully specified model had a mean absolute error 3.8 times smaller than the otherwise identical surface fuel model (p < 0.01), and correctly predicted the height of larger (≥1 m) flames 12 times more often (p < 0.001). We conclude that the primary endogenous drivers of fire severity are the species of plants present rather than the surface fuel load, and demonstrate the accuracy and versatility of the model for quantifying this. PMID:27529789

  16. Biophysical Mechanistic Modelling Quantifies the Effects of Plant Traits on Fire Severity: Species, Not Surface Fuel Loads, Determine Flame Dimensions in Eucalypt Forests.

    PubMed

    Zylstra, Philip; Bradstock, Ross A; Bedward, Michael; Penman, Trent D; Doherty, Michael D; Weber, Rodney O; Gill, A Malcolm; Cary, Geoffrey J

    2016-01-01

    The influence of plant traits on forest fire behaviour has evolutionary, ecological and management implications, but is poorly understood and frequently discounted. We use a process model to quantify that influence and provide validation in a diverse range of eucalypt forests burnt under varying conditions. Measured height of consumption was compared to heights predicted using a surface fuel fire behaviour model, then key aspects of our model were sequentially added to this with and without species-specific information. Our fully specified model had a mean absolute error 3.8 times smaller than the otherwise identical surface fuel model (p < 0.01), and correctly predicted the height of larger (≥1 m) flames 12 times more often (p < 0.001). We conclude that the primary endogenous drivers of fire severity are the species of plants present rather than the surface fuel load, and demonstrate the accuracy and versatility of the model for quantifying this.

  17. Northern Islands, human error, and environmental degradation: A view of social and ecological change in the medieval North Atlantic

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    McGovern, T.H.; Bigelow, G.; Amorosi, T.

    1988-09-01

    Between ca. 790 and 1,000 AD, Scandinavian settlers occupied the islands of the North Atlantic: Shetland, the Orkneys, the Hebrides, the Faroes, Iceland, and Greenland. These offshore islands initially supported stands of willow, alder, and birch, and a range of non-arboreal species suitable for pasture for the imported Norse domestic animals. Overstocking of domestic animals, fuel collection, ironworking, and construction activity seems to have rapidly depleted the dwarf trees, and several scholars argue that soil erosion and other forms of environmental degradation also resulted from Norse land-use practices in the region. Such degradation of pasture communities may have played a significant role in changing social relationships and late medieval economic decline in the western tier colonies of Iceland and Greenland. This paper presents simple quantified models for Scandinavian environmental impact in the region, and suggests some sociopolitical causes for ultimately maladaptive floral degradation.

  18. A linear refractive photovoltaic concentrator solar array flight experiment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jones, P.A.; Murphy, D.M.; Piszczor, M.F.

    1995-12-31

    Concentrator arrays deliver a number of generic benefits for space including high array efficiency, protection from space radiation effects, and minimized plasma interactions. The line focus concentrator concept delivers two added advantages: (1) low-cost mass production of the lens material and (2) relaxation of precise array tracking requirements to only a single axis. New array designs emphasize lightweight, high stiffness, stow-ability and ease of manufacture and assembly. The linear refractive concentrator can be designed to provide an essentially flat response over a wide range of longitudinal pointing errors for satellites having only single-axis tracking capability. In this paper the authors address the current status of the SCARLET linear concentrator program with special emphasis on hardware development of an array-level linear refractive concentrator flight experiment. An aggressive, 6-month development and flight validation program, sponsored by the Ballistic Missile Defense Organization (BMDO) and NASA Lewis Research Center, will quantify and verify SCARLET benefits with in-orbit performance measurements.

  19. Age estimation using tooth cementum annulation.

    PubMed

    Wittwer-Backofen, Ursula

    2012-01-01

    In forensic anthropology, age diagnosis of unidentified bodies significantly helps in the identification process. Among the established aging methods in anthropology, tooth cementum annulation (TCA) is increasingly used due to its narrow error range, which at best can be as small as 5 years in adult individuals. The rhythm of cementum appositions of seasonally different density provides the principal mechanism on which TCA is based. Using histological preparation techniques for hard tissues, transversal tooth root sections are produced which can be analyzed by transmitted light microscopy. Even though no standard TCA preparation protocol exists, several methodological validation studies recommend specific treatments depending on the individual condition of the teeth. Individual age is estimated by adding the mean tooth eruption age to the number of microscopically detected dark layers, which are separated by bright layers and each represent 1 year of age. To assure high reliability of the method, TCA age diagnosis has to be based on several teeth of one individual if possible and needs to be supported by different techniques in forensic cases.

  20. Evaluation of commercially available techniques and development of simplified methods for measuring grille airflows in HVAC systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Iain S.; Wray, Craig P.; Guillot, Cyril

    2003-08-01

    In this report, we discuss the accuracy of flow hoods for residential applications, based on laboratory tests and field studies. The results indicate that commercially available hoods are often inadequate to measure flows in residential systems, and that there can be a wide range of performance between different flow hoods. The errors are due to poor calibrations, sensitivity of existing hoods to grille flow non-uniformities, and flow changes from added flow resistance. We also evaluated several simple techniques for measuring register airflows that could be adopted by the HVAC industry and homeowners as simple diagnostics that are often as accurate as commercially available devices. Our test results also show that current calibration procedures for flow hoods do not account for field application problems. As a result, organizations such as ASHRAE or ASTM need to develop a new standard for flow hood calibration, along with a new measurement standard to address field use of flow hoods.

  1. Advances in Understanding Air Pollution and Cardiovascular Diseases: The Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air)

    PubMed Central

    Kaufman, Joel D.; Spalt, Elizabeth W.; Curl, Cynthia L.; Hajat, Anjum; Jones, Miranda R.; Kim, Sun-Young; Vedal, Sverre; Szpiro, Adam A.; Gassett, Amanda; Sheppard, Lianne; Daviglus, Martha L.; Adar, Sara D.

    2016-01-01

    The Multi-Ethnic Study of Atherosclerosis and Air Pollution (MESA Air) leveraged the platform of the MESA cohort into a prospective longitudinal study of relationships between air pollution and cardiovascular health. MESA Air researchers developed fine-scale, state-of-the-art air pollution exposure models for the MESA Air communities, creating individual exposure estimates for each participant. These models combine cohort-specific exposure monitoring, existing monitoring systems, and an extensive database of geographic and meteorological information. Together with extensive phenotyping in MESA, and with participants and health measurements added to the cohort, MESA Air investigated the effects of environmental exposures on a wide range of outcomes. Advances by the MESA Air team included not only a new approach to exposure modeling but also biostatistical advances in addressing exposure measurement error and temporal confounding. The MESA Air study advanced our understanding of the impact of air pollutants on cardiovascular disease and provided a research platform for advances in environmental epidemiology. PMID:27741981

  2. Ten years in the library: new data confirm paleontological patterns

    NASA Technical Reports Server (NTRS)

    Sepkoski, J. J. Jr; Sepkoski JJ, J. r. (Principal Investigator)

    1993-01-01

    A comparison is made between compilations of times of origination and extinction of fossil marine animal families published in 1982 and 1992. As a result of ten years of library research, half of the information in the compendia has changed: families have been added and deleted, low-resolution stratigraphic data have been improved, and intervals of origination and extinction have been altered. Despite these changes, apparent macroevolutionary patterns for the entire marine fauna have remained constant. Diversity curves compiled from the two data bases are very similar, with a goodness-of-fit of 99%; the principal difference is that the 1992 curve averages 13% higher than the older curve. Both numbers and percentages of origination and extinction also match well, with fits ranging from 83% to 95%. All major events of radiation and extinction are identical. Therefore, errors in large paleontological data bases and arbitrariness of included taxa are not necessarily impediments to the analysis of pattern in the fossil record, so long as the data are sufficiently numerous.

  3. Passive Ranging Using Infra-Red Atmospheric Attenuation

    DTIC Science & Technology

    2010-03-01

    was the Bomem MR-154 Fourier Transform Spectrometer (FTS). The FTS used both an HgCdTe and InSb detector. For this study, the primary source of data...also outfitted with an HgCdTe and InSb detector. Again, only data from the InSb detector was used. The spectral range of data collected was from...an uncertainty in transmittance of 0.01 (figure 20). This would yield an error in range of 6%. Other sources of error include detector noise or

  4. Ray tracing evaluation of a technique for correcting the refraction errors in satellite tracking data

    NASA Technical Reports Server (NTRS)

    Gardner, C. S.; Rowlett, J. R.; Hendrickson, B. E.

    1978-01-01

    Errors may be introduced in satellite laser ranging data by atmospheric refractivity. Ray tracing data have indicated that horizontal refractivity gradients may introduce nearly 3-cm rms error when satellites are near 10-degree elevation. A correction formula to compensate for the horizontal gradients has been developed. Its accuracy is evaluated by comparing it to refractivity profiles. It is found that if both spherical and gradient correction formulas are employed in conjunction with meteorological measurements, a range resolution of one cm or less is feasible for satellite elevation angles above 10 degrees.

  5. S-193 scatterometer transfer function analysis for data processing

    NASA Technical Reports Server (NTRS)

    Johnson, L.

    1974-01-01

    A mathematical model for converting raw data measurements of the S-193 scatterometer into processed values of radar scattering coefficient is presented. The argument is based on an approximation derived from the Radar Equation and actual operating principles of the S-193 Scatterometer hardware. Possible error sources are inaccuracies in transmitted wavelength, range, antenna illumination integrals, and the instrument itself. The dominant source of error in the calculation of scattering coefficient is the accuracy of the range. All other factors, with the possible exception of the illumination integral, are not considered to cause significant error in the calculation of scattering coefficient.
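
    For orientation only, the sketch below inverts a generic textbook form of the monostatic radar equation for a distributed target to recover the scattering coefficient; it is not the actual S-193 transfer function, and the parameter names are illustrative assumptions. The fourth-power range term makes the result highly sensitive to range error, consistent with range being identified above as the dominant error source.

      # Generic radar-equation inversion for the scattering coefficient sigma0
      # (illustrative assumption, not the S-193 transfer function):
      #   P_r = P_t * G^2 * lambda^2 * sigma0 * A_ill / ((4*pi)^3 * R^4 * L)
      import math

      def sigma0_from_received_power(p_r, p_t, gain, wavelength, slant_range,
                                     illuminated_area, losses=1.0):
          numerator = p_r * (4.0 * math.pi) ** 3 * slant_range ** 4 * losses
          denominator = p_t * gain ** 2 * wavelength ** 2 * illuminated_area
          return numerator / denominator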

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yu, Juan; Beltran, Chris J., E-mail: beltran.chris@mayo.edu; Herman, Michael G.

    Purpose: To quantitatively and systematically assess dosimetric effects induced by spot positioning error as a function of spot spacing (SS) on intensity-modulated proton therapy (IMPT) plan quality and to facilitate evaluation of safety tolerance limits on spot position. Methods: Spot position errors (PE) ranging from 1 to 2 mm were simulated. Simple plans were created on a water phantom, and IMPT plans were calculated on two pediatric patients with a brain tumor of 28 and 3 cc, respectively, using a commercial planning system. For the phantom, a uniform dose was delivered to targets located at different depths from 10 to 20 cm with various field sizes from 2² to 15² cm². Two nominal spot sizes, 4.0 and 6.6 mm of 1 σ in water at isocenter, were used for treatment planning. The SS ranged from 0.5 σ to 1.5 σ, which is 2–6 mm for the small spot size and 3.3–9.9 mm for the large spot size. Various perturbation scenarios of a single spot error and systematic and random multiple spot errors were studied. To quantify the dosimetric effects, percent dose error (PDE) depth profiles and the value of percent dose error at the maximum dose difference (PDE[ΔDmax]) were used for evaluation. Results: A pair of hot and cold spots was created per spot shift. PDE[ΔDmax] is found to be a complex function of PE, SS, spot size, depth, and global spot distribution that can be well defined in simple models. For volumetric targets, the PDE[ΔDmax] is not noticeably affected by the change of field size or target volume within the studied ranges. In general, reducing SS decreased the dose error. For the facility studied, given a single spot error with a PE of 1.2 mm and for both spot sizes, a SS of 1 σ resulted in a 2% maximum dose error; a SS larger than 1.25 σ substantially increased the dose error and its sensitivity to PE. A similar trend was observed in multiple spot errors (both systematic and random errors). Systematic PE can lead to noticeable hot spots along the field edges, which may be near critical structures. However, random PE showed minimal dose error. Conclusions: Dose error dependence for PE was quantitatively and systematically characterized and an analytic tool was built to simulate systematic and random errors for patient-specific IMPT. This information facilitates the determination of facility-specific spot position error thresholds.
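
    As a rough illustration of the evaluation metric, the sketch below computes a percent dose error at the point of maximum dose difference between a nominal and a perturbed dose grid; the array shapes and the normalization to a prescription dose are assumptions for illustration, not the authors' analytic tool.

      # Percent dose error at the location of the maximum dose difference
      # (a PDE[ΔDmax]-style metric; normalization to prescription dose is assumed).
      import numpy as np

      def pde_at_max_dose_difference(nominal_dose, perturbed_dose, prescription_dose):
          diff = perturbed_dose - nominal_dose
          idx = np.unravel_index(np.argmax(np.abs(diff)), diff.shape)
          return 100.0 * diff[idx] / prescription_dose

      # Toy example with a random 3D dose grid:
      rng = np.random.default_rng(0)
      nominal = rng.random((20, 20, 20))
      perturbed = nominal + 0.01 * rng.standard_normal((20, 20, 20))
      print(pde_at_max_dose_difference(nominal, perturbed, prescription_dose=1.0))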

  7. Frequency-domain Green's functions for radar waves in heterogeneous 2.5D media

    USGS Publications Warehouse

    Ellefsen, K.J.; Croize, D.; Mazzella, A.T.; McKenna, J.R.

    2009-01-01

    Green's functions for radar waves propagating in heterogeneous 2.5D media might be calculated in the frequency domain using a hybrid method. The model is defined in the Cartesian coordinate system, and its electromagnetic properties might vary in the x- and z-directions, but not in the y-direction. Wave propagation in the x- and z-directions is simulated with the finite-difference method, and wave propagation in the y-direction is simulated with an analytic function. The absorbing boundaries on the finite-difference grid are perfectly matched layers that have been modified to make them compatible with the hybrid method. The accuracy of these numerical Green's functions is assessed by comparing them with independently calculated Green's functions. For a homogeneous model, the magnitude errors range from -4.16% through 0.44%, and the phase errors range from -0.06% through 4.86%. For a layered model, the magnitude errors range from -2.60% through 2.06%, and the phase errors range from -0.49% through 2.73%. These numerical Green's functions might be used for forward modeling and full waveform inversion. © 2009 Society of Exploration Geophysicists. All rights reserved.
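
    The accuracy check amounts to an element-wise comparison of two complex-valued Green's functions. A minimal sketch is given below; how the percent phase error is normalized (here, as a fraction of a full cycle) is an assumption for illustration, not necessarily the paper's definition.

      # Percent magnitude and phase errors of a numerical Green's function
      # relative to an independently calculated reference (both complex arrays).
      import numpy as np

      def percent_errors(numerical, reference):
          mag_err = 100.0 * (np.abs(numerical) - np.abs(reference)) / np.abs(reference)
          # Wrapped phase difference expressed as a percentage of one full cycle
          # (assumed convention, not necessarily the paper's).
          phase_err = 100.0 * np.angle(numerical / reference) / (2.0 * np.pi)
          return mag_err, phase_err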

  8. Automated estimation of abdominal effective diameter for body size normalization of CT dose.

    PubMed

    Cheng, Phillip M

    2013-06-01

    Most CT dose data aggregation methods do not currently adjust dose values for patient size. This work proposes a simple heuristic for reliably computing an effective diameter of a patient from an abdominal CT image. Evaluation of this method on 106 patients scanned on Philips Brilliance 64 and Brilliance Big Bore scanners demonstrates close correspondence between computed and manually measured patient effective diameters, with a mean absolute error of 1.0 cm (error range +2.2 to -0.4 cm). This level of correspondence was also demonstrated for 60 patients on Siemens, General Electric, and Toshiba scanners. A calculated effective diameter in the middle slice of an abdominal CT study was found to be a close approximation of the mean calculated effective diameter for the study, with a mean absolute error of approximately 1.0 cm (error range +3.5 to -2.2 cm). Furthermore, the mean absolute error for an adjusted mean volume computed tomography dose index (CTDIvol) using a mid-study calculated effective diameter, versus a mean per-slice adjusted CTDIvol based on the calculated effective diameter of each slice, was 0.59 mGy (error range 1.64 to -3.12 mGy). These results are used to calculate approximate normalized dose length product values in an abdominal CT dose database of 12,506 studies.
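
    The underlying geometry is straightforward: segment the patient cross-section on a single axial slice, take its area A, and report the diameter of the circle of equal area, i.e. effective diameter = 2*sqrt(A/pi). The sketch below illustrates this idea; the air threshold and the pixel-spacing handling are assumptions for illustration, not the paper's exact heuristic.

      # Effective diameter of one axial CT slice from a crude body segmentation
      # (threshold value and spacing handling are illustrative assumptions).
      import numpy as np

      def effective_diameter_cm(slice_hu, pixel_spacing_mm, air_threshold_hu=-400):
          body_mask = slice_hu > air_threshold_hu              # exclude surrounding air
          pixel_area_cm2 = (pixel_spacing_mm[0] / 10.0) * (pixel_spacing_mm[1] / 10.0)
          area_cm2 = body_mask.sum() * pixel_area_cm2
          return 2.0 * np.sqrt(area_cm2 / np.pi)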

  9. Reducing the Conflict Factors Strategies in Question Answering System

    NASA Astrophysics Data System (ADS)

    Suwarningsih, W.; Purwarianti, A.; Supriana, I.

    2017-03-01

    A rule-based system is prone to conflict because new knowledge emerges continually and must be entered into the knowledge base used by the system. A conflict between rules in the knowledge base can lead to reasoning errors or circular reasoning. Newly added rules must therefore be checked for conflicts with existing rules, and only conflict-free rules can actually be added to the knowledge base. Given these conditions, this paper proposes a conflict resolution strategy for a medical debriefing system by analyzing runtime-based scenarios to improve the efficiency and reliability of the system.

  10. Evaluation of assigned-value uncertainty for complex calibrator value assignment processes: a prealbumin example.

    PubMed

    Middleton, John; Vaks, Jeffrey E

    2007-04-01

    Errors in calibrator-assigned values lead to errors in the testing of patient samples. The ability to estimate the uncertainties of calibrator-assigned values and other variables minimizes errors in testing processes. International Organization for Standardization guidelines provide simple equations for estimating calibrator uncertainty in simple value-assignment processes, but other methods are needed to estimate uncertainty in complex processes. We estimated the assigned-value uncertainty with a Monte Carlo computer simulation of a complex value-assignment process, based on a formalized description of the process, with measurement parameters estimated experimentally. This method was applied to study the uncertainty of a multilevel calibrator value assignment for a prealbumin immunoassay. The simulation results showed that the component of uncertainty added by the process of value transfer from the reference material CRM470 to the calibrator is smaller than that of the reference material itself (<0.8% vs 3.7%). Varying the process parameters in the simulation model allowed the process to be optimized while keeping the added uncertainty small. The patient result uncertainty caused by the calibrator uncertainty was also found to be small. This method of estimating uncertainty is a powerful tool that allows calibrator uncertainty to be estimated for the optimization of various value-assignment processes, with a reduced number of measurements and lower reagent costs, while satisfying the uncertainty requirements. The new method expands and augments existing methods to allow estimation of uncertainty in complex processes.
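
    A minimal sketch of the general idea behind Monte Carlo propagation through a value-transfer chain follows; the single transfer step, the normal distributions, and the exact use of the abstract's 3.7% and <0.8% figures are assumptions for illustration, not the authors' process model.

      # Illustrative Monte Carlo sketch: propagate uncertainty from a reference
      # material through one assumed value-transfer step to an assigned value.
      import numpy as np

      rng = np.random.default_rng(42)
      n_trials = 100_000
      reference_value = 1.0      # arbitrary units
      u_reference = 0.037        # 3.7% relative standard uncertainty of the reference material
      u_transfer = 0.008         # ~0.8% relative uncertainty added by the assumed transfer step

      reference_draws = reference_value * (1 + u_reference * rng.standard_normal(n_trials))
      assigned_values = reference_draws * (1 + u_transfer * rng.standard_normal(n_trials))

      print("relative uncertainty of assigned value:",
            assigned_values.std() / assigned_values.mean())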

  11. Puzzles in modern biology. V. Why are genomes overwired?

    PubMed

    Frank, Steven A

    2017-01-01

    Many factors affect eukaryotic gene expression. Transcription factors, histone codes, DNA folding, and noncoding RNA modulate expression. Those factors interact in large, broadly connected regulatory control networks. An engineer following classical principles of control theory would design a simpler regulatory network. Why are genomes overwired? Neutrality or enhanced robustness may lead to the accumulation of additional factors that complicate network architecture. Dynamics progresses like a ratchet. New factors get added. Genomes adapt to the additional complexity. The newly added factors can no longer be removed without significant loss of fitness. Alternatively, highly wired genomes may be more malleable. In large networks, most genomic variants tend to have a relatively small effect on gene expression and trait values. Many small effects lead to a smooth gradient, in which traits may change steadily with respect to underlying regulatory changes. A smooth gradient may provide a continuous path from a starting point up to the highest peak of performance. A potential path of increasing performance promotes adaptability and learning. Genomes gain by the inductive process of natural selection, a trial and error learning algorithm that discovers general solutions for adapting to environmental challenge. Similarly, deeply and densely connected computational networks gain by various inductive trial and error learning procedures, in which the networks learn to reduce the errors in sequential trials. Overwiring alters the geometry of induction by smoothing the gradient along the inductive pathways of improving performance. Those overwiring benefits for induction apply to both natural biological networks and artificial deep learning networks.

  12. Red ball ranging optimization based on dual camera ranging method

    NASA Astrophysics Data System (ADS)

    Kuang, Lei; Sun, Weijia; Liu, Jiaming; Tang, Matthew Wai-Chung

    2018-05-01

    In this paper, the process of positioning and moving to a target red ball by the NAO robot through its camera system is analyzed and improved using the dual camera ranging method. The single camera ranging method, which is adopted by the NAO robot, was first studied and experimented with. Because the error of the current NAO robot is not attributable to a single variable, the experiments were divided into two parts to obtain more accurate single camera ranging data: forward ranging and backward ranging. Moreover, two USB cameras were used in our experiments; the Hough circle method was adopted to identify the ball, while the HSV color space model was used to identify the red color. Our results showed that the dual camera ranging method reduced the variance of error in ball tracking from 0.68 to 0.20.
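
    The detection stage described above can be illustrated with a short OpenCV sketch in Python: an HSV mask for red followed by Hough circle detection. The threshold values and Hough parameters are assumptions for illustration, not the authors' settings.

      # Red-ball detection via HSV masking and Hough circles (illustrative parameters).
      import cv2
      import numpy as np

      def detect_red_ball(frame_bgr):
          hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
          # Red wraps around the hue axis, so combine two hue ranges.
          mask = cv2.inRange(hsv, np.array((0, 100, 100)), np.array((10, 255, 255))) | \
                 cv2.inRange(hsv, np.array((170, 100, 100)), np.array((180, 255, 255)))
          mask = cv2.GaussianBlur(mask, (9, 9), 2)
          circles = cv2.HoughCircles(mask, cv2.HOUGH_GRADIENT, dp=1.2, minDist=50,
                                     param1=100, param2=20, minRadius=5, maxRadius=200)
          if circles is None:
              return None
          x, y, r = circles[0][0]
          return float(x), float(y), float(r)   # image coordinates and radius in pixels

    Once the ball is located in both camera images, the range can then be recovered by standard stereo triangulation using the known baseline between the two cameras.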

  13. Automatic process control in anaerobic digestion technology: A critical review.

    PubMed

    Nguyen, Duc; Gadhamshetty, Venkataramana; Nitayavardhana, Saoharit; Khanal, Samir Kumar

    2015-10-01

    Anaerobic digestion (AD) is a mature technology that relies upon the synergistic effort of a diverse group of microbial communities to metabolize diverse organic substrates. However, AD is highly sensitive to process disturbances, and thus it is advantageous to use online monitoring and process control techniques to operate the AD process efficiently. A range of electrochemical, chromatographic and spectroscopic devices can be deployed for on-line monitoring and control of the AD process. While the complexity of the control strategy ranges from simple feedback control to advanced control systems, there is some debate on the implementation of advanced instrumentation and advanced control strategies. Centralized AD plants could be the answer for the application of progressive automatic control in this field. This article provides a critical overview of the available automatic control technologies that can be implemented in AD processes at different scales. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. A simple model for studying rotation errors of gimbal mount axes in laser tracking system based on spherical mirror as a reflection unit

    NASA Astrophysics Data System (ADS)

    Song, Huixu; Shi, Zhaoyao; Chen, Hongfang; Sun, Yanqiang

    2018-01-01

    This paper presents a novel experimental approach and a simple model for verifying that the spherical mirror of a laser tracking system can lessen the effect of rotation errors of the gimbal mount axes, based on relative-motion reasoning. Sufficient material and evidence are provided to support that this simple model could replace the complex optical system in a laser tracking system. The experimental approach and model interchange the kinematic relationship between the spherical mirror and the gimbal mount axes in the laser tracking system. With the gimbal mount axes held fixed, their rotation error motions are replaced by spatial micro-displacements of the spherical mirror. These motions are simulated by driving the spherical mirror along the optical axis and the vertical direction with a precision positioning platform. The effect on laser ranging measurement accuracy of the displacement caused by the rotation errors of the gimbal mount axes is recorded from the output of the laser interferometer. The experimental results show that the laser ranging measurement error caused by the rotation errors is less than 0.1 μm if the radial error motion and axial error motion are under 10 μm. The method based on relative-motion reasoning not only simplifies the experimental procedure but also demonstrates that the spherical mirror can reduce the effect of rotation errors of the gimbal mount axes in a laser tracking system.

  15. Using SEM to Analyze Complex Survey Data: A Comparison between Design-Based Single-Level and Model-Based Multilevel Approaches

    ERIC Educational Resources Information Center

    Wu, Jiun-Yu; Kwok, Oi-man

    2012-01-01

    Both ad-hoc robust sandwich standard error estimators (design-based approach) and multilevel analysis (model-based approach) are commonly used for analyzing complex survey data with nonindependent observations. Although these 2 approaches perform equally well on analyzing complex survey data with equal between- and within-level model structures…

  16. Effect of forest canopy on GPS-based movement data

    Treesearch

    Nicholas J. DeCesare; John R. Squires; Jay A. Kolbe

    2005-01-01

    The advancing role of Global Positioning System (GPS) technology in ecology has made studies of animal movement possible for larger and more vagile species. A simple field test revealed that lengths of GPS-based movement data were strongly biased (P<0.001) by effects of forest canopy. Global Positioning System error added an average of 27.5% additional...

  17. New capacities and modifications for NASTRAN level 17.5 at DTNSRDC

    NASA Technical Reports Server (NTRS)

    Hurwitz, M. M.

    1980-01-01

    Since 1970 DTNSRDC has been modifying NASTRAN to suit various Navy requirements. These modifications include capabilities as well as user conveniences and error corrections. The new features added to NASTRAN Level 17.5 are described. The subject areas of the additions include magnetostatics, piezoelectricity, fluid structure interactions, isoparametric finite elements, and shock design for shipboard equipment.

  18. Defense Logistics Agency Disposition Services Afghanistan Disposal Process Needed Improvement

    DTIC Science & Technology

    2013-11-08

    audit, and management was proactive in correcting the deficiencies we identified. DLA DS eliminated backlogs, identified and corrected system ...problems, provided additional system training, corrected coding errors, added personnel to key positions, addressed scale issues, submitted debit...Service Automated Information System to the Reutilization Business Integration (RBI) solution. The implementation of RBI in Afghanistan occurred in

  19. 75 FR 33159 - Airworthiness Directives; Bell Helicopter Textron Canada Model 222, 222B, 222U, 230, and 430...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-06-11

    ... amendment is prompted by the approved rework of certain blades and two newly redesigned blades, which, if... corrected some typographical errors. Since issuing AD 2005-04-09, the manufacturer has introduced a rework... the approved rework of certain blades and two newly redesigned blades, which, if installed...

  20. The United States’ Rejection of the International Criminal Court: A Strategic Error

    DTIC Science & Technology

    2008-05-09

    ...aggression is added to the ICC's jurisdiction. Aggression as a war crime was charged at Nuremberg, and is a long-standing concept in international law
