Sample records for largest error source

  1. New empirically-derived solar radiation pressure model for GPS satellites

    NASA Technical Reports Server (NTRS)

    Bar-Sever, Y.; Kuang, D.

    2003-01-01

    Solar radiation pressure force is the second largest perturbation acting on GPS satellites, after the gravitational attraction from the Earth, Sun, and Moon. It is the largest error source in the modeling of GPS orbital dynamics.

  2. Interdisciplinary Coordination Reviews: A Process to Reduce Construction Costs.

    ERIC Educational Resources Information Center

    Fewell, Dennis A.

    1998-01-01

    Interdisciplinary Coordination design review is instrumental in detecting coordination errors and omissions in construction documents. Cleansing construction documents of interdisciplinary coordination errors reduces time extensions, the largest source of change orders, and limits exposure to liability claims. Improving the quality of design…

  3. Quantifying the Contributions of Environmental Parameters to CERES Surface Net Radiation Error in China

    NASA Astrophysics Data System (ADS)

    Pan, X.; Yang, Y.; Liu, Y.; Fan, X.; Shan, L.; Zhang, X.

    2018-04-01

    Error source analyses are critical for satellite-retrieved surface net radiation (Rn) products. In this study, we evaluate the Rn error sources in the Clouds and the Earth's Radiant Energy System (CERES) project at 43 sites in China from July to December 2007. The results show that cloud fraction (CF), land surface temperature (LST), atmospheric temperature (AT) and algorithm error dominate the Rn error, with error contributions of -20, 15, 10 and 10 W/m2 (net shortwave (NSW)/longwave (NLW) radiation), respectively. For NSW, the dominant error source is algorithm error (more than 10 W/m2), particularly in spring and summer, when cloud is abundant. For NLW, because of the algorithm's high sensitivity and the large LST/CF errors, LST and CF are the largest error sources, especially in northern China. AT strongly influences the NLW error in southern China because of the large AT error there. Total precipitable water has only a weak influence on the Rn error, despite the algorithm's high sensitivity to it. In order to improve Rn quality, the CF and LST (AT) errors in northern (southern) China should be reduced.

  4. Impact of numerical choices on water conservation in the E3SM Atmosphere Model Version 1 (EAM V1)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations for sea level rise projection. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model is negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in the new model results in a very thin model layer at the Earth’s surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for this model.

  5. Impact of numerical choices on water conservation in the E3SM Atmosphere Model version 1 (EAMv1)

    NASA Astrophysics Data System (ADS)

    Zhang, Kai; Rasch, Philip J.; Taylor, Mark A.; Wan, Hui; Leung, Ruby; Ma, Po-Lun; Golaz, Jean-Christophe; Wolfe, Jon; Lin, Wuyin; Singh, Balwinder; Burrows, Susannah; Yoon, Jin-Ho; Wang, Hailong; Qian, Yun; Tang, Qi; Caldwell, Peter; Xie, Shaocheng

    2018-06-01

    The conservation of total water is an important numerical feature for global Earth system models. Even small conservation problems in the water budget can lead to systematic errors in century-long simulations. This study quantifies and reduces various sources of water conservation error in the atmosphere component of the Energy Exascale Earth System Model. Several sources of water conservation error have been identified during the development of the version 1 (V1) model. The largest errors result from the numerical coupling between the resolved dynamics and the parameterized sub-grid physics. A hybrid coupling using different methods for fluid dynamics and tracer transport provides a reduction of water conservation error by a factor of 50 at 1° horizontal resolution as well as consistent improvements at other resolutions. The second largest error source is the use of an overly simplified relationship between the surface moisture flux and latent heat flux at the interface between the host model and the turbulence parameterization. This error can be prevented by applying the same (correct) relationship throughout the entire model. Two additional types of conservation error that result from correcting the surface moisture flux and clipping negative water concentrations can be avoided by using mass-conserving fixers. With all four error sources addressed, the water conservation error in the V1 model becomes negligible and insensitive to the horizontal resolution. The associated changes in the long-term statistics of the main atmospheric features are small. A sensitivity analysis is carried out to show that the magnitudes of the conservation errors in early V1 versions decrease strongly with temporal resolution but increase with horizontal resolution. The increased vertical resolution in V1 results in a very thin model layer at the Earth's surface, which amplifies the conservation error associated with the surface moisture flux correction. We note that for some of the identified error sources, the proposed fixers are remedies rather than solutions to the problems at their roots. Future improvements in time integration would be beneficial for V1.
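
    To make the last of the four error sources above concrete, the sketch below shows one way a mass-conserving fixer for clipped negative water concentrations can work: negatives are set to zero and the remaining values in the column are rescaled so that the column-integrated mass is unchanged. The variable names and the simple column-rescaling strategy are illustrative assumptions, not the EAMv1 implementation.

    ```python
    import numpy as np

    def clip_and_conserve(q, dp):
        """Remove negative tracer mixing ratios in a column while conserving
        the column-integrated mass (a simple 'mass-conserving fixer' sketch).

        q  : mixing ratios per layer (kg/kg), may contain small negatives
        dp : layer pressure thicknesses (Pa), used as mass weights
        """
        q = np.asarray(q, dtype=float)
        dp = np.asarray(dp, dtype=float)

        mass_before = np.sum(q * dp)          # column "mass" before clipping
        q_clipped = np.maximum(q, 0.0)        # clip negatives to zero
        mass_after = np.sum(q_clipped * dp)   # column "mass" after clipping

        # Rescale the remaining positive values so the total is unchanged.
        if mass_after > 0.0 and mass_before > 0.0:
            q_clipped *= mass_before / mass_after
        return q_clipped

    # Example: a tiny column with one spurious negative mixing ratio.
    print(clip_and_conserve([3.0e-3, 1.0e-3, -2.0e-5, 5.0e-4],
                            [5000.0, 5000.0, 5000.0, 5000.0]))
    ```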

  6. Evaluation of Acoustic Doppler Current Profiler measurements of river discharge

    USGS Publications Warehouse

    Morlock, S.E.

    1996-01-01

    The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.

  7. On the Limitations of Variational Bias Correction

    NASA Technical Reports Server (NTRS)

    Moradi, Isaac; Mccarty, Will; Gelaro, Ronald

    2018-01-01

    Satellite radiances are the largest dataset assimilated into Numerical Weather Prediction (NWP) models; however, the data are subject to errors and uncertainties that need to be accounted for before assimilation into the NWP models. Variational bias correction uses the time series of observation minus background to estimate the observation bias. This technique does not distinguish between the background error, forward operator error, and observation error, so all these errors are summed together and counted as observation error. We identify some sources of observation errors (e.g., antenna emissivity, non-linearity in the calibration, and antenna pattern) and show the limitations of variational bias correction in estimating these errors.
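
    The limitation described above can be illustrated with a toy adaptive bias estimator driven by observation-minus-background (O-B) departures: any systematic background or forward-operator error contained in O-B is absorbed into the estimated "observation" bias. The update rule and all numbers are illustrative assumptions, not the operational VarBC scheme.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy VarBC-like update: estimate the radiance bias as a slowly adapting
    # mean of O-B departures.  Because O-B also contains the background bias,
    # that model error is misattributed to the observations.
    true_obs_bias = 0.8            # K, assumed instrument bias
    background_bias = 0.3          # K, assumed systematic model error
    n_cycles = 200

    bias_hat = 0.0
    gamma = 0.05                   # adaptation weight per assimilation cycle
    for _ in range(n_cycles):
        o_minus_b = true_obs_bias + background_bias + 0.5 * rng.standard_normal()
        bias_hat += gamma * (o_minus_b - bias_hat)

    print(f"estimated 'observation' bias: {bias_hat:.2f} K "
          f"(true observation bias {true_obs_bias} K)")
    ```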

  8. A method for reducing the largest relative errors in Monte Carlo iterated-fission-source calculations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunter, J. L.; Sutton, T. M.

    2013-07-01

    In Monte Carlo iterated-fission-source calculations relative uncertainties on local tallies tend to be larger in lower-power regions and smaller in higher-power regions. Reducing the largest uncertainties to an acceptable level simply by running a larger number of neutron histories is often prohibitively expensive. The uniform fission site method has been developed to yield a more spatially-uniform distribution of relative uncertainties. This is accomplished by biasing the density of fission neutron source sites while not biasing the solution. The method is integrated into the source iteration process, and does not require any auxiliary forward or adjoint calculations. For a given amount of computational effort, the use of the method results in a reduction of the largest uncertainties relative to the standard algorithm. Two variants of the method have been implemented and tested. Both have been shown to be effective. (authors)
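
    A minimal sketch of the weighting idea summarized above: fission source sites are sampled with a biased (here uniform) spatial density, and each site carries a compensating weight so that the expected fission source, and hence the solution, remains unbiased while low-power regions receive many more sample sites. The region structure and numbers are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical fission source fractions per region (sums to 1): the
    # "power shape" an unbiased Monte Carlo run would reproduce.
    power = np.array([0.55, 0.30, 0.10, 0.05])
    n_sites = 100_000

    # Biased sampling: pick source sites uniformly across regions, then give
    # each site a weight power/site_probability so that the expected weight
    # per region (and hence the solution) is unchanged.
    site_prob = np.full_like(power, 1.0 / power.size)
    region = rng.choice(power.size, size=n_sites, p=site_prob)
    weight = power[region] / site_prob[region]

    est = np.bincount(region, weights=weight, minlength=power.size) / n_sites
    print("recovered source fractions:", np.round(est, 3))
    print("sites per region:", np.bincount(region, minlength=power.size))
    ```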

  9. Altimeter error sources at the 10-cm performance level

    NASA Technical Reports Server (NTRS)

    Martin, C. F.

    1977-01-01

    Error sources affecting the calibration and operational use of a 10 cm altimeter are examined to determine the magnitudes of current errors and the investigations necessary to reduce them to acceptable bounds. Errors considered include those affecting operational data pre-processing, and those affecting altitude bias determination, with error budgets developed for both. The most significant error sources affecting pre-processing are bias calibration, propagation corrections for the ionosphere, and measurement noise. No ionospheric models are currently validated at the required 10-25% accuracy level. The optimum smoothing to reduce the effects of measurement noise is investigated and found to be on the order of one second, based on the TASC model of geoid undulations. The 10 cm calibrations are found to be feasible only through the use of altimeter passes at very high elevation over a tracking station that tracks very close to the time of the altimeter pass, such as a high-elevation pass across the island of Bermuda. By far the largest error source, based on the current state-of-the-art, is the location of the island tracking station relative to mean sea level in the surrounding ocean areas.

  10. Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry

    NASA Technical Reports Server (NTRS)

    Brown, Denise L.; Bunoz, Jean-Philippe; Gay, Robert

    2012-01-01

    The Exploration Flight Test 1 (EFT-1) mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on on-board altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data are not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogues and main parachutes. Therefore, it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. The error sources for the barometric altimeters are not independent, and many error sources result in bias in a specific direction. Therefore, conventional error budget methods could not be applied. Instead, high-fidelity Monte Carlo simulation was performed and error bounds were determined based on the results of this analysis. Aerodynamic errors were the largest single contributor to the error budget for the barometric altimeters. The large errors drove a change to the altitude trigger setpoint for FBC jettison deployment.
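
    The percentile-bound approach described above, used in place of a conventional root-sum-square budget because the contributors are correlated and one-sided, can be sketched as follows. The pressure-error model, its magnitudes, and the standard-atmosphere conversion are illustrative assumptions, not the EFT-1 analysis.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def pressure_to_altitude(p_pa, p0=101_325.0):
        """Standard-atmosphere barometric altitude (m) from static pressure (Pa)."""
        return 44_330.0 * (1.0 - (p_pa / p0) ** (1.0 / 5.255))

    # Hypothetical, correlated error model: a one-sided aerodynamic (local-flow)
    # bias plus a smaller zero-mean sensor error.
    n = 50_000
    p_true = 90_000.0                                  # Pa near the trigger altitude
    aero_bias = rng.uniform(0.0, 600.0, n)             # one-sided aero error, Pa
    sensor_noise = rng.normal(0.0, 100.0, n)           # zero-mean sensor error, Pa
    p_measured = p_true + aero_bias + sensor_noise

    alt_error = pressure_to_altitude(p_measured) - pressure_to_altitude(p_true)

    # Percentile-based bounds from the Monte Carlo results, rather than an RSS
    # budget that would assume independent, zero-mean contributors.
    lo, hi = np.percentile(alt_error, [0.135, 99.865])  # ~3-sigma coverage
    print(f"altitude error bounds: [{lo:.1f}, {hi:.1f}] m")
    ```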

  11. Use of Standalone GPS for Approach with Vertical Guidance.

    DOT National Transportation Integrated Search

    2001-01-22

    The accuracy of GPS has improved dramatically over the past year with the removal of Selective Availability. The largest error source now is the ionosphere, which can be removed in the future when the additional civil frequencies become available. Pre...

  12. Enhanced orbit determination filter sensitivity analysis: Error budget development

    NASA Technical Reports Server (NTRS)

    Estefan, J. A.; Burkhart, P. D.

    1994-01-01

    An error budget analysis is presented which quantifies the effects of different error sources in the orbit determination process when the enhanced orbit determination filter, recently developed, is used to reduce radio metric data. The enhanced filter strategy differs from more traditional filtering methods in that nearly all of the principal ground system calibration errors affecting the data are represented as filter parameters. Error budget computations were performed for a Mars Observer interplanetary cruise scenario for cases in which only X-band (8.4-GHz) Doppler data were used to determine the spacecraft's orbit, X-band ranging data were used exclusively, and a combined set in which the ranging data were used in addition to the Doppler data. In all three cases, the filter model was assumed to be a correct representation of the physical world. Random nongravitational accelerations were found to be the largest source of error contributing to the individual error budgets. Other significant contributors, depending on the data strategy used, were solar-radiation pressure coefficient uncertainty, random earth-orientation calibration errors, and Deep Space Network (DSN) station location uncertainty.

  13. Evaluation of the accuracy of GPS as a method of locating traffic collisions.

    DOT National Transportation Integrated Search

    2004-06-01

    The objectives of this study were to determine the accuracy of GPS units as a traffic crash location tool, evaluate the accuracy of the location data obtained using the GPS units, and determine the largest sources of any errors found. The analysis s...

  14. Error of semiclassical eigenvalues in the semiclassical limit - an asymptotic analysis of the Sinai billiard

    NASA Astrophysics Data System (ADS)

    Dahlqvist, Per

    1999-10-01

    We estimate the error in the semiclassical trace formula for the Sinai billiard under the assumption that the largest source of error is due to penumbra diffraction: namely, diffraction effects for trajectories passing within a distance R·O((kR)^(-2/3)) of the disc and trajectories being scattered in very forward directions. Here k is the momentum and R the radius of the scatterer. The semiclassical error is estimated by perturbing the Berry-Keating formula. The analysis necessitates an asymptotic analysis of very long periodic orbits. This is obtained within an approximation originally due to Baladi, Eckmann and Ruelle. We find that the average error, for sufficiently large values of kR, will exceed the mean level spacing.

  15. Closing the Seasonal Ocean Surface Temperature Balance in the Eastern Tropical Oceans from Remote Sensing and Model Reanalyses

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent; Clayson, C. A.

    2012-01-01

    Residual forcing necessary to close the mixed layer temperature balance (MLTB) on seasonal time scales is largest in regions of strongest surface heat flux forcing. Identifying the dominant source of error - surface heat flux error, mixed layer depth estimation, or ocean dynamical forcing - remains a challenge in the eastern tropical oceans, where ocean processes are very active. Improved sub-surface observations are necessary to better constrain errors. 1. Mixed layer depth evolution is critical to the seasonal evolution of mixed layer temperatures. It determines the inertia of the mixed layer and scales the sensitivity of the MLTB to errors in surface heat flux and ocean dynamical forcing. This role also affects the timing of errors in SST prediction. 2. Errors in the MLTB are larger than the historical 10 W/m2 target accuracy. In some regions, a less stringent accuracy can be tolerated if the goal is to resolve the seasonal SST cycle.
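
    For context, a commonly used form of the mixed layer temperature balance (MLTB) referenced above is written below; the notation is ours, not the authors', and the residual term R absorbs unresolved processes.

    ```latex
    \frac{\partial T_m}{\partial t}
      = \underbrace{\frac{Q_{\mathrm{net}}}{\rho\, c_p\, h}}_{\text{surface heat flux}}
      \;-\; \underbrace{\frac{w_e\,\Delta T}{h}}_{\text{entrainment}}
      \;-\; \underbrace{\mathbf{u}\cdot\nabla T_m}_{\text{advection}}
      \;+\; R
    ```

    Here T_m is the mixed layer temperature, h the mixed layer depth, Q_net the net surface heat flux, w_e the entrainment velocity and ΔT the temperature jump at the mixed layer base. The 1/h factors are why mixed layer depth, as noted in point 1, scales the sensitivity of the balance to surface heat flux errors.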

  16. Higher-order ionospheric error at Arecibo, Millstone, and Jicamarca

    NASA Astrophysics Data System (ADS)

    Matteo, N. A.; Morton, Y. T.

    2010-12-01

    The ionosphere is a dominant source of Global Positioning System receiver range measurement error. Although dual-frequency receivers can eliminate the first-order ionospheric error, most second- and third-order errors remain in the range measurements. Higher-order ionospheric error is a function of both electron density distribution and the magnetic field vector along the GPS signal propagation path. This paper expands previous efforts by combining incoherent scatter radar (ISR) electron density measurements, the International Reference Ionosphere model, exponential decay extensions of electron densities, the International Geomagnetic Reference Field, and total electron content maps to compute higher-order error at ISRs in Arecibo, Puerto Rico; Jicamarca, Peru; and Millstone Hill, Massachusetts. Diurnal patterns, dependency on signal direction, seasonal variation, and geomagnetic activity dependency are analyzed. Higher-order error is largest at Arecibo with code phase maxima circa 7 cm for low-elevation southern signals. The maximum variation of the error over all angles of arrival is circa 8 cm.

  17. What Air Quality Models Tell Us About Sources and Sinks of Atmospheric Aldehydes

    NASA Astrophysics Data System (ADS)

    Luecken, D.; Hutzell, W. T.; Phillips, S.

    2010-12-01

    Atmospheric aldehydes play important roles in several aspects of air quality: they are critical radical sources that drive ozone formation, they are hazardous air pollutants that are national drivers for cancer risk, they participate in aqueous chemistry and potentially aerosol formation, and are key species for evaluating the accuracy of isoprene emissions. For these reasons, it is important to accurately understand their sources and sinks, and the sensitivity of their concentrations to emission controls. While both formaldehyde and acetaldehyde have been included in air quality modeling for many years, current, state-of-the-science chemical mechanisms have difficulty reproducing measured values of aldehydes, which calls into question the robustness of ozone, HAPs and aerosol predictions. In the past, we have attributed discrepancies to measurement errors, inventory errors, or the focus on high-NOx urban regimes. Despite improvements in all of these areas, the measurements still diverge from model predictions, with formaldehyde often underpredicted by 50% and acetaldehyde showing a large degree of scatter - from 20% overprediction to 50% underprediction. To better examine the sources of aldehydes, we implemented the new SAPRC07T mechanism in the Community Multi-Scale Air Quality (CMAQ) model. This mechanism incorporates current recommendations for kinetic data and has the most detailed representation of product formation under a wide variety of conditions of any mechanism used in regional air quality models. We use model simulations to pinpoint where and when aldehyde concentrations tend to deviate from measurements. We demonstrate the role of secondary production versus primary emissions in aldehyde concentrations and find that secondary sources produce the largest deviations from measurements. We identify which VOCs are most responsible for aldehyde secondary production in the areas of the U.S. where the largest health effects are seen, and discuss how this affects consideration of control strategies.

  18. Parameter Estimation for GRACE-FO Geometric Ranging Errors

    NASA Astrophysics Data System (ADS)

    Wegener, H.; Mueller, V.; Darbeheshti, N.; Naeimi, M.; Heinzel, G.

    2017-12-01

    Onboard GRACE-FO, the novel Laser Ranging Instrument (LRI) serves as a technology demonstrator, but it is a fully functional instrument to provide an additional high-precision measurement of the primary mission observable: the biased range between the two spacecraft. Its two expected largest error sources are laser frequency noise and tilt-to-length (TTL) coupling. While not much can be done about laser frequency noise, the mechanics of the TTL error are widely understood. They depend, however, on unknown parameters. In order to improve the quality of the ranging data, it is hence essential to accurately estimate these parameters and remove the resulting TTL error from the data. Means to do so will be discussed. In particular, the possibility of using calibration maneuvers, the utility of the attitude information provided by the LRI via Differential Wavefront Sensing (DWS), and the benefit from combining ranging data from LRI with ranging data from the established microwave ranging, will be mentioned.
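
    A minimal sketch of how constant TTL coupling coefficients could be estimated and removed by regressing ranging residuals on the DWS attitude angles. The purely linear coupling model, the coefficient values and the noise levels are illustrative assumptions, not the GRACE-FO processing.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)

    # Hypothetical data: DWS pitch/yaw angles (rad) and ranging residuals (m)
    # generated with "true" tilt-to-length coupling coefficients (m/rad).
    n = 2000
    pitch = 1e-4 * rng.standard_normal(n)
    yaw = 1e-4 * rng.standard_normal(n)
    true_coeff = np.array([2.0e-4, -1.5e-4])           # assumed, for illustration
    residual = true_coeff[0] * pitch + true_coeff[1] * yaw \
               + 1e-9 * rng.standard_normal(n)          # other ranging noise

    # Linear least-squares estimate of the coupling parameters, then removal
    # of the estimated TTL contribution from the ranging data.
    A = np.column_stack([pitch, yaw])
    coeff_hat, *_ = np.linalg.lstsq(A, residual, rcond=None)
    cleaned = residual - A @ coeff_hat

    print("estimated coupling (m/rad):", coeff_hat)
    print("residual RMS before/after removal:", residual.std(), cleaned.std())
    ```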

  19. Diagnostic Air Quality Model Evaluation of Source-Specific ...

    EPA Pesticide Factsheets

    Ambient measurements of 78 source-specific tracers of primary and secondary carbonaceous fine particulate matter collected at four midwestern United States locations over a full year (March 2004–February 2005) provided an unprecedented opportunity to diagnostically evaluate the results of a numerical air quality model. Previous analyses of these measurements demonstrated excellent mass closure for the variety of contributing sources. In this study, a carbon-apportionment version of the Community Multiscale Air Quality (CMAQ) model was used to track primary organic and elemental carbon emissions from 15 independent sources such as mobile sources and biomass burning in addition to four precursor-specific classes of secondary organic aerosol (SOA) originating from isoprene, terpenes, aromatics, and sesquiterpenes. Conversion of the source-resolved model output into organic tracer concentrations yielded a total of 2416 data pairs for comparison with observations. While emission source contributions to the total model bias varied by season and measurement location, the largest absolute bias of −0.55 μgC/m3 was attributed to insufficient isoprene SOA in the summertime CMAQ simulation. Biomass combustion was responsible for the second largest summertime model bias (−0.46 μgC/m3 on average). Several instances of compensating errors were also evident; model underpredictions in some sectors were masked by overpredictions in others.

  20. Diagnostic air quality model evaluation of source-specific primary and secondary fine particulate carbon.

    PubMed

    Napelenok, Sergey L; Simon, Heather; Bhave, Prakash V; Pye, Havala O T; Pouliot, George A; Sheesley, Rebecca J; Schauer, James J

    2014-01-01

    Ambient measurements of 78 source-specific tracers of primary and secondary carbonaceous fine particulate matter collected at four midwestern United States locations over a full year (March 2004-February 2005) provided an unprecedented opportunity to diagnostically evaluate the results of a numerical air quality model. Previous analyses of these measurements demonstrated excellent mass closure for the variety of contributing sources. In this study, a carbon-apportionment version of the Community Multiscale Air Quality (CMAQ) model was used to track primary organic and elemental carbon emissions from 15 independent sources such as mobile sources and biomass burning in addition to four precursor-specific classes of secondary organic aerosol (SOA) originating from isoprene, terpenes, aromatics, and sesquiterpenes. Conversion of the source-resolved model output into organic tracer concentrations yielded a total of 2416 data pairs for comparison with observations. While emission source contributions to the total model bias varied by season and measurement location, the largest absolute bias of -0.55 μgC/m(3) was attributed to insufficient isoprene SOA in the summertime CMAQ simulation. Biomass combustion was responsible for the second largest summertime model bias (-0.46 μgC/m(3) on average). Several instances of compensating errors were also evident; model underpredictions in some sectors were masked by overpredictions in others.

  1. Assessing uncertainty in high-resolution spatial climate data across the US Northeast.

    PubMed

    Bishop, Daniel A; Beier, Colin M

    2013-01-01

    Local and regional-scale knowledge of climate change is needed to model ecosystem responses, assess vulnerabilities and devise effective adaptation strategies. High-resolution gridded historical climate (GHC) products address this need, but come with multiple sources of uncertainty that are typically not well understood by data users. To better understand this uncertainty in a region with a complex climatology, we conducted a ground-truthing analysis of two 4 km GHC temperature products (PRISM and NRCC) for the US Northeast using 51 Cooperative Network (COOP) weather stations utilized by both GHC products. We estimated GHC prediction error for monthly temperature means and trends (1980-2009) across the US Northeast and evaluated any landscape effects (e.g., elevation, distance from coast) on those prediction errors. Results indicated that station-based prediction errors for the two GHC products were similar in magnitude, but on average, the NRCC product predicted cooler than observed temperature means and trends, while PRISM was cooler for means and warmer for trends. We found no evidence for systematic sources of uncertainty across the US Northeast, although errors were largest at high elevations. Errors in the coarse-scale (4 km) digital elevation models used by each product were correlated with temperature prediction errors, more so for NRCC than PRISM. In summary, uncertainty in spatial climate data has many sources and we recommend that data users develop an understanding of uncertainty at the appropriate scales for their purposes. To this end, we demonstrate a simple method for utilizing weather stations to assess local GHC uncertainty and inform decisions among alternative GHC products.
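
    The station-based ground-truthing described above reduces, in essence, to differencing the gridded product against the COOP observations and checking the errors against landscape variables such as elevation. A minimal sketch with made-up numbers:

    ```python
    import numpy as np

    # Hypothetical station table: observed COOP monthly mean (°C), the gridded
    # product's prediction at the station (°C), and station elevation (m).
    obs     = np.array([ 5.1,  3.8, -1.2,  7.4,  0.6])
    gridded = np.array([ 4.7,  3.9, -2.0,  7.1, -0.1])
    elev_m  = np.array([120., 300., 950., 210., 1300.])

    # Station-based prediction error (gridded minus observed) and a simple check
    # for a landscape effect: correlation of error magnitude with elevation.
    err = gridded - obs
    corr = np.corrcoef(np.abs(err), elev_m)[0, 1]

    print("mean bias (°C):", err.mean())
    print("RMSE (°C):", np.sqrt(np.mean(err**2)))
    print("correlation of |error| with elevation:", round(corr, 2))
    ```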

  2. Numerical Procedure to Forecast the Tsunami Parameters from a Database of Pre-Simulated Seismic Unit Sources

    NASA Astrophysics Data System (ADS)

    Jiménez, César; Carbonel, Carlos; Rojas, Joel

    2018-04-01

    We have implemented a numerical procedure to forecast the parameters of a tsunami, such as the arrival time of the front of the first wave and the maximum wave height at real and virtual tidal stations along the Peruvian coast. For this purpose, a database of pre-computed synthetic tsunami waveforms (or Green functions) was obtained from numerical simulation of seismic unit sources (dimension: 50 km × 50 km) for subduction zones from southern Chile to northern Mexico. A bathymetry resolution of 30 arc-sec (approximately 927 m) was used. The resulting tsunami waveform is obtained from the superposition of synthetic waveforms corresponding to several seismic unit sources contained within the tsunami source geometry. The numerical procedure was applied to the Chilean tsunami of April 1, 2014. The results show a very good correlation for stations with wave amplitude greater than 1 m: for the Arica tide station an error (between the maximum heights of the observed and simulated waveforms) of 3.5% was obtained, for the Callao station the error was 12%, and the largest error, 53.5%, occurred at Chimbote; however, due to the low amplitude of the Chimbote wave (<1 m), this overestimated error is not important for evacuation purposes. The aim of the present research is tsunami early warning, where speed is required rather than accuracy, so the results should be taken as preliminary.

  3. Numerical Procedure to Forecast the Tsunami Parameters from a Database of Pre-Simulated Seismic Unit Sources

    NASA Astrophysics Data System (ADS)

    Jiménez, César; Carbonel, Carlos; Rojas, Joel

    2017-09-01

    We have implemented a numerical procedure to forecast the parameters of a tsunami, such as the arrival time of the front of the first wave and the maximum wave height at real and virtual tidal stations along the Peruvian coast. For this purpose, a database of pre-computed synthetic tsunami waveforms (or Green functions) was obtained from numerical simulation of seismic unit sources (dimension: 50 km × 50 km) for subduction zones from southern Chile to northern Mexico. A bathymetry resolution of 30 arc-sec (approximately 927 m) was used. The resulting tsunami waveform is obtained from the superposition of synthetic waveforms corresponding to several seismic unit sources contained within the tsunami source geometry. The numerical procedure was applied to the Chilean tsunami of April 1, 2014. The results show a very good correlation for stations with wave amplitude greater than 1 m: for the Arica tide station an error (between the maximum heights of the observed and simulated waveforms) of 3.5% was obtained, for the Callao station the error was 12%, and the largest error, 53.5%, occurred at Chimbote; however, due to the low amplitude of the Chimbote wave (<1 m), this overestimated error is not important for evacuation purposes. The aim of the present research is tsunami early warning, where speed is required rather than accuracy, so the results should be taken as preliminary.
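
    The core of the forecasting procedure in the two records above is a linear superposition of pre-computed unit-source waveforms (Green functions), scaled by the slip assigned to each unit source in the rupture geometry. A minimal sketch with made-up waveforms and weights:

    ```python
    import numpy as np

    # Hypothetical pre-computed Green functions: synthetic tsunami waveforms at
    # one tide station for three 50 km x 50 km unit sources (rows = sources).
    t = np.linspace(0, 4 * 3600, 241)                       # 4 h, 60 s sampling
    green = np.array([np.sin(2 * np.pi * (t - lag) / 1800.0) *
                      np.exp(-((t - lag) / 3600.0) ** 2)
                      for lag in (900.0, 1500.0, 2100.0)])  # made-up waveforms

    # Slip weights of the unit sources contained in the rupture geometry.
    slip = np.array([1.2, 0.8, 0.3])

    # Forecast waveform = linear superposition of the unit-source waveforms.
    eta = slip @ green

    print("maximum wave height (m):", round(float(eta.max()), 2))
    print("time of maximum (s):", float(t[np.argmax(eta)]))
    ```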

  4. A variable acceleration calibration system

    NASA Astrophysics Data System (ADS)

    Johnson, Thomas H.

    2011-12-01

    A variable acceleration calibration system that applies loads using gravitational and centripetal acceleration serves as an alternative, efficient and cost-effective method for calibrating internal wind tunnel force balances. Two proof-of-concept variable acceleration calibration systems are designed, fabricated and tested. The NASA UT-36 force balance served as the test balance for the calibration experiments. The variable acceleration calibration systems are shown to be capable of performing three-component calibration experiments with an approximate applied load error on the order of 1% of the full scale calibration loads. Sources of error are identified using experimental design methods and a propagation of uncertainty analysis. Three types of uncertainty are identified for the systems and are attributed to prediction error, calibration error and pure error. Angular velocity uncertainty is shown to be the largest identified source of prediction error. The calibration uncertainties using a production variable acceleration based system are shown to be potentially equivalent to current methods. The production quality system can be realized using lighter materials and more precise instrumentation. Further research is needed to account for balance deflection, forcing effects due to vibration, and large tare loads. A gyroscope measurement technique is shown to be capable of resolving the balance deflection angle calculation. Long term research objectives include a demonstration of a six degree of freedom calibration, and a large capacity balance calibration.
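
    Why angular velocity dominates the prediction error can be seen from the centripetal load relation F = m·ω²·r: first-order propagation gives ΔF/F = 2·Δω/ω, so the applied load is twice as sensitive to relative errors in ω as it is to errors in mass or radius. A small numerical sketch with assumed values:

    ```python
    # Centripetal calibration load on a mass spinning at radius r:  F = m * w**2 * r
    m, r = 2.0, 0.5                       # kg, m (illustrative values)
    w = 10.0                              # rad/s, nominal angular velocity
    F = m * w**2 * r

    # First-order propagation of an angular-velocity uncertainty dw into the
    # applied load: dF/F = 2 * dw/w.
    dw = 0.05                             # rad/s, assumed measurement uncertainty
    dF = 2.0 * (dw / w) * F

    print(f"applied load: {F:.2f} N, load uncertainty: {dF:.3f} N "
          f"({100 * dF / F:.1f} % of applied load)")
    ```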

  5. Reducing Uncertainty in the American Community Survey through Data-Driven Regionalization

    PubMed Central

    Spielman, Seth E.; Folch, David C.

    2015-01-01

    The American Community Survey (ACS) is the largest survey of US households and is the principal source for neighborhood scale information about the US population and economy. The ACS is used to allocate billions in federal spending and is a critical input to social scientific research in the US. However, estimates from the ACS can be highly unreliable. For example, in over 72% of census tracts, the estimated number of children under 5 in poverty has a margin of error greater than the estimate. Uncertainty of this magnitude complicates the use of social data in policy making, research, and governance. This article presents a heuristic spatial optimization algorithm that is capable of reducing the margins of error in survey data via the creation of new composite geographies, a process called regionalization. Regionalization is a complex combinatorial problem. Here rather than focusing on the technical aspects of regionalization we demonstrate how to use a purpose built open source regionalization algorithm to process survey data in order to reduce the margins of error to a user-specified threshold. PMID:25723176

  6. Reducing uncertainty in the american community survey through data-driven regionalization.

    PubMed

    Spielman, Seth E; Folch, David C

    2015-01-01

    The American Community Survey (ACS) is the largest survey of US households and is the principal source for neighborhood scale information about the US population and economy. The ACS is used to allocate billions in federal spending and is a critical input to social scientific research in the US. However, estimates from the ACS can be highly unreliable. For example, in over 72% of census tracts, the estimated number of children under 5 in poverty has a margin of error greater than the estimate. Uncertainty of this magnitude complicates the use of social data in policy making, research, and governance. This article presents a heuristic spatial optimization algorithm that is capable of reducing the margins of error in survey data via the creation of new composite geographies, a process called regionalization. Regionalization is a complex combinatorial problem. Here rather than focusing on the technical aspects of regionalization we demonstrate how to use a purpose built open source regionalization algorithm to process survey data in order to reduce the margins of error to a user-specified threshold.
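
    A minimal sketch of the regionalization idea in the two records above: merge tracts until each composite region's margin of error falls below a user-specified fraction of its estimate, combining margins of error in quadrature as is usual for sums of ACS estimates. The greedy one-dimensional merging and the counts are illustrative assumptions, not the authors' spatial optimization algorithm.

    ```python
    import numpy as np

    def merge_until_reliable(estimates, moes, max_rel_moe=0.40):
        """Greedy regionalization sketch on a simple 1-D chain of tracts: merge a
        region with its neighbor until its margin of error (MOE) is below a
        user-specified fraction of its estimate.

        Combined estimate = sum of estimates; combined MOE = root-sum-of-squares
        of the tract MOEs (the usual approximation for sums of ACS estimates).
        """
        regions = [([i], estimates[i], moes[i]) for i in range(len(estimates))]
        merged = True
        while merged:
            merged = False
            for i, (ids, est, moe) in enumerate(regions):
                if est > 0 and moe / est <= max_rel_moe:
                    continue                           # region already reliable
                if i + 1 < len(regions):               # merge with the next region
                    ids2, est2, moe2 = regions.pop(i + 1)
                    regions[i] = (ids + ids2, est + est2,
                                  float(np.hypot(moe, moe2)))
                    merged = True
                    break
        return regions

    # Example: tract-level counts of children under 5 in poverty, with MOEs
    # that sometimes exceed the estimate itself.
    print(merge_until_reliable([12, 8, 95, 5, 40], [15, 11, 30, 9, 18]))
    ```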

  7. Results from a Sting Whip Correction Verification Test at the Langley 16-Foot Transonic Tunnel

    NASA Technical Reports Server (NTRS)

    Crawford, B. L.; Finley, T. D.

    2002-01-01

    In recent years, great strides have been made toward correcting the largest error in inertial Angle of Attack (AoA) measurements in wind tunnel models. This error source is commonly referred to as 'sting whip' and is caused by aerodynamically induced forces imparting dynamics on sting-mounted models. These aerodynamic forces cause the model to whip through an arc section in the pitch and/or yaw planes, thus generating a centrifugal acceleration and creating a bias error in the AoA measurement. It has been shown that, under certain conditions, this induced AoA error can be greater than one third of a degree. An error of this magnitude far exceeds the target AoA goal of 0.01 deg established at NASA Langley Research Center (LaRC) and elsewhere. New sting whip correction techniques being developed at LaRC are able to measure and reduce this sting whip error by an order of magnitude. With this increase of accuracy, the 0.01 deg AoA target is achievable under all but the most severe conditions.
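
    The mechanism described above can be quantified with a rough back-of-the-envelope model: an inertial AoA sensor reads the apparent gravity vector, and the centripetal acceleration of the whipping model tilts that vector by roughly a_c/g radians. All values below are assumptions for illustration, not LaRC test conditions.

    ```python
    import numpy as np

    # Sting whip: the model sweeps an arc at frequency f with tip amplitude amp.
    # The centripetal acceleration points along the sting regardless of sweep
    # direction, so it does not average to zero and biases the AoA reading.
    g = 9.80665
    r = 0.8                     # m, assumed arc radius (roughly the sting length)
    f = 12.0                    # Hz, assumed whip frequency
    amp = 0.002                 # m, assumed whip amplitude at the balance

    w = 2.0 * np.pi * f
    a_c = (amp * w) ** 2 / r    # peak tangential velocity squared over arc radius
    bias_deg = np.degrees(a_c / g)

    print(f"centripetal acceleration: {a_c:.3f} m/s^2")
    print(f"approximate AoA bias: {bias_deg:.3f} deg")
    ```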

  8. Work Program. Fiscal Year 1969 for The Department of the Army. Research and Development in Training, Motivation, and Leadership

    DTIC Science & Technology

    1969-01-01

    job requiremeits in these skills , and (2) developing technique.; for improving literacy skills through training. In addition, manpower pools for a given...job requirements in these skills , and (2) developing techniques for improving literacy skills through training. In addition, manpower pools for a...visual and psychomotor skills -for accurate .nd’efficient operation, and performance variations among gunners are the largest source of error in system

  9. Investigation of Primary Mirror Segment's Residual Errors for the Thirty Meter Telescope

    NASA Technical Reports Server (NTRS)

    Seo, Byoung-Joon; Nissly, Carl; Angeli, George; MacMynowski, Doug; Sigrist, Norbert; Troy, Mitchell; Williams, Eric

    2009-01-01

    The primary mirror segment aberrations after shape corrections with warping harness have been identified as the single largest error term in the Thirty Meter Telescope (TMT) image quality error budget. In order to better understand the likely errors and how they will impact the telescope performance we have performed detailed simulations. We first generated unwarped primary mirror segment surface shapes that met TMT specifications. Then we used the predicted warping harness influence functions and a Shack-Hartmann wavefront sensor model to determine estimates for the 492 corrected segment surfaces that make up the TMT primary mirror. Surface and control parameters, as well as the number of subapertures were varied to explore the parameter space. The corrected segment shapes were then passed to an optical TMT model built using the Jet Propulsion Laboratory (JPL) developed Modeling and Analysis for Controlled Optical Systems (MACOS) ray-trace simulator. The generated exit pupil wavefront error maps provided RMS wavefront error and image-plane characteristics like the Normalized Point Source Sensitivity (PSSN). The results have been used to optimize the segment shape correction and wavefront sensor designs as well as provide input to the TMT systems engineering error budgets.

  10. Colorimetry for CRT displays.

    PubMed

    Golz, Jürgen; MacLeod, Donald I A

    2003-05-01

    We analyze the sources of error in specifying color in CRT displays. These include errors inherent in the use of the color matching functions of the CIE 1931 standard observer when only colorimetric, not radiometric, calibrations are available. We provide transformation coefficients that prove to correct the deficiencies of this observer very well. We consider four different candidate sets of cone sensitivities. Some of these differ substantially; variation among candidate cone sensitivities exceeds the variation among phosphors. Finally, the effects of the recognized forms of observer variation on the visual responses (cone excitations or cone contrasts) generated by CRT stimuli are investigated and quantitatively specified. Cone pigment polymorphism gives rise to variation of a few per cent in relative excitation by the different phosphors--a variation larger than the errors ensuing from the adoption of the CIE standard observer, though smaller than the differences between some candidate cone sensitivities. Macular pigmentation has a larger influence, affecting mainly responses to the blue phosphor. The estimated combined effect of all sources of observer variation is comparable in magnitude with the largest differences between competing cone sensitivity estimates but is not enough to disrupt very seriously the relation between the L and M cone weights and the isoluminance settings of individual observers. It is also comparable with typical instrumental colorimetric errors, but we discuss these only briefly.
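
    Computationally, specifying cone excitations for a CRT stimulus comes down to a 3×3 matrix mapping linearized phosphor drives to L, M and S excitations; different candidate cone sensitivities and the forms of observer variation discussed above change the matrix entries. The matrix below is a placeholder, not a calibrated one.

    ```python
    import numpy as np

    # Hypothetical 3x3 matrix of cone excitations (L, M, S rows) produced by unit
    # drive of each CRT phosphor (R, G, B columns).  In practice each entry is
    # the integral of a phosphor emission spectrum against a cone sensitivity
    # curve; the numbers below are placeholders, not measured values.
    phosphor_to_cone = np.array([
        [0.60, 0.35, 0.05],   # L
        [0.30, 0.60, 0.10],   # M
        [0.02, 0.10, 0.88],   # S
    ])

    rgb = np.array([0.5, 0.4, 0.8])           # linearized gun values
    cones = phosphor_to_cone @ rgb            # cone excitations for this stimulus

    # Cone contrasts relative to a background stimulus.
    rgb_bg = np.array([0.5, 0.5, 0.5])
    cones_bg = phosphor_to_cone @ rgb_bg
    contrast = (cones - cones_bg) / cones_bg

    print("cone excitations:", np.round(cones, 3))
    print("cone contrasts:", np.round(contrast, 3))
    ```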

  11. Integrated Modeling Activities for the James Webb Space Telescope: Optical Jitter Analysis

    NASA Technical Reports Server (NTRS)

    Hyde, T. Tupper; Ha, Kong Q.; Johnston, John D.; Howard, Joseph M.; Mosier, Gary E.

    2004-01-01

    This is a continuation of a series of papers on the integrated modeling activities for the James Webb Space Telescope (JWST). Starting with the linear optical model discussed in part one, and using the optical sensitivities developed in part two, we now assess the optical image motion and wavefront errors from the structural dynamics. This is often referred to as "jitter" analysis. The optical model is combined with the structural model and the control models to create a linear structural/optical/control model. The largest jitter is due to spacecraft reaction wheel assembly disturbances, which are harmonic in nature and will excite spacecraft and telescope structural modes. The structural/optic response causes image quality degradation due to image motion (centroid error) as well as dynamic wavefront error. Jitter analysis results are used to predict imaging performance, improve the structural design, and evaluate the operational impact of the disturbance sources.

  12. Correction of electrode modelling errors in multi-frequency EIT imaging.

    PubMed

    Jehl, Markus; Holder, David

    2016-06-01

    The differentiation of haemorrhagic from ischaemic stroke using electrical impedance tomography (EIT) requires measurements at multiple frequencies, since the general lack of healthy measurements on the same patient excludes time-difference imaging methods. It has previously been shown that the inaccurate modelling of electrodes constitutes one of the largest sources of image artefacts in non-linear multi-frequency EIT applications. To address this issue, we augmented the conductivity Jacobian matrix with a Jacobian matrix with respect to electrode movement. Using this new algorithm, simulated ischaemic and haemorrhagic strokes in a realistic head model were reconstructed for varying degrees of electrode position errors. The simultaneous recovery of conductivity spectra and electrode positions removed most artefacts caused by inaccurately modelled electrodes. Reconstructions were stable for electrode position errors of up to 1.5 mm standard deviation along both surface dimensions. We conclude that this method can be used for electrode model correction in multi-frequency EIT.
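
    A minimal sketch of the augmented-Jacobian idea described above: stack the conductivity and electrode-movement Jacobians and solve a single regularized update for both sets of unknowns. The matrices here are random placeholders standing in for the output of an EIT forward model.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)

    # Toy problem sizes: m boundary-voltage measurements, n conductivity
    # parameters, p electrode-position parameters.
    m, n, p = 200, 50, 16
    J_sigma = rng.standard_normal((m, n))       # d(voltage)/d(conductivity)
    J_elec = rng.standard_normal((m, p))        # d(voltage)/d(electrode position)
    dv = rng.standard_normal(m)                 # measured voltage change

    # Augment the conductivity Jacobian with the electrode-movement Jacobian and
    # take one regularized Gauss-Newton step for both sets of unknowns at once.
    J = np.hstack([J_sigma, J_elec])
    lam = 1e-2                                  # Tikhonov regularization weight
    update = np.linalg.solve(J.T @ J + lam * np.eye(n + p), J.T @ dv)

    d_sigma, d_elec = update[:n], update[n:]
    print("conductivity update norm:", np.linalg.norm(d_sigma).round(3))
    print("electrode-position update norm:", np.linalg.norm(d_elec).round(3))
    ```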

  13. Numerical relativity waveform surrogate model for generically precessing binary black hole mergers

    NASA Astrophysics Data System (ADS)

    Blackman, Jonathan; Field, Scott E.; Scheel, Mark A.; Galley, Chad R.; Ott, Christian D.; Boyle, Michael; Kidder, Lawrence E.; Pfeiffer, Harald P.; Szilágyi, Béla

    2017-07-01

    A generic, noneccentric binary black hole (BBH) system emits gravitational waves (GWs) that are completely described by seven intrinsic parameters: the black hole spin vectors and the ratio of their masses. Simulating a BBH coalescence by solving Einstein's equations numerically is computationally expensive, requiring days to months of computing resources for a single set of parameter values. Since theoretical predictions of the GWs are often needed for many different source parameters, a fast and accurate model is essential. We present the first surrogate model for GWs from the coalescence of BBHs including all seven dimensions of the intrinsic noneccentric parameter space. The surrogate model, which we call NRSur7dq2, is built from the results of 744 numerical relativity simulations. NRSur7dq2 covers spin magnitudes up to 0.8 and mass ratios up to 2, includes all ℓ≤4 modes, begins about 20 orbits before merger, and can be evaluated in ~50 ms. We find the largest NRSur7dq2 errors to be comparable to the largest errors in the numerical relativity simulations, and more than an order of magnitude smaller than the errors of other waveform models. Our model, and more broadly the methods developed here, will enable studies that were not previously possible when using highly accurate waveforms, such as parameter inference and tests of general relativity with GW observations.

  14. Assessment of Ionospheric Gradient Impacts on Ground-Based Augmentation System (GBAS) Data in Guangdong Province, China

    PubMed Central

    Wang, Zhipeng; Wang, Shujing; Zhu, Yanbo; Xin, Pumin

    2017-01-01

    Ionospheric delay is one of the largest and most variable sources of error for Ground-Based Augmentation System (GBAS) users because ionospheric activity is unpredictable. Under normal conditions, GBAS eliminates ionospheric delays, but during extreme ionospheric storms, GBAS users and GBAS ground facilities may experience different ionospheric delays, leading to considerable differential errors and threatening the safety of users. Therefore, ionospheric monitoring and assessment are important parts of GBAS integrity monitoring. To study the effects of the ionosphere on the GBAS of Guangdong Province, China, GPS data collected from 65 reference stations were processed using the improved “Simple Truth” algorithm. In addition, the ionospheric characteristics of Guangdong Province were calculated and an ionospheric threat model was established. Finally, we evaluated the influence of the standard deviation and maximum ionospheric gradient on GBAS. The results show that, under normal ionospheric conditions, the vertical protection level of GBAS was increased by 0.8 m for the largest overbound σvig (sigma of vertical ionospheric gradient), and in the case of the maximum ionospheric gradient conditions, the differential correction error may reach 5 m. From an airworthiness perspective, when the satellite is at a low elevation, this interference does not cause airworthiness risks, but when the satellite is at a high elevation, this interference can cause airworthiness risks. PMID:29019953

  15. Assessment of Ionospheric Gradient Impacts on Ground-Based Augmentation System (GBAS) Data in Guangdong Province, China.

    PubMed

    Wang, Zhipeng; Wang, Shujing; Zhu, Yanbo; Xin, Pumin

    2017-10-11

    Ionospheric delay is one of the largest and most variable sources of error for Ground-Based Augmentation System (GBAS) users because ionospheric activity is unpredictable. Under normal conditions, GBAS eliminates ionospheric delays, but during extreme ionospheric storms, GBAS users and GBAS ground facilities may experience different ionospheric delays, leading to considerable differential errors and threatening the safety of users. Therefore, ionospheric monitoring and assessment are important parts of GBAS integrity monitoring. To study the effects of the ionosphere on the GBAS of Guangdong Province, China, GPS data collected from 65 reference stations were processed using the improved "Simple Truth" algorithm. In addition, the ionospheric characteristics of Guangdong Province were calculated and an ionospheric threat model was established. Finally, we evaluated the influence of the standard deviation and maximum ionospheric gradient on GBAS. The results show that, under normal ionospheric conditions, the vertical protection level of GBAS was increased by 0.8 m for the largest overbound σvig (sigma of vertical ionospheric gradient), and in the case of the maximum ionospheric gradient conditions, the differential correction error may reach 5 m. From an airworthiness perspective, when the satellite is at a low elevation, this interference does not cause airworthiness risks, but when the satellite is at a high elevation, this interference can cause airworthiness risks.

  16. Transition year labeling error characterization study. [Kansas, Minnesota, Montana, North Dakota, South Dakota, and Oklahoma

    NASA Technical Reports Server (NTRS)

    Clinton, N. J. (Principal Investigator)

    1980-01-01

    Labeling errors made in the Large Area Crop Inventory Experiment transition year estimates by Earth Observation Division image analysts are identified and quantified. The analysis was made from a subset of blind sites in six U.S. Great Plains states (Oklahoma, Kansas, Montana, Minnesota, North and South Dakota). The image interpretation was basically well done, resulting in a total omission error rate of 24 percent and a commission error rate of 4 percent. The largest amount of error was caused by factors beyond the control of the analysts, who were following the interpretation procedures. The odd signatures, the largest error cause group, occurred mostly in areas of moisture abnormality. Multicrop labeling was tabulated, showing the distribution of labels for all crops.
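
    For reference, the omission and commission rates quoted above follow from simple counts of labeled and true pixels; the counts below are illustrative and merely chosen to reproduce rates close to the quoted 24 and 4 percent.

    ```python
    # Omission and commission error rates from a simple labeling confusion table.
    labeled_wheat_correct = 380   # wheat pixels labeled wheat (illustrative counts)
    wheat_missed = 120            # wheat pixels labeled as other crops (omission)
    other_labeled_wheat = 16      # non-wheat pixels labeled wheat (commission)

    omission_rate = wheat_missed / (labeled_wheat_correct + wheat_missed)
    commission_rate = other_labeled_wheat / (labeled_wheat_correct + other_labeled_wheat)
    print(f"omission error: {omission_rate:.0%}, commission error: {commission_rate:.0%}")
    ```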

  17. Improved Uncertainty Quantification in Groundwater Flux Estimation Using GRACE

    NASA Astrophysics Data System (ADS)

    Reager, J. T., II; Rao, P.; Famiglietti, J. S.; Turmon, M.

    2015-12-01

    Groundwater change is difficult to monitor over large scales. One of the most successful approaches is the remote sensing of time-variable gravity using NASA Gravity Recovery and Climate Experiment (GRACE) mission data, and successful case studies have created the opportunity to move towards a global groundwater monitoring framework for the world's largest aquifers. To achieve these estimates, several approximations are applied, including those in GRACE processing corrections, the formulation of the formal GRACE errors, destriping and signal recovery, and the numerical model estimation of the snow water, surface water and soil moisture storage states used to isolate a groundwater component. A major weakness in these approaches is inconsistency: different studies have used different sources of primary and ancillary data, and may achieve different results based on alternative choices in these approximations. In this study, we present two cases of groundwater change estimation in California and the Colorado River basin, selected for their good data availability and varied climates. We achieve a robust numerical estimate of post-processing uncertainties resulting from land-surface model structural shortcomings and model resolution errors. Groundwater should vary less than the overlying soil moisture does, as it has a longer memory of past events due to buffering by infiltration and drainage rate limits. We apply a model ensemble approach in a Bayesian framework constrained by the assumption of decreasing signal variability with depth in the soil column. We also discuss time-variable versus time-constant errors, across-scale versus across-model errors, and error spectral content (across scales and across models). More robust uncertainty quantification for GRACE-based groundwater estimates would take all of these issues into account, allowing fairer use in management applications and better integration of GRACE-based measurements with observations from other sources.
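
    The groundwater isolation step described above is, at its core, a storage decomposition: the groundwater anomaly is the GRACE total water storage anomaly minus the modeled soil moisture, snow and surface water components, with component errors combined into the uncertainty. A minimal sketch with illustrative numbers and an assumed independent-error combination; an ensemble of land-surface models would replace the single component estimates and inform the structural uncertainty.

    ```python
    import numpy as np

    # Monthly storage anomalies for one basin, in cm of equivalent water height.
    # GRACE gives total water storage (TWS); a land-surface model supplies soil
    # moisture (SM), snow water (SWE) and surface water (SW).  Values are made up.
    tws = np.array([ 2.1,  0.4, -1.8, -3.2, -4.0])
    sm  = np.array([ 1.0,  0.2, -0.6, -1.1, -1.3])
    swe = np.array([ 0.5,  0.1,  0.0,  0.0,  0.0])
    sw  = np.array([ 0.2,  0.0, -0.1, -0.2, -0.3])

    # Residual groundwater anomaly and a simple uncertainty, combining assumed
    # independent component sigmas in quadrature.
    gw = tws - sm - swe - sw
    sig = np.sqrt(2.0**2 + 1.0**2 + 0.3**2 + 0.5**2)

    print("groundwater anomaly (cm):", gw)
    print("approximate 1-sigma uncertainty (cm):", round(sig, 2))
    ```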

  18. Investigation into the limitations of straightness interferometers using a multisensor-based error separation method

    NASA Astrophysics Data System (ADS)

    Weichert, Christoph; Köchert, Paul; Schötka, Eugen; Flügge, Jens; Manske, Eberhard

    2018-06-01

    The uncertainty of a straightness interferometer is independent of the component used to introduce the divergence angle between the two probing beams, and is limited by three main error sources, which are linked to each other: their resolution, the influence of refractive index gradients and the topography of the straightness reflector. To identify the configuration with minimal uncertainties under laboratory conditions, a fully fibre-coupled heterodyne interferometer was successively equipped with three different wedge prisms, resulting in three different divergence angles (4°, 8° and 20°). To separate the error sources an independent reference with a smaller reproducibility is needed. Therefore, the straightness measurement capability of the Nanometer Comparator, based on a multisensor error separation method, was improved to provide measurements with a reproducibility of 0.2 nm. The comparison results revealed that the influence of the refractive index gradients of air did not increase with interspaces between the probing beams of more than 11.3 mm. Therefore, over a movement range of 220 mm, the lowest uncertainty was achieved with the largest divergence angle. The dominant uncertainty contribution arose from the mirror topography, which was additionally determined with a Fizeau interferometer. The measured topography agreed within  ±1.3 nm with the systematic deviations revealed in the straightness comparison, resulting in an uncertainty contribution of 2.6 nm for the straightness interferometer.

  19. The dissociation energy of N2

    NASA Technical Reports Server (NTRS)

    Almloef, Jan; Deleeuw, Bradley J.; Taylor, Peter R.; Bauschlicher, Charles W., Jr.; Siegbahn, Per

    1989-01-01

    The requirements for very accurate ab initio quantum chemical prediction of dissociation energies are examined using a detailed investigation of the nitrogen molecule. Although agreement with experiment to within 1 kcal/mol is not achieved even with the most elaborate multireference CI (configuration interaction) wave functions and largest basis sets currently feasible, it is possible to obtain agreement to within about 2 kcal/mol, or 1 percent of the dissociation energy. At this level it is necessary to account for core-valence correlation effects and to include up to h-type functions in the basis. The effect of i-type functions, the use of different reference configuration spaces, and basis set superposition error were also investigated. After discussing these results, the remaining sources of error in our best calculations are examined.

  20. Study on the total amount control of atmospheric pollutant based on GIS.

    PubMed

    Wang, Jian-Ping; Guo, Xi-Kun

    2005-08-01

    The aim was to provide effective environmental management for total amount control of atmospheric pollutants. An atmospheric diffusion model of sulfur dioxide on the surface of the earth was established and tested in Shantou of Guangdong Province on the basis of an overall assessment of the regional natural environment, social economic state of development, pollution sources and atmospheric environmental quality. Compared with actual monitoring results in the studied region, simulated values fell within a range of two times the error and were evenly distributed on the two sides of the monitored values. Predicted with the largest-emission model method, the largest emission of sulfur dioxide would be 54,279.792 tons per year in 2010. The mathematical model established and revised on the basis of GIS is more rational and suitable for the regional characteristics of total amount control of air pollutants.

  1. Study of a selection of 10 historical types of dosemeter: variation of the response to Hp(10) with photon energy and geometry of exposure.

    PubMed

    Thierry-Chef, I; Pernicka, F; Marshall, M; Cardis, E; Andreo, P

    2002-01-01

    An international collaborative study of cancer risk among workers in the nuclear industry is under way to estimate directly the cancer risk following protracted low-dose exposure to ionising radiation. An essential aspect of this study is the characterisation and quantification of errors in available dose estimates. One major source of errors is dosemeter response in workplace exposure conditions. Little information is available on energy and geometry response for most of the 124 different dosemeters used historically in participating facilities. Experiments were therefore set up to assess this, using 10 dosemeter types representative of those used over time. Results show that the largest errors were associated with the response of early dosemeters to low-energy photon radiation. Good response was found with modern dosemeters, even at low energy. These results are being used to estimate errors in the response for each dosemeter type used in the participating facilities, so that these can be taken into account in the estimates of cancer risk.

  2. Design, performance, and calculated error of a Faraday cup for absolute beam current measurements of 600-MeV protons

    NASA Technical Reports Server (NTRS)

    Beck, S. M.

    1975-01-01

    A mobile self-contained Faraday cup system for beam current measurements of nominal 600 MeV protons was designed, constructed, and used at the NASA Space Radiation Effects Laboratory. The cup is of reentrant design with a length of 106.7 cm and an outside diameter of 20.32 cm. The inner diameter is 15.24 cm and the base thickness is 30.48 cm. The primary absorber is commercially available lead hermetically sealed in a 0.32-cm-thick copper jacket. Several possible systematic errors in using the cup are evaluated. The largest source of error arises from high-energy electrons that are ejected from the entrance window and enter the cup. The total systematic error is calculated to be -0.83 percent, a decrease from the true current value. From data obtained in calibrating helium-filled ion chambers with the Faraday cup, the mean energy required to produce one ion pair in helium is found to be 30.76 ± 0.95 eV for nominal 600 MeV protons. This value agrees well, within experimental error, with reported values of 29.9 eV and 30.2 eV.

  3. Systematic neutron guide misalignment for an accelerator-driven spallation neutron source

    NASA Astrophysics Data System (ADS)

    Zendler, C.; Bentley, P. M.

    2016-08-01

The European Spallation Source (ESS) is a long pulse spallation neutron source that is currently under construction in Lund, Sweden. A considerable fraction of the 22 planned instruments extend as far as 75-150 m from the source. In such long beam lines, misalignment between neutron guide segments can decrease the neutron transmission significantly. In addition to a random misalignment from installation tolerances, the ground on which ESS is built can be expected to sink with time, and thus shift the neutron guide segments further away from the ideal alignment axis in a systematic way. These systematic errors are correlated to the ground structure, position of buildings and shielding installation. Since the largest deformation is expected close to the target, even short instruments might be noticeably affected. In this study, the effect of this systematic misalignment on short and long ESS beam lines is analyzed, and a possible mitigation by overillumination of subsequent guide sections is investigated.

  4. New approach for point pollution source identification in rivers based on the backward probability method.

    PubMed

    Wang, Jiabiao; Zhao, Jianshi; Lei, Xiaohui; Wang, Hao

    2018-06-13

Pollution risk from the discharge of industrial waste or accidental spills during transportation poses a considerable threat to the security of rivers. The ability to quickly identify the pollution source is extremely important to enable emergency disposal of pollutants. This study proposes a new approach for point source identification of sudden water pollution in rivers, which aims to determine where (source location), when (release time) and how much pollutant (released mass) was introduced into the river. Based on the backward probability method (BPM) and the linear regression model (LR), the proposed LR-BPM converts the ill-posed problem of source identification into an optimization model, which is solved using a Differential Evolution Algorithm (DEA). The decoupled parameters of released mass are not dependent on prior information, which improves the identification efficiency. A hypothetical case study with a different number of pollution sources was conducted to test the proposed approach, and the largest relative errors for identified location, release time, and released mass in all tests were not greater than 10%. Uncertainty in the LR-BPM is mainly due to a problem with model equifinality, but averaging the results of repeated tests greatly reduces errors. Furthermore, increasing the number of gauging sections further improves the identification results. A real-world case study examines the applicability of the LR-BPM in practice, where it is demonstrated to be more accurate and time-saving than two existing approaches, Bayesian-MCMC and basic DEA. Copyright © 2018 Elsevier Ltd. All rights reserved.
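
    As a rough illustration of the kind of optimization the LR-BPM performs (this is not the authors' formulation), the sketch below fits the source location, release time, and released mass of an instantaneous point discharge to synthetic downstream observations, using a one-dimensional advection-dispersion solution and SciPy's differential evolution. All river parameters, gauging positions, and the "true" source are invented for illustration.

    ```python
    # Minimal sketch (not the paper's LR-BPM): recover (x0, t0, M) for a sudden
    # point release in a 1-D river reach by fitting an analytical
    # advection-dispersion solution to downstream observations.
    # All parameter values below are illustrative assumptions.
    import numpy as np
    from scipy.optimize import differential_evolution

    A, u, D = 50.0, 0.4, 30.0      # cross-section (m^2), velocity (m/s), dispersion (m^2/s)

    def concentration(x, t, x0, t0, mass):
        """Instantaneous point-source solution; zero before the release time."""
        dt = np.maximum(t - t0, 1e-6)
        c = mass / (A * np.sqrt(4 * np.pi * D * dt)) * \
            np.exp(-((x - x0 - u * dt) ** 2) / (4 * D * dt))
        return np.where(t > t0, c, 0.0)

    # Synthetic "observations" at two gauging sections downstream of the true source.
    np.random.seed(0)
    true = dict(x0=2000.0, t0=600.0, mass=80.0)          # m, s, kg
    x_obs = np.array([5000.0, 8000.0])
    t_obs = np.arange(0.0, 6 * 3600.0, 300.0)
    X, T = np.meshgrid(x_obs, t_obs)
    obs = concentration(X, T, **true) + np.random.normal(0, 1e-5, X.shape)

    def misfit(p):
        x0, t0, mass = p
        return np.sum((concentration(X, T, x0, t0, mass) - obs) ** 2)

    result = differential_evolution(misfit, bounds=[(0, 4000), (0, 3600), (1, 500)], seed=0)
    print("estimated (x0, t0, M):", result.x)
    ```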

  5. Attitude estimation from magnetometer and earth-albedo-corrected coarse sun sensor measurements

    NASA Astrophysics Data System (ADS)

    Appel, Pontus

    2005-01-01

For full three-axis attitude determination, the magnetic field vector and the Sun vector can be used. A coarse Sun sensor consisting of six solar cells, one placed on each of the six outer surfaces of the satellite, is used for Sun vector determination. This robust and low-cost setup is sensitive to surrounding light sources because it sees the whole sky. To compensate for the largest error source, the Earth, an albedo model is developed. The total albedo light vector has contributions from the part of the Earth's surface that is illuminated by the Sun and visible from the satellite. The albedo light changes depending on the reflectivity of the Earth's surface, the satellite's position, and the Sun's position. This cannot be calculated analytically, and hence a numerical model is developed. For on-board computer use, the Earth albedo model, which consists of data tables, is converted into polynomial functions in order to save memory space. For an absolute worst case the attitude determination error can be held below 2°; in a nominal case it is better than 1°.
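
    A minimal sketch of the "data tables to polynomial functions" compression step mentioned above, under the assumption that the albedo correction is tabulated over two geometry angles; the toy table, angle ranges, and polynomial degree are illustrative, not the author's actual model.

    ```python
    # Minimal sketch of the "table -> polynomial" compression: fit a low-order
    # 2-D polynomial to a tabulated albedo correction so that the on-board
    # computer stores a handful of coefficients instead of the full table.
    # The synthetic table below is purely illustrative.
    import numpy as np

    # Toy table: albedo-correction magnitude as a function of two geometry angles
    # a (0..90 deg) and b (0..180 deg).
    a = np.radians(np.linspace(0, 90, 31))
    b = np.radians(np.linspace(0, 180, 61))
    A, B = np.meshgrid(a, b, indexing="ij")
    table = 0.3 * np.cos(A) * (1 + 0.2 * np.cos(B))     # stand-in for the tabulated model

    def design(A, B, deg=3):
        """Design matrix for a bivariate polynomial of total degree <= deg."""
        cols = [A**i * B**j for i in range(deg + 1) for j in range(deg + 1 - i)]
        return np.column_stack([c.ravel() for c in cols])

    coeffs, *_ = np.linalg.lstsq(design(A, B), table.ravel(), rcond=None)
    fit = design(A, B) @ coeffs
    print("coefficients stored:", coeffs.size, "| table entries replaced:", table.size)
    print("max fit error:", np.max(np.abs(fit - table.ravel())))
    ```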

  6. A comprehensive evaluation of input data-induced uncertainty in nonpoint source pollution modeling

    NASA Astrophysics Data System (ADS)

    Chen, L.; Gong, Y.; Shen, Z.

    2015-11-01

Watershed models have been used extensively for quantifying nonpoint source (NPS) pollution, but few studies have examined how errors propagate from different input data sets into NPS modeling. In this paper, the effects of four inputs, namely rainfall, digital elevation models (DEMs), land use maps, and the amount of fertilizer, on NPS simulation were quantified and compared. The systematic input-induced uncertainty was investigated using a watershed model for phosphorus load prediction. Based on the results, rain gauge density resulted in the largest model uncertainty, followed by DEMs, whereas land use and fertilizer amount exhibited limited impacts. The mean coefficients of variation for errors induced by single rain gauge, multiple rain gauge, ASTER GDEM, NFGIS DEM, land use, and fertilizer amount information were 0.390, 0.274, 0.186, 0.073, 0.033, and 0.005, respectively. The use of specific input information, such as key gauges, is also highlighted as a way to achieve the required model accuracy. In this sense, these results provide valuable information to other model-based studies for the control of prediction uncertainty.
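
    The error statistic quoted above is a coefficient of variation; a minimal sketch of how such a value can be computed from an ensemble of model runs driven by alternative versions of one input (the load values below are invented):

    ```python
    # Minimal sketch: quantify input-induced uncertainty as the coefficient of
    # variation (std/mean) of predicted phosphorus loads across model runs that
    # differ only in one input data set. Values are illustrative only.
    import numpy as np

    # Predicted annual P loads (t/yr) from runs using alternative rainfall inputs.
    loads_rainfall_runs = np.array([12.1, 8.4, 15.9, 10.2, 13.7])
    cv = np.std(loads_rainfall_runs, ddof=1) / np.mean(loads_rainfall_runs)
    print(f"coefficient of variation for rainfall-induced error: {cv:.3f}")
    ```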

  7. [From the concept of guilt to the value-free notification of errors in medicine. Risks, errors and patient safety].

    PubMed

    Haller, U; Welti, S; Haenggi, D; Fink, D

    2005-06-01

    The number of liability cases but also the size of individual claims due to alleged treatment errors are increasing steadily. Spectacular sentences, especially in the USA, encourage this trend. Wherever human beings work, errors happen. The health care system is particularly susceptible and shows a high potential for errors. Therefore risk management has to be given top priority in hospitals. Preparing the introduction of critical incident reporting (CIR) as the means to notify errors is time-consuming and calls for a change in attitude because in many places the necessary base of trust has to be created first. CIR is not made to find the guilty and punish them but to uncover the origins of errors in order to eliminate them. The Department of Anesthesiology of the University Hospital of Basel has developed an electronic error notification system, which, in collaboration with the Swiss Medical Association, allows each specialist society to participate electronically in a CIR system (CIRS) in order to create the largest database possible and thereby to allow statements concerning the extent and type of error sources in medicine. After a pilot project in 2000-2004, the Swiss Society of Gynecology and Obstetrics is now progressively introducing the 'CIRS Medical' of the Swiss Medical Association. In our country, such programs are vulnerable to judicial intervention due to the lack of explicit legal guarantees of protection. High-quality data registration and skillful counseling are all the more important. Hospital directors and managers are called upon to examine those incidents which are based on errors inherent in the system.

  8. The continuous UV flux of Alpha Lyrae - Non-LTE results

    NASA Technical Reports Server (NTRS)

    Snijders, M. A. J.

    1977-01-01

    Non-LTE calculations for the ultraviolet C I and Si I continuous opacity show that LTE results overestimate the importance of these sources of opacity and underestimate the emergent flux in Alpha Lyr. The largest errors occur between 1100 and 1160 A, where the predicted flux in non-LTE is as much as 50 times larger than in LTE, in reasonable accord with Copernicus observations. The discrepancy between LTE models and observations has been interpreted to result from the existence of a chromosphere. Until a self-consistent non-LTE model atmosphere becomes available, such an interpretation is premature.

  9. Tropospheric Delay Raytracing Applied in VLBI Analysis

    NASA Astrophysics Data System (ADS)

    MacMillan, D. S.; Eriksson, D.; Gipson, J. M.

    2013-12-01

    Tropospheric delay modeling error continues to be one of the largest sources of error in VLBI analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from ECMWF data. These mapping functions assume that tropospheric delay at a site is azimuthally symmetric. As this assumption does not reflect reality, we have determined the raytrace delay along the signal path through the troposphere for each VLBI quasar observation. We determined the troposphere refractivity fields from the pressure, temperature, specific humidity and geopotential height fields of the NASA GSFC GEOS-5 numerical weather model. We discuss results from analysis of the CONT11 R&D and the weekly operational R1+R4 experiment sessions. When applied in VLBI analysis, baseline length repeatabilities were better for 66-72% of baselines with raytraced delays than with VMF1 mapping functions. Vertical repeatabilities were better for 65% of sites.

  10. Identification and proposed control of helicopter transmission noise at the source

    NASA Technical Reports Server (NTRS)

    Coy, John J.; Handschuh, Robert F.; Lewicki, David G.; Huff, Ronald G.; Krejsa, Eugene A.; Karchmer, Allan M.

    1987-01-01

    Helicopter cabin interiors require noise treatment which is expensive and adds weight. The gears inside the main power transmission are major sources of cabin noise. Work conducted by the NASA Lewis Research Center in measuring cabin interior noise and in relating the noise spectrum to the gear vibration of the Army OH-58 helicopter is described. Flight test data indicate that the planetary gear train is a major source of cabin noise and that other low frequency sources are present that could dominate the cabin noise. Companion vibration measurements were made in a transmission test stand, revealing that the single largest contributor to the transmission vibration was the spiral bevel gear mesh. The current understanding of the nature and causes of gear and transmission noise is discussed. It is believed that the kinematical errors of the gear mesh have a strong influence on that noise. The completed NASA/Army sponsored research that applies to transmission noise reduction is summarized. The continuing research program is also reviewed.

  12. Readers of Largest U.S. History Textbooks Discover a Storehouse of Misinformation.

    ERIC Educational Resources Information Center

    Putka, Gary

    1992-01-01

    Reports that a Texas advocacy group discovered thousands of errors in U.S. history textbooks. Notes that the books underwent the review after drawing favorable reactions from Texas education officials. Identifies possible explanations for the errors and steps being taken to reduce errors in the future. (SG)

  13. A predictability study of Lorenz's 28-variable model as a dynamical system

    NASA Technical Reports Server (NTRS)

    Krishnamurthy, V.

    1993-01-01

    The dynamics of error growth in a two-layer nonlinear quasi-geostrophic model has been studied to gain an understanding of the mathematical theory of atmospheric predictability. The growth of random errors of varying initial magnitudes has been studied, and the relation between this classical approach and the concepts of the nonlinear dynamical systems theory has been explored. The local and global growths of random errors have been expressed partly in terms of the properties of an error ellipsoid and the Liapunov exponents determined by linear error dynamics. The local growth of small errors is initially governed by several modes of the evolving error ellipsoid but soon becomes dominated by the longest axis. The average global growth of small errors is exponential with a growth rate consistent with the largest Liapunov exponent. The duration of the exponential growth phase depends on the initial magnitude of the errors. The subsequent large errors undergo a nonlinear growth with a steadily decreasing growth rate and attain saturation that defines the limit of predictability. The degree of chaos and the largest Liapunov exponent show considerable variation with change in the forcing, which implies that the time variation in the external forcing can introduce variable character to the predictability.
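
    A minimal sketch of the error-growth experiment described above, using the three-variable Lorenz-63 system as a stand-in for the 28-variable two-layer model (an assumption made for brevity): a control and a slightly perturbed trajectory are integrated, and the exponential growth rate of the small-error phase approximates the largest Liapunov exponent.

    ```python
    # Minimal sketch of the error-growth experiment with Lorenz-63 as a stand-in:
    # integrate a control and a perturbed trajectory and estimate the exponential
    # growth rate of small errors (an approximation to the largest Liapunov exponent).
    import numpy as np

    def lorenz63(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        x, y, z = state
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

    def rk4(f, state, dt):
        k1 = f(state); k2 = f(state + 0.5 * dt * k1)
        k3 = f(state + 0.5 * dt * k2); k4 = f(state + dt * k3)
        return state + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

    dt, n_steps = 0.01, 2000
    x = np.array([1.0, 1.0, 1.0])
    for _ in range(1000):                       # spin up onto the attractor
        x = rk4(lorenz63, x, dt)

    e0 = 1e-8
    xp = x + np.array([e0, 0.0, 0.0])           # small initial error
    errors = []
    for _ in range(n_steps):
        x, xp = rk4(lorenz63, x, dt), rk4(lorenz63, xp, dt)
        errors.append(np.linalg.norm(xp - x))

    # Fit log(error) vs time over the early, exponential-growth phase only.
    t = dt * np.arange(1, n_steps + 1)
    grow = slice(0, 800)                        # well before nonlinear saturation
    lam = np.polyfit(t[grow], np.log(errors[grow]), 1)[0]
    print(f"estimated largest Liapunov exponent ~ {lam:.2f} (Lorenz-63 reference ~0.9)")
    ```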

  14. Troposphere Delay Raytracing Applied in VLBI Analysis

    NASA Astrophysics Data System (ADS)

    Eriksson, David; MacMillan, Daniel; Gipson, John

    2014-12-01

    Tropospheric delay modeling error is one of the largest sources of error in VLBI analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from European Centre for Medium Range Forecasting (ECMWF) data. These mapping functions assume that tropospheric delay at a site is azimuthally symmetric. As this assumption does not reflect reality, we have instead determined the raytrace delay along the signal path through the three-dimensional troposphere refractivity field for each VLBI quasar observation. We calculated the troposphere refractivity fields from the pressure, temperature, specific humidity, and geopotential height fields of the NASA GSFC GEOS-5 numerical weather model. We discuss results using raytrace delay in the analysis of the CONT11 R&D sessions. When applied in VLBI analysis, baseline length repeatabilities were better for 70% of baselines with raytraced delays than with VMF1 mapping functions. Vertical repeatabilities were better for 2/3 of all stations. The reference frame scale bias error was 0.02 ppb for raytracing versus 0.08 ppb and 0.06 ppb for VMF1 and NMF, respectively.

  15. Calculation of surface and top of atmosphere radiative fluxes from physical quantities based on ISCCP data sets. 1: Method and sensitivity to input data uncertainties

    NASA Technical Reports Server (NTRS)

    Zhang, Y.-C.; Rossow, W. B.; Lacis, A. A.

    1995-01-01

The largest uncertainty in upwelling shortwave (SW) fluxes (approximately 10-15 W/m², regional daily mean) is caused by uncertainties in land surface albedo, whereas the largest uncertainty in downwelling SW at the surface (approximately 5-10 W/m², regional daily mean) is related to cloud detection errors. The uncertainty in upwelling longwave (LW) fluxes (approximately 10-20 W/m², regional daily mean) depends on the accuracy of the surface temperature for the surface LW fluxes and of the atmospheric temperature for the top-of-atmosphere LW fluxes. The dominant source of uncertainty in downwelling LW fluxes at the surface (approximately 10-15 W/m²) is uncertainty in atmospheric temperature and, secondarily, atmospheric humidity; clouds play little role except in the polar regions. The uncertainties of the individual flux components and the total net fluxes are largest over land (15-20 W/m²) because of uncertainties in surface albedo (especially its spectral dependence) and surface temperature and emissivity (including its spectral dependence). Clouds are the most important modulator of the SW fluxes, but over land areas, uncertainties in net SW at the surface depend almost as much on uncertainties in surface albedo. Although atmospheric and surface temperature variations cause larger LW flux variations, the most notable feature of the net LW fluxes is the changing relative importance of clouds and water vapor with latitude. Uncertainty in individual flux values is dominated by sampling effects because of large natural variations, but uncertainty in monthly mean fluxes is dominated by bias errors in the input quantities.

  16. Spatially Resolved Isotopic Source Signatures of Wetland Methane Emissions

    NASA Astrophysics Data System (ADS)

    Ganesan, A. L.; Stell, A. C.; Gedney, N.; Comyn-Platt, E.; Hayman, G.; Rigby, M.; Poulter, B.; Hornibrook, E. R. C.

    2018-04-01

    We present the first spatially resolved wetland δ13C(CH4) source signature map based on data characterizing wetland ecosystems and demonstrate good agreement with wetland signatures derived from atmospheric observations. The source signature map resolves a latitudinal difference of 10‰ between northern high-latitude (mean -67.8‰) and tropical (mean -56.7‰) wetlands and shows significant regional variations on top of the latitudinal gradient. We assess the errors in inverse modeling studies aiming to separate CH4 sources and sinks by comparing atmospheric δ13C(CH4) derived using our spatially resolved map against the common assumption of globally uniform wetland δ13C(CH4) signature. We find a larger interhemispheric gradient, a larger high-latitude seasonal cycle, and smaller trend over the period 2000-2012. The implication is that erroneous CH4 fluxes would be derived to compensate for the biases imposed by not utilizing spatially resolved signatures for the largest source of CH4 emissions. These biases are significant when compared to the size of observed signals.

  17. Bolus-dependent dosimetric effect of positioning errors for tangential scalp radiotherapy with helical tomotherapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lobb, Eric, E-mail: eclobb2@gmail.com

    2014-04-01

The dosimetric effect of errors in patient position is studied on-phantom as a function of simulated bolus thickness to assess the need for bolus utilization in scalp radiotherapy with tomotherapy. A treatment plan is generated on a cylindrical phantom, mimicking a radiotherapy technique for the scalp utilizing primarily tangential beamlets. A planning target volume with embedded scalp-like clinical target volumes (CTVs) is planned to a uniform dose of 200 cGy. Translational errors in phantom position are introduced in 1-mm increments and dose is recomputed from the original sinogram. For each error the maximum dose, minimum dose, clinical target dose homogeneity index (HI), and dose-volume histogram (DVH) are presented for simulated bolus thicknesses from 0 to 10 mm. Baseline HI values for all bolus thicknesses were in the 5.5 to 7.0 range, increasing to a maximum of 18.0 to 30.5 for the largest positioning errors when 0 to 2 mm of bolus is used. Utilizing 5 mm of bolus resulted in a maximum HI value of 9.5 for the largest positioning errors. Using 0 to 2 mm of bolus resulted in minimum and maximum dose values of 85% to 94% and 118% to 125% of the prescription dose, respectively. When using 5 mm of bolus these values were 98.5% and 109.5%. DVHs showed minimal changes in CTV dose coverage when using 5 mm of bolus, even for the largest positioning errors. CTV dose homogeneity becomes increasingly sensitive to errors in patient position as bolus thickness decreases when treating the scalp with primarily tangential beamlets. Performing a radial expansion of the scalp CTV into 5 mm of bolus material minimizes dosimetric sensitivity to errors in patient position as large as 5 mm and is therefore recommended.

  18. High-resolution inversion of methane emissions in the Southeast US using SEAC4RS aircraft observations of atmospheric methane: anthropogenic and wetland sources

    NASA Astrophysics Data System (ADS)

    Sheng, Jian-Xiong; Jacob, Daniel J.; Turner, Alexander J.; Maasakkers, Joannes D.; Sulprizio, Melissa P.; Bloom, A. Anthony; Andrews, Arlyn E.; Wunch, Debra

    2018-05-01

We use observations of boundary layer methane from the SEAC4RS aircraft campaign over the Southeast US in August-September 2013 to estimate methane emissions in that region through an inverse analysis with up to 0.25° × 0.3125° (25 × 25 km²) resolution and with full error characterization. The Southeast US is a major source region for methane including large contributions from oil and gas production and wetlands. Our inversion uses state-of-the-art emission inventories as prior estimates, including a gridded version of the anthropogenic EPA Greenhouse Gas Inventory and the mean of the WetCHARTs ensemble for wetlands. Inversion results are independently verified by comparison with surface (NOAA/ESRL) and column (TCCON) methane observations. Our posterior estimates for the Southeast US are 12.8 ± 0.9 Tg a⁻¹ for anthropogenic sources (no significant change from the gridded EPA inventory) and 9.4 ± 0.8 Tg a⁻¹ for wetlands (27 % decrease from the mean in the WetCHARTs ensemble). The largest source of error in the WetCHARTs wetlands ensemble is the land cover map specification of wetland areal extent. Our results support the accuracy of the EPA anthropogenic inventory on a regional scale but there are significant local discrepancies for oil and gas production fields, suggesting that emission factors are more variable than assumed in the EPA inventory.

  19. Accuracy of iodine quantification using dual energy CT in latest generation dual source and dual layer CT.

    PubMed

    Pelgrim, Gert Jan; van Hamersvelt, Robbert W; Willemink, Martin J; Schmidt, Bernhard T; Flohr, Thomas; Schilham, Arnold; Milles, Julien; Oudkerk, Matthijs; Leiner, Tim; Vliegenthart, Rozemarijn

    2017-09-01

    To determine the accuracy of iodine quantification with dual energy computed tomography (DECT) in two high-end CT systems with different spectral imaging techniques. Five tubes with different iodine concentrations (0, 5, 10, 15, 20 mg/ml) were analysed in an anthropomorphic thoracic phantom. Adding two phantom rings simulated increased patient size. For third-generation dual source CT (DSCT), tube voltage combinations of 150Sn and 70, 80, 90, 100 kVp were analysed. For dual layer CT (DLCT), 120 and 140 kVp were used. Scans were repeated three times. Median normalized values and interquartile ranges (IQRs) were calculated for all kVp settings and phantom sizes. Correlation between measured and known iodine concentrations was excellent for both systems (R = 0.999-1.000, p < 0.0001). For DSCT, median measurement errors ranged from -0.5% (IQR -2.0, 2.0%) at 150Sn/70 kVp and -2.3% (IQR -4.0, -0.1%) at 150Sn/80 kVp to -4.0% (IQR -6.0, -2.8%) at 150Sn/90 kVp. For DLCT, median measurement errors ranged from -3.3% (IQR -4.9, -1.5%) at 140 kVp to -4.6% (IQR -6.0, -3.6%) at 120 kVp. Larger phantom sizes increased variability of iodine measurements (p < 0.05). Iodine concentration can be accurately quantified with state-of-the-art DECT systems from two vendors. The lowest absolute errors were found for DSCT using the 150Sn/70 kVp or 150Sn/80 kVp combinations, which was slightly more accurate than 140 kVp in DLCT. • High-end CT scanners allow accurate iodine quantification using different DECT techniques. • Lowest measurement error was found in scans with largest photon energy separation. • Dual-source CT quantified iodine slightly more accurately than dual layer CT.

  20. Role-modeling and medical error disclosure: a national survey of trainees.

    PubMed

    Martinez, William; Hickson, Gerald B; Miller, Bonnie M; Doukas, David J; Buckley, John D; Song, John; Sehgal, Niraj L; Deitz, Jennifer; Braddock, Clarence H; Lehmann, Lisa Soleymani

    2014-03-01

    To measure trainees' exposure to negative and positive role-modeling for responding to medical errors and to examine the association between that exposure and trainees' attitudes and behaviors regarding error disclosure. Between May 2011 and June 2012, 435 residents at two large academic medical centers and 1,187 medical students from seven U.S. medical schools received anonymous, electronic questionnaires. The questionnaire asked respondents about (1) experiences with errors, (2) training for responding to errors, (3) behaviors related to error disclosure, (4) exposure to role-modeling for responding to errors, and (5) attitudes regarding disclosure. Using multivariate regression, the authors analyzed whether frequency of exposure to negative and positive role-modeling independently predicted two primary outcomes: (1) attitudes regarding disclosure and (2) nontransparent behavior in response to a harmful error. The response rate was 55% (884/1,622). Training on how to respond to errors had the largest independent, positive effect on attitudes (standardized effect estimate, 0.32, P < .001); negative role-modeling had the largest independent, negative effect (standardized effect estimate, -0.26, P < .001). Positive role-modeling had a positive effect on attitudes (standardized effect estimate, 0.26, P < .001). Exposure to negative role-modeling was independently associated with an increased likelihood of trainees' nontransparent behavior in response to an error (OR 1.37, 95% CI 1.15-1.64; P < .001). Exposure to role-modeling predicts trainees' attitudes and behavior regarding the disclosure of harmful errors. Negative role models may be a significant impediment to disclosure among trainees.

  1. Analytical study of the effects of soft tissue artefacts on functional techniques to define axes of rotation.

    PubMed

    De Rosario, Helios; Page, Álvaro; Besa, Antonio

    2017-09-06

The accurate location of the main axes of rotation (AoR) is a crucial step in many applications of human movement analysis. There are different formal methods to determine the direction and position of the AoR, whose performance varies across studies, depending on the pose and the source of errors. Most methods are based on minimizing squared differences between observed and modelled marker positions or rigid motion parameters, implicitly assuming independent and uncorrelated errors, but the largest error usually results from soft tissue artefacts (STA), which do not have such statistical properties and are not effectively cancelled out by such methods. However, with adequate methods it is possible to assume that STA only account for a small fraction of the observed motion and to obtain explicit formulas through differential analysis that relate STA components to the resulting errors in AoR parameters. In this paper such formulas are derived for three different functional calibration techniques (Geometric Fitting, mean Finite Helical Axis, and SARA), to explain why each technique behaves differently from the others, and to propose strategies to compensate for those errors. These techniques were tested with published data from a sit-to-stand activity, where the true axis was defined using bi-planar fluoroscopy. All the methods were able to estimate the direction of the AoR with an error of less than 5°, whereas there were errors in the location of the axis of 30-40 mm. Such location errors could be reduced to less than 17 mm by the methods based on equations that use rigid motion parameters (mean Finite Helical Axis, SARA) when the translation component was calculated using the three markers nearest to the axis. Copyright © 2017 Elsevier Ltd. All rights reserved.

  2. Possible sources of forecast errors generated by the global/regional assimilation and prediction system for landfalling tropical cyclones. Part I: Initial uncertainties

    NASA Astrophysics Data System (ADS)

    Zhou, Feifan; Yamaguchi, Munehiko; Qin, Xiaohao

    2016-07-01

    This paper investigates the possible sources of errors associated with tropical cyclone (TC) tracks forecasted using the Global/Regional Assimilation and Prediction System (GRAPES). The GRAPES forecasts were made for 16 landfalling TCs in the western North Pacific basin during the 2008 and 2009 seasons, with a forecast length of 72 hours, and using the default initial conditions ("initials", hereafter), which are from the NCEP-FNL dataset, as well as ECMWF initials. The forecasts are compared with ECMWF forecasts. The results show that in most TCs, the GRAPES forecasts are improved when using the ECMWF initials compared with the default initials. Compared with the ECMWF initials, the default initials produce lower intensity TCs and a lower intensity subtropical high, but a higher intensity South Asia high and monsoon trough, as well as a higher temperature but lower specific humidity at the TC center. Replacement of the geopotential height and wind fields with the ECMWF initials in and around the TC center at the initial time was found to be the most efficient way to improve the forecasts. In addition, TCs that showed the greatest improvement in forecast accuracy usually had the largest initial uncertainties in TC intensity and were usually in the intensifying phase. The results demonstrate the importance of the initial intensity for TC track forecasts made using GRAPES, and indicate the model is better in describing the intensifying phase than the decaying phase of TCs. Finally, the limit of the improvement indicates that the model error associated with GRAPES forecasts may be the main cause of poor forecasts of landfalling TCs. Thus, further examinations of the model errors are required.

  3. A Wavelet Based Suboptimal Kalman Filter for Assimilation of Stratospheric Chemical Tracer Observations

    NASA Technical Reports Server (NTRS)

    Auger, Ludovic; Tangborn, Andrew; Atlas, Robert (Technical Monitor)

    2002-01-01

A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance by 90, 97, and 99% and the computational cost of covariance propagation by 80, 93, and 96%, respectively. The differences in both the error covariance and the tracer field between the truncated and full systems over this period were found to be non-growing in the first case, and growing relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions with the largest zonal gradients in the tracer field.
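
    A minimal sketch of the underlying idea of truncating covariance information in a wavelet basis along the zonal direction; the hand-written Haar transform, the synthetic covariance row, and the truncation level are illustrative assumptions, not the system described above.

    ```python
    # Minimal sketch: represent one row of an error-covariance block in a Haar
    # wavelet basis along the zonal direction, discard the smallest coefficients,
    # and measure how much of the covariance "energy" survives.
    # The covariance row below is a synthetic stand-in.
    import numpy as np

    def haar_1d(v):
        """Full Haar decomposition of a length-2^k vector (orthonormal)."""
        v = v.astype(float).copy()
        n = v.size
        while n > 1:
            half = n // 2
            avg = (v[:n:2] + v[1:n:2]) / np.sqrt(2.0)
            dif = (v[:n:2] - v[1:n:2]) / np.sqrt(2.0)
            v[:half], v[half:n] = avg, dif
            n = half
        return v

    # Synthetic zonally smooth covariance row (128 longitudes).
    lon = np.linspace(0, 2 * np.pi, 128, endpoint=False)
    row = np.exp(-((lon - np.pi) ** 2) / 0.5)

    coeffs = haar_1d(row)
    keep = 13                                     # ~90 % of coefficients discarded
    idx = np.argsort(np.abs(coeffs))[::-1][:keep]
    truncated = np.zeros_like(coeffs)
    truncated[idx] = coeffs[idx]

    retained = np.sum(truncated**2) / np.sum(coeffs**2)
    print(f"kept {keep}/128 coefficients, retained {retained:.1%} of the variance")
    ```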

  4. Calculations of atmospheric transmittance in the 11 micrometer window for estimating skin temperature from VISSR infrared brightness temperatures

    NASA Technical Reports Server (NTRS)

    Chesters, D.

    1984-01-01

An algorithm is presented for calculating the atmospheric transmittance in the 10 to 20 µm spectral band from a known temperature and dewpoint profile, and then using this transmittance to estimate the surface (skin) temperature from a VISSR observation in the 11 µm window. Parameterizations are drawn from the literature for computing the molecular absorption due to the water vapor continuum, water vapor lines, and carbon dioxide lines. The FORTRAN code is documented for this application, and the sensitivity of the derived skin temperature to variations in the model's parameters is calculated. The VISSR calibration uncertainties are identified as the largest potential source of error.

  5. Statistics of the epoch of reionization 21-cm signal - I. Power spectrum error-covariance

    NASA Astrophysics Data System (ADS)

    Mondal, Rajesh; Bharadwaj, Somnath; Majumdar, Suman

    2016-02-01

The non-Gaussian nature of the epoch of reionization (EoR) 21-cm signal has a significant impact on the error variance of its power spectrum P(k). We have used a large ensemble of seminumerical simulations and an analytical model to estimate the effect of this non-Gaussianity on the entire error-covariance matrix C_ij. Our analytical model shows that C_ij has contributions from two sources. One is the usual variance for a Gaussian random field, which scales inversely with the number of modes that go into the estimation of P(k). The other is the trispectrum of the signal. Using the simulated 21-cm Signal Ensemble, an ensemble of the Randomized Signal, and Ensembles of Gaussian Random Ensembles, we have quantified the effect of the trispectrum on the diagonal error variance C_ii. We find that its relative contribution is comparable to or larger than that of the Gaussian term for the k range 0.3 ≤ k ≤ 1.0 Mpc⁻¹, and can be even ~200 times larger at k ~ 5 Mpc⁻¹. We also establish that the off-diagonal terms of C_ij have statistically significant non-zero values which arise purely from the trispectrum. This further signifies that the errors in different k modes are not independent. We find a strong correlation between the errors at large k values (≥ 0.5 Mpc⁻¹), and a weak correlation between the smallest and largest k values. There is also a small anticorrelation between the errors in the smallest and intermediate k values. These results are relevant for the k range that will be probed by the current and upcoming EoR 21-cm experiments.
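
    The two contributions described above can be summarized schematically as follows (the notation here is illustrative; see the paper for the exact expressions):

    ```latex
    % Schematic structure of the power-spectrum error covariance described above:
    % a Gaussian (cosmic-variance) term scaling as 1/N_k, plus a trispectrum term.
    \[
      \mathcal{C}_{ij} \;=\; \frac{\delta_{ij}}{N_{k_i}}\,\bar{P}^{2}(k_i)
      \;+\; \frac{1}{V}\,\bar{T}(k_i,k_j),
    \]
    % where N_{k_i} is the number of Fourier modes in bin i, V the volume,
    % \bar{P} the bin-averaged power spectrum and \bar{T} the bin-averaged trispectrum.
    ```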

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Xing, Y; Macq, B; Bondar, L

Purpose: To quantify the accuracy of predicting the Bragg peak position using simulated in-room measurements of prompt gamma (PG) emissions for realistic treatment error scenarios that combine several sources of error. Methods: Prompt gamma measurements by a knife-edge slit camera were simulated using an experimentally validated analytical simulation tool. Simulations were performed, for 143 treatment error scenarios, on an anthropomorphic phantom and a pencil beam scanning plan for the nasal cavity. Three types of errors were considered: translation along each axis, rotation around each axis, and CT-calibration errors, with magnitudes ranging, respectively, between −3 and 3 mm, −5 and 5 degrees, and −5 and +5%. We investigated the correlation between the Bragg peak (BP) shift and the horizontal shift of PG profiles. The shifts were calculated between the planned (reference) position and the position given by the error scenario. The prediction error for one spot was calculated as the absolute difference between the PG profile shift and the BP shift. Results: The PG shift was significantly and strongly correlated with the BP shift for 92% of the cases (p<0.0001, Pearson correlation coefficient R>0.8). Moderate but significant correlations were obtained for all cases that considered only CT-calibration errors and for 1 case that combined translation and CT errors (p<0.0001, R ranged between 0.61 and 0.8). The average prediction errors for the simulated scenarios ranged between 0.08±0.07 and 1.67±1.3 mm (grand mean 0.66±0.76 mm). The prediction error was moderately correlated with the value of the BP shift (p=0, R=0.64). For the simulated scenarios the average BP shift ranged between −8±6.5 mm and 3±1.1 mm. Scenarios that considered combinations of the largest treatment errors were associated with large BP shifts. Conclusion: Simulations of in-room measurements demonstrate that prompt gamma profiles provide a reliable estimation of the Bragg peak position for complex error scenarios. Yafei Xing and Luiza Bondar are funded by BEWARE grants from the Walloon Region. The work presents simulation results for a prompt gamma camera prototype developed by IBA.

  7. Modeling suspended sediment sources and transport in the Ishikari River basin, Japan, using SPARROW

    NASA Astrophysics Data System (ADS)

    Duan, W. L.; He, B.; Takara, K.; Luo, P. P.; Nover, D.; Hu, M. C.

    2015-03-01

It is important to understand the mechanisms that control the fate and transport of suspended sediment (SS) in rivers, because high suspended sediment loads have significant impacts on riverine hydroecology. In this study, the SPARROW (SPAtially Referenced Regression on Watershed Attributes) watershed model was applied to estimate the sources and transport of SS in surface waters of the Ishikari River basin (14,330 km²), the largest watershed in Hokkaido, Japan. The final developed SPARROW model has four source variables (developing lands, forest lands, agricultural lands, and stream channels), three landscape delivery variables (slope, soil permeability, and precipitation), two in-stream loss coefficients, including small streams (streams with drainage area < 200 km²) and large streams, and reservoir attenuation. The model was calibrated using measurements of SS from 31 monitoring sites together with mixed spatial data on topography, soils, and stream hydrography. Calibration results explain approximately 96% (R²) of the spatial variability in the natural logarithm of the mean annual SS flux (kg yr⁻¹) and display relatively small prediction errors at the 31 monitoring stations. Results show that developing land is associated with the largest sediment yield at around 1006 kg km⁻² yr⁻¹, followed by agricultural land (234 kg km⁻² yr⁻¹). Estimation of incremental yields shows that 35% comes from agricultural lands, 23% from forested lands, 23% from developing lands, and 19% from stream channels. The results of this study improve our understanding of sediment production and transportation in the Ishikari River basin in general, which will benefit both the scientific and management communities in safeguarding water resources.

  8. THE PANCHROMATIC HUBBLE ANDROMEDA TREASURY. X. ULTRAVIOLET TO INFRARED PHOTOMETRY OF 117 MILLION EQUIDISTANT STARS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, Benjamin F.; Dalcanton, Julianne J.; Weisz, Daniel R.

We have measured stellar photometry with the Hubble Space Telescope Wide Field Camera 3 (WFC3) and Advanced Camera for Surveys in near ultraviolet (F275W, F336W), optical (F475W, F814W), and near infrared (F110W, F160W) bands for 117 million resolved stars in M31. As part of the Panchromatic Hubble Andromeda Treasury survey, we measured photometry with simultaneous point-spread function (PSF) fitting across all bands and at all source positions after precise astrometric image alignment (<5-10 mas accuracy). In the outer disk, the photometry reaches a completeness-limited depth of F475W ∼ 28, while in the crowded, high surface brightness bulge, the photometry reaches F475W ∼ 25. We find that simultaneous photometry and optimized measurement parameters significantly increase the detection limit of the lowest-resolution filters (WFC3/IR), providing color-magnitude diagrams (CMDs) that are up to 2.5 mag deeper when compared with CMDs from WFC3/IR photometry alone. We present extensive analysis of the data quality, including comparisons of luminosity functions and repeat measurements, and we use artificial star tests to quantify photometric completeness, uncertainties, and biases. We find that the largest sources of systematic error in the photometry are due to spatial variations in the PSF models and charge transfer efficiency corrections. This stellar catalog is the largest ever produced for equidistant sources, and is publicly available for download by the community.

  9. Performance of the Gemini Planet Imager’s adaptive optics system

    DOE PAGES

    Poyneer, Lisa A.; Palmer, David W.; Macintosh, Bruce; ...

    2016-01-07

    The Gemini Planet Imager’s adaptive optics (AO) subsystem was designed specifically to facilitate high-contrast imaging. We give a definitive description of the system’s algorithms and technologies as built. Ultimately, the error budget indicates that for all targets and atmospheric conditions AO bandwidth error is the largest term.

  10. Sediment source apportionment in Laurel Hill Creek, PA, using Bayesian chemical mass balance and isotope fingerprinting

    USGS Publications Warehouse

    Stewart, Heather; Massoudieh, Arash; Gellis, Allen C.

    2015-01-01

    A Bayesian chemical mass balance (CMB) approach was used to assess the contribution of potential sources for fluvial samples from Laurel Hill Creek in southwest Pennsylvania. The Bayesian approach provides joint probability density functions of the sources' contributions considering the uncertainties due to source and fluvial sample heterogeneity and measurement error. Both elemental profiles of sources and fluvial samples and 13C and 15N isotopes were used for source apportionment. The sources considered include stream bank erosion, forest, roads and agriculture (pasture and cropland). Agriculture was found to have the largest contribution, followed by stream bank erosion. Also, road erosion was found to have a significant contribution in three of the samples collected during lower-intensity rain events. The source apportionment was performed with and without isotopes. The results were largely consistent; however, the use of isotopes was found to slightly increase the uncertainty in most of the cases. The correlation analysis between the contributions of sources shows strong correlations between stream bank and agriculture, whereas roads and forest seem to be less correlated to other sources. Thus, the method was better able to estimate road and forest contributions independently. The hypothesis that the contributions of sources are not seasonally changing was tested by assuming that all ten fluvial samples had the same source contributions. This hypothesis was rejected, demonstrating a significant seasonal variation in the sources of sediments in the stream.
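
    A minimal sketch of the deterministic core of a chemical mass balance; the study above uses a full Bayesian treatment with uncertainty propagation, whereas this example only solves for non-negative source fractions that best reproduce a single fluvial sample, using invented tracer values:

    ```python
    # Minimal sketch of a chemical mass balance un-mixing step (the paper uses a
    # full Bayesian approach): find non-negative source fractions, summing to ~1,
    # that best reproduce a fluvial sample's tracer profile. Values are invented.
    # Isotope signatures are treated as linearly mixing tracers for simplicity.
    import numpy as np
    from scipy.optimize import nnls

    # Rows: Al, Fe, P (scaled concentrations), d13C, d15N (per mil).
    # Columns: stream bank, forest, roads, agriculture.
    S = np.array([
        [6.8, 5.9, 4.1, 6.2],
        [3.1, 2.5, 5.0, 3.4],
        [0.4, 0.6, 0.3, 1.1],
        [-26.0, -27.5, -25.0, -24.0],
        [3.0, 2.0, 4.0, 6.5],
    ])
    sample = np.array([6.3, 3.2, 0.8, -25.1, 4.9])

    # Augment with a strongly weighted row enforcing that fractions sum to one.
    w = 10.0
    A = np.vstack([S, w * np.ones((1, S.shape[1]))])
    b = np.concatenate([sample, [w]])

    fractions, resid = nnls(A, b)
    for name, f in zip(["bank", "forest", "roads", "agriculture"], fractions):
        print(f"{name:12s} {f:5.2f}")
    ```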

  11. Estimating Climatological Bias Errors for the Global Precipitation Climatology Project (GPCP)

    NASA Technical Reports Server (NTRS)

    Adler, Robert; Gu, Guojun; Huffman, George

    2012-01-01

    A procedure is described to estimate bias errors for mean precipitation by using multiple estimates from different algorithms, satellite sources, and merged products. The Global Precipitation Climatology Project (GPCP) monthly product is used as a base precipitation estimate, with other input products included when they are within +/- 50% of the GPCP estimates on a zonal-mean basis (ocean and land separately). The standard deviation s of the included products is then taken to be the estimated systematic, or bias, error. The results allow one to examine monthly climatologies and the annual climatology, producing maps of estimated bias errors, zonal-mean errors, and estimated errors over large areas such as ocean and land for both the tropics and the globe. For ocean areas, where there is the largest question as to absolute magnitude of precipitation, the analysis shows spatial variations in the estimated bias errors, indicating areas where one should have more or less confidence in the mean precipitation estimates. In the tropics, relative bias error estimates (s/m, where m is the mean precipitation) over the eastern Pacific Ocean are as large as 20%, as compared with 10%-15% in the western Pacific part of the ITCZ. An examination of latitudinal differences over ocean clearly shows an increase in estimated bias error at higher latitudes, reaching up to 50%. Over land, the error estimates also locate regions of potential problems in the tropics and larger cold-season errors at high latitudes that are due to snow. An empirical technique to area average the gridded errors (s) is described that allows one to make error estimates for arbitrary areas and for the tropics and the globe (land and ocean separately, and combined). Over the tropics this calculation leads to a relative error estimate for tropical land and ocean combined of 7%, which is considered to be an upper bound because of the lack of sign-of-the-error canceling when integrating over different areas with a different number of input products. For the globe the calculated relative error estimate from this study is about 9%, which is also probably a slight overestimate. These tropical and global estimated bias errors provide one estimate of the current state of knowledge of the planet's mean precipitation.
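
    A minimal sketch of the screening-and-spread procedure described above (the ±50% zonal-mean screen and the standard deviation across included products), using tiny invented arrays in place of real precipitation products:

    ```python
    # Minimal sketch: at each grid cell, keep only products whose zonal-mean
    # precipitation is within +/-50 % of the GPCP zonal mean, then use the
    # standard deviation across the surviving products as the estimated bias
    # error (and s/m as the relative error). Arrays below are illustrative.
    import numpy as np

    gpcp = np.array([[3.0, 2.0], [6.0, 0.5]])              # mean precip (mm/day), 2x2 "grid"
    products = np.array([                                   # three alternative estimates
        [[2.6, 2.3], [5.1, 0.7]],
        [[3.5, 1.8], [7.2, 0.4]],
        [[7.0, 4.0], [9.5, 1.4]],                           # deliberately biased high (screened out)
    ])

    zonal_gpcp = gpcp.mean(axis=1, keepdims=True)           # zonal mean per latitude row
    zonal_prod = products.mean(axis=2, keepdims=True)
    included = np.abs(zonal_prod - zonal_gpcp) <= 0.5 * zonal_gpcp   # +/-50 % screen

    masked = np.where(included, products, np.nan)
    bias_error = np.nanstd(masked, axis=0, ddof=1)          # s: spread of included products
    relative_error = bias_error / gpcp                      # s/m
    print("estimated bias error (mm/day):\n", bias_error)
    print("relative bias error:\n", relative_error)
    ```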

  12. The QUIET Instrument

    NASA Technical Reports Server (NTRS)

    Gaier, T.; Kangaslahti, P.; Lawrence, C. R.; Leitch, E. M.; Wollack, E. J.

    2012-01-01

    The Q/U Imaging ExperimenT (QUIET) is designed to measure polarization in the Cosmic Microwave Background, targeting the imprint of inflationary gravitational waves at large angular scales ( approx 1 deg.) . Between 2008 October and 2010 December, two independent receiver arrays were deployed sequentially on a 1.4 m side-fed Dragonian telescope. The polarimeters which form the focal planes use a highly compact design based on High Electron Mobility Transistors (HEMTs) that provides simultaneous measurements of the Stokes parameters Q, U, and I in a single module. The 17-element Q-band polarimeter array, with a central frequency of 43.1 GHz, has the best sensitivity (69 micro Ks(exp 1/2)) and the lowest instrumental systematic errors ever achieved in this band, contributing to the tensor-to-scalar ratio at r < 0.1. The 84-element W-band polarimeter array has a sensitivity of 87 micro Ks(exp 1/2) at a central frequency of 94.5 GHz. It has the lowest systematic errors to date, contributing at r < 0.01 (QUIET Collaboration 2012) The two arrays together cover multipoles in the range l approximately equals 25-975 . These are the largest HEMT-ba.sed arrays deployed to date. This article describes the design, calibration, performance of, and sources of systematic error for the instrument,

  13. Causal impulse response for circular sources in viscous media

    PubMed Central

    Kelly, James F.; McGough, Robert J.

    2008-01-01

    The causal impulse response of the velocity potential for the Stokes wave equation is derived for calculations of transient velocity potential fields generated by circular pistons in viscous media. The causal Green’s function is numerically verified using the material impulse response function approach. The causal, lossy impulse response for a baffled circular piston is then calculated within the near field and the far field regions using expressions previously derived for the fast near field method. Transient velocity potential fields in viscous media are computed with the causal, lossy impulse response and compared to results obtained with the lossless impulse response. The numerical error in the computed velocity potential field is quantitatively analyzed for a range of viscous relaxation times and piston radii. Results show that the largest errors are generated in locations near the piston face and for large relaxation times, and errors are relatively small otherwise. Unlike previous frequency-domain methods that require numerical inverse Fourier transforms for the evaluation of the lossy impulse response, the present approach calculates the lossy impulse response directly in the time domain. The results indicate that this causal impulse response is ideal for time-domain calculations that simultaneously account for diffraction and quadratic frequency-dependent attenuation in viscous media. PMID:18397018

  14. The role of model errors represented by nonlinear forcing singular vector tendency error in causing the "spring predictability barrier" within ENSO predictions

    NASA Astrophysics Data System (ADS)

    Duan, Wansuo; Zhao, Peng

    2017-04-01

    Within the Zebiak-Cane model, the nonlinear forcing singular vector (NFSV) approach is used to investigate the role of model errors in the "Spring Predictability Barrier" (SPB) phenomenon within ENSO predictions. NFSV-related errors have the largest negative effect on the uncertainties of El Niño predictions. NFSV errors can be classified into two types: the first is characterized by a zonal dipolar pattern of SST anomalies (SSTA), with the western poles centered in the equatorial central-western Pacific exhibiting positive anomalies and the eastern poles in the equatorial eastern Pacific exhibiting negative anomalies; and the second is characterized by a pattern almost opposite the first type. The first type of error tends to have the worst effects on El Niño growth-phase predictions, whereas the latter often yields the largest negative effects on decaying-phase predictions. The evolution of prediction errors caused by NFSV-related errors exhibits prominent seasonality, with the fastest error growth in the spring and/or summer seasons; hence, these errors result in a significant SPB related to El Niño events. The linear counterpart of NFSVs, the (linear) forcing singular vector (FSV), induces a less significant SPB because it contains smaller prediction errors. Random errors cannot generate a SPB for El Niño events. These results show that the occurrence of an SPB is related to the spatial patterns of tendency errors. The NFSV tendency errors cause the most significant SPB for El Niño events. In addition, NFSVs often concentrate these large value errors in a few areas within the equatorial eastern and central-western Pacific, which likely represent those areas sensitive to El Niño predictions associated with model errors. Meanwhile, these areas are also exactly consistent with the sensitive areas related to initial errors determined by previous studies. This implies that additional observations in the sensitive areas would not only improve the accuracy of the initial field but also promote the reduction of model errors to greatly improve ENSO forecasts.

  15. On the feasibility of monitoring carbon monoxide in the lower troposphere from a constellation of northern hemisphere geostationary satellites: Global scale assimilation experiments (Part II)

    NASA Astrophysics Data System (ADS)

    Barré, Jérôme; Edwards, David; Worden, Helen; Arellano, Avelino; Gaubert, Benjamin; Da Silva, Arlindo; Lahoz, William; Anderson, Jeffrey

    2016-09-01

    This paper describes the second phase of an Observing System Simulation Experiment (OSSE) that utilizes the synthetic measurements from a constellation of satellites measuring atmospheric composition from geostationary (GEO) Earth orbit presented in part I of the study. Our OSSE is focused on carbon monoxide observations over North America, East Asia and Europe where most of the anthropogenic sources are located. Here we assess the impact of a potential GEO constellation on constraining northern hemisphere (NH) carbon monoxide (CO) using data assimilation. We show how cloud cover affects the GEO constellation data density with the largest cloud cover (i.e., lowest data density) occurring during Asian summer. We compare the modeled state of the atmosphere (Control Run), before CO data assimilation, with the known "true" state of the atmosphere (Nature Run) and show that our setup provides realistic atmospheric CO fields and emission budgets. Overall, the Control Run underestimates CO concentrations in the northern hemisphere, especially in areas close to CO sources. Assimilation experiments show that constraining CO close to the main anthropogenic sources significantly reduces errors in NH CO compared to the Control Run. We assess the changes in error reduction when only single satellite instruments are available as compared to the full constellation. We find large differences in how measurements for each continental scale observation system affect the hemispherical improvement in long-range transport patterns, especially due to seasonal cloud cover. A GEO constellation will provide the most efficient constraint on NH CO during winter when CO lifetime is longer and increments from data assimilation associated with source regions are advected further around the globe.

  16. On the Feasibility of Monitoring Carbon Monoxide in the Lower Troposphere from a Constellation of Northern Hemisphere Geostationary Satellites: Global Scale Assimilation Experiments (Part II)

    NASA Technical Reports Server (NTRS)

    Barre, Jerome; Edwards, David; Worden, Helen; Arellano, Avelino; Gaubert, Benjamin; Da Silva, Arlindo; Lahoz, William; Anderson, Jeffrey

    2016-01-01

    This paper describes the second phase of an Observing System Simulation Experiment (OSSE) that utilizes the synthetic measurements from a constellation of satellites measuring atmospheric composition from geostationary (GEO) Earth orbit presented in part I of the study. Our OSSE is focused on carbon monoxide observations over North America, East Asia and Europe where most of the anthropogenic sources are located. Here we assess the impact of a potential GEO constellation on constraining northern hemisphere (NH) carbon monoxide (CO) using data assimilation. We show how cloud cover affects the GEO constellation data density with the largest cloud cover (i.e., lowest data density) occurring during Asian summer. We compare the modeled state of the atmosphere (Control Run), before CO data assimilation, with the known 'true' state of the atmosphere (Nature Run) and show that our setup provides realistic atmospheric CO fields and emission budgets. Overall, the Control Run underestimates CO concentrations in the northern hemisphere, especially in areas close to CO sources. Assimilation experiments show that constraining CO close to the main anthropogenic sources significantly reduces errors in NH CO compared to the Control Run. We assess the changes in error reduction when only single satellite instruments are available as compared to the full constellation. We find large differences in how measurements for each continental scale observation system affect the hemispherical improvement in long-range transport patterns, especially due to seasonal cloud cover. A GEO constellation will provide the most efficient constraint on NH CO during winter when CO lifetime is longer and increments from data assimilation associated with source regions are advected further around the globe.

  17. Analysis of the Effect of UT1-UTC to High Precision Orbit

    NASA Astrophysics Data System (ADS)

    Shin, Dongseok; Kwak, Sunghee; Kim, Tag-Gon

    1999-12-01

As the spatial resolution of remote sensing satellites becomes higher, very accurate determination of the position of a LEO (Low Earth Orbit) satellite is in greater demand than ever. Non-symmetric Earth gravity is the major perturbation force on LEO satellites. Since the orbit propagation is performed in the celestial frame while Earth gravity is defined in the terrestrial frame, the coordinates of the satellite must be converted accurately from one frame to the other. Unless the coordinate conversion between the two frames is performed accurately, the orbit propagation calculates an incorrect Earth gravitational force at a given time instant and hence causes errors in orbit prediction. The coordinate conversion between the two frames involves precession, nutation, Earth rotation, and polar motion. Among these factors, the unpredictability and uncertainty of Earth rotation, expressed as UT1-UTC, is the largest error source. In this paper, the effect of UT1-UTC on the accuracy of LEO propagation is introduced, tested, and analyzed. Considering the maximum unpredictability of UT1-UTC, 0.9 seconds, the meaningful order of the non-spherical Earth harmonic functions is derived.
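
    A back-of-envelope sketch of why UT1-UTC matters for the gravity evaluation: an unmodeled offset rotates the terrestrial frame by ω_E·ΔUT1, which displaces the Earth-fixed satellite position by roughly r times that angle (the 700 km altitude below is an assumed example):

    ```python
    # Back-of-envelope sketch: an unmodeled offset dUT1 rotates the terrestrial
    # frame by dtheta = omega_E * dUT1, so the Earth-fixed position of a LEO
    # satellite (and the gravity field it "sees") is shifted by roughly r * dtheta.
    # Numbers are illustrative for a ~700 km altitude orbit.
    import math

    omega_e = 7.2921150e-5          # Earth rotation rate (rad/s)
    dut1    = 0.9                   # maximum unpredicted UT1-UTC offset (s)
    r       = 6378e3 + 700e3        # geocentric radius of a 700 km LEO (m)

    dtheta = omega_e * dut1         # frame rotation error (rad)
    shift  = r * dtheta             # equivalent horizontal position error (m)
    print(f"rotation error: {math.degrees(dtheta) * 3600:.1f} arcsec")
    print(f"equivalent position error at LEO radius: {shift:.0f} m")
    ```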

  18. Precision Orbit Derived Atmospheric Density: Development and Performance

    NASA Astrophysics Data System (ADS)

    McLaughlin, C.; Hiatt, A.; Lechtenberg, T.; Fattig, E.; Mehta, P.

    2012-09-01

    Precision orbit ephemerides (POE) are used to estimate atmospheric density along the orbits of CHAMP (Challenging Minisatellite Payload) and GRACE (Gravity Recovery and Climate Experiment). The densities are calibrated against accelerometer derived densities and considering ballistic coefficient estimation results. The 14-hour density solutions are stitched together using a linear weighted blending technique to obtain continuous solutions over the entire mission life of CHAMP and through 2011 for GRACE. POE derived densities outperform the High Accuracy Satellite Drag Model (HASDM), Jacchia 71 model, and NRLMSISE-2000 model densities when comparing cross correlation and RMS with accelerometer derived densities. Drag is the largest error source for estimating and predicting orbits for low Earth orbit satellites. This is one of the major areas that should be addressed to improve overall space surveillance capabilities; in particular, catalog maintenance. Generally, density is the largest error source in satellite drag calculations and current empirical density models such as Jacchia 71 and NRLMSISE-2000 have significant errors. Dynamic calibration of the atmosphere (DCA) has provided measurable improvements to the empirical density models and accelerometer derived densities of extremely high precision are available for a few satellites. However, DCA generally relies on observations of limited accuracy and accelerometer derived densities are extremely limited in terms of measurement coverage at any given time. The goal of this research is to provide an additional data source using satellites that have precision orbits available using Global Positioning System measurements and/or satellite laser ranging. These measurements strike a balance between the global coverage provided by DCA and the precise measurements of accelerometers. The temporal resolution of the POE derived density estimates is around 20-30 minutes, which is significantly worse than that of accelerometer derived density estimates. However, major variations in density are observed in the POE derived densities. These POE derived densities in combination with other data sources can be assimilated into physics based general circulation models of the thermosphere and ionosphere with the possibility of providing improved density forecasts for satellite drag analysis. POE derived density estimates were initially developed using CHAMP and GRACE data so comparisons could be made with accelerometer derived density estimates. This paper presents the results of the most extensive calibration of POE derived densities compared to accelerometer derived densities and provides the reasoning for selecting certain parameters in the estimation process. The factors taken into account for these selections are the cross correlation and RMS performance compared to the accelerometer derived densities and the output of the ballistic coefficient estimation that occurs simultaneously with the density estimation. This paper also presents the complete data set of CHAMP and GRACE results and shows that the POE derived densities match the accelerometer densities better than empirical models or DCA. This paves the way to expand the POE derived densities to include other satellites with quality GPS and/or satellite laser ranging observations.

  19. Global Marine Gravity and Bathymetry at 1-Minute Resolution

    NASA Astrophysics Data System (ADS)

    Sandwell, D. T.; Smith, W. H.

    2008-12-01

    We have developed global gravity and bathymetry grids at 1-minute resolution. Three approaches are used to reduce the error in the satellite-derived marine gravity anomalies. First, we have retracked the raw waveforms from the ERS-1 and Geosat/GM missions, resulting in improvements in range precision of 40% and 27%, respectively. Second, we have used the recently published EGM2008 global gravity model as a reference field to provide a seamless gravity transition from land to ocean. Third, we have used a biharmonic spline interpolation method to construct residual vertical deflection grids. Comparisons between shipboard gravity and the global gravity grid show errors ranging from 2.0 mGal in the Gulf of Mexico to 4.0 mGal in areas with rugged seafloor topography. The largest errors occur on the crests of narrow large seamounts. The bathymetry grid is based on prediction from satellite gravity and available ship soundings. Global soundings were assembled from a wide variety of sources including NGDC/GEODAS, NOAA Coastal Relief, CCOM, IFREMER, JAMSTEC, NSF Polar Programs, UKHO, LDEO, HIG, SIO and numerous miscellaneous contributions. The National Geospatial-Intelligence Agency and other volunteering hydrographic offices within the International Hydrographic Organization provided significant global shallow water (< 300 m) soundings derived from their nautical charts. All soundings were converted to a common format and were hand-edited in relation to a smooth bathymetric model. Land elevations and shoreline location are based on a combination of SRTM30, GTOPO30, and ICESat data. A new feature of the bathymetry grid is a matching grid of source identification numbers that enables one to establish the origin of the depth estimate in each grid cell. Both the gravity and bathymetry grids are freely available.

  20. A posteriori error estimates in voice source recovery

    NASA Astrophysics Data System (ADS)

    Leonov, A. S.; Sorokin, V. N.

    2017-12-01

    The inverse problem of voice source pulse recovery from a segment of a speech signal is under consideration. A special mathematical model that relates these quantities is used for the solution. A variational method of solving the inverse problem of voice source recovery for a new parametric class of sources, namely piecewise-linear sources (PWL-sources), is proposed. Also, a technique for a posteriori numerical error estimation of the obtained solutions is presented. A computer study of the adequacy of the adopted speech production model with PWL-sources is performed by solving the inverse problems for various types of voice signals, together with a corresponding study of the a posteriori error estimates. Numerical experiments for speech signals show satisfactory properties of the proposed a posteriori error estimates, which represent upper bounds on the possible errors in solving the inverse problem. The estimate of the most probable error in determining the source-pulse shapes is about 7-8% for the investigated speech material. It is noted that the a posteriori error estimates can be used as a quality criterion for the obtained voice source pulses in application to speaker recognition.

  1. An error covariance model for sea surface topography and velocity derived from TOPEX/POSEIDON altimetry

    NASA Technical Reports Server (NTRS)

    Tsaoussi, Lucia S.; Koblinsky, Chester J.

    1994-01-01

    In order to facilitate the use of satellite-derived sea surface topography and velocity in oceanographic models, a methodology is presented for deriving the total error covariance and its geographic distribution from TOPEX/POSEIDON measurements. The model is formulated using a parametric model fit to the altimeter range observations. The topography and velocity are modeled with spherical harmonic expansions whose coefficients are found through optimal adjustment to the altimeter range residuals using Bayesian statistics. All other parameters, including the orbit, geoid, surface models, and range corrections, are provided as unadjusted parameters. The maximum likelihood estimates and errors are derived from the probability density function of the altimeter range residuals conditioned with a priori information. Estimates of model errors for the unadjusted parameters are obtained from the TOPEX/POSEIDON postlaunch verification results and the error covariances for the orbit and the geoid, except for the ocean tides. The error in the ocean tides is modeled, first, as the difference between two global tide models and, second, as the correction to the present tide model, the correction derived from the TOPEX/POSEIDON data. A formal error covariance propagation scheme is used to derive the total error. Our global total error estimate for the TOPEX/POSEIDON topography relative to the geoid for one 10-day period is found to be 11 cm RMS. When the error in the geoid is removed, thereby providing an estimate of the time dependent error, the uncertainty in the topography is 3.5 cm root mean square (RMS). This level of accuracy is consistent with direct comparisons of TOPEX/POSEIDON altimeter heights with tide gauge measurements at 28 stations. In addition, the error correlation length scales are derived globally in both east-west and north-south directions, which should prove useful for data assimilation. The largest error correlation length scales are found in the tropics. Errors in the velocity field are smallest in midlatitude regions. For both variables the largest errors are caused by uncertainty in the geoid. More accurate representations of the geoid await a dedicated geopotential satellite mission. Substantial improvements in the accuracy of ocean tide models are expected in the very near future from research with TOPEX/POSEIDON data.
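
    The formal error covariance propagation mentioned above can be illustrated with a small sketch. The functional form and the toy numbers are ours, not the paper's; the idea is simply that adjusted-parameter covariances and mapped unadjusted-parameter covariances (orbit, geoid, tides) add through their design matrices.

```python
# Illustrative sketch of formal covariance propagation for a derived field.  For a
# linearized model h = A x + e, the covariance of the adjusted estimate and the mapped
# covariances of unadjusted parameters (orbit, geoid, tides, ...) combine as
#   C_total = A C_adjusted A^T + sum_k B_k C_k B_k^T.
import numpy as np

def propagate_covariance(A, C_adjusted, mapped_terms):
    """A: design matrix for adjusted parameters; mapped_terms: list of (B_k, C_k)."""
    C_total = A @ C_adjusted @ A.T
    for B_k, C_k in mapped_terms:
        C_total += B_k @ C_k @ B_k.T
    return C_total

# Toy example: 3 grid points, 2 adjusted harmonic coefficients, 1 unadjusted orbit bias.
A = np.array([[1.0, 0.2], [1.0, 0.5], [1.0, 0.9]])
C_adj = np.diag([4.0, 1.0])                    # cm^2
B_orbit = np.ones((3, 1))
C_orbit = np.array([[2.0]])                    # cm^2
C_total = propagate_covariance(A, C_adj, [(B_orbit, C_orbit)])
print(np.sqrt(np.diag(C_total)))               # per-point topography error, cm
```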

  2. Creating Illusions of Knowledge: Learning Errors that Contradict Prior Knowledge

    ERIC Educational Resources Information Center

    Fazio, Lisa K.; Barber, Sarah J.; Rajaram, Suparna; Ornstein, Peter A.; Marsh, Elizabeth J.

    2013-01-01

    Most people know that the Pacific is the largest ocean on Earth and that Edison invented the light bulb. Our question is whether this knowledge is stable, or if people will incorporate errors into their knowledge bases, even if they have the correct knowledge stored in memory. To test this, we asked participants general-knowledge questions 2 weeks…

  3. Modeling suspended sediment sources and transport in the Ishikari River Basin, Japan using SPARROW

    NASA Astrophysics Data System (ADS)

    Duan, W.; He, B.; Takara, K.; Luo, P.; Nover, D.; Hu, M.

    2014-10-01

    It is important to understand the mechanisms that control suspended sediment (SS) fate and transport in rivers as high suspended sediment loads have significant impacts on riverine hydroecology. In this study, the watershed model SPARROW (SPAtially Referenced Regression on Watershed Attributes) was applied to estimate the sources and transport of SS in surface waters of the Ishikari River Basin (14 330 km2), the largest watershed on Hokkaido Island, Japan. The final SPARROW model has four source variables (developing lands, forest lands, agricultural lands, and stream channels), three landscape delivery variables (slope, soil permeability, and precipitation), two in-stream loss coefficients, for small streams (drainage area < 200 km2) and large streams, and a reservoir attenuation term. The model was calibrated using measurements of SS from 31 monitoring sites together with mixed spatial data on topography, soils and stream hydrography. Calibration results explain approximately 95.96% (R2) of the spatial variability in the natural logarithm of the mean annual SS flux (kg km-2 yr-1) and display relatively small prediction errors at the 31 monitoring stations. Results show that developing land is associated with the largest sediment yield, at around 1006.27 kg km-2 yr-1, followed by agricultural land (234.21 kg km-2 yr-1). Estimation of incremental yields shows that 35.11% comes from agricultural lands, 23.42% from forested lands, 22.91% from developing lands, and 18.56% from stream channels. The results of this study improve our understanding of sediment production and transport in the Ishikari River Basin in general, which will benefit both the scientific and the management community in safeguarding water resources.
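
    A compact sketch of a SPARROW-style reach equation may clarify how source, delivery, and attenuation terms combine. The functional form below is the generic SPARROW structure (exponential landscape delivery, first-order in-stream and reservoir attenuation); apart from the two yields quoted above, every coefficient and reach value is invented for illustration and is not from the calibrated model.

```python
# Simplified sketch of a SPARROW-style reach load (illustrative coefficients only).
import numpy as np

def reach_load(sources, betas, deliveries, alphas, stream_loss, reservoir_loss, upstream_load):
    """Mean annual SS flux leaving one reach (kg/yr, schematically).

    sources:     source-variable values (e.g. areas of each land use, km^2)
    betas:       source coefficients (kg km^-2 yr^-1)
    deliveries:  landscape delivery variables (slope, permeability, precipitation)
    alphas:      delivery coefficients
    stream_loss, reservoir_loss: first-order in-stream / reservoir attenuation terms
    """
    delivery = np.exp(sum(alphas[k] * deliveries[k] for k in deliveries))
    incremental = sum(betas[k] * sources[k] for k in sources) * delivery
    return (upstream_load + incremental) * np.exp(-stream_loss) / (1.0 + reservoir_loss)

load = reach_load(
    sources={"developing": 2.0, "forest": 50.0, "agriculture": 20.0},      # km^2 (made up)
    betas={"developing": 1006.27, "forest": 60.0, "agriculture": 234.21},  # forest value made up
    deliveries={"slope": 0.1, "permeability": -0.2, "precipitation": 0.3},
    alphas={"slope": 0.5, "permeability": 0.4, "precipitation": 0.6},
    stream_loss=0.05, reservoir_loss=0.0, upstream_load=1.0e4,
)
```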

  4. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    PubMed Central

    Rettmann, Maryam E.; Holmes, David R.; Kwartowitz, David M.; Gunawan, Mia; Johnson, Susan B.; Camp, Jon J.; Cameron, Bruce M.; Dalegrave, Charles; Kolasa, Mark W.; Packer, Douglas L.; Robb, Richard A.

    2014-01-01

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved landmark-only registration provided the noise in the surface points is not excessively high. Increased variability on the landmark fiducials resulted in increased registration errors; however, refinement of the initial landmark registration by the surface-based algorithm can compensate for small initial misalignments. The surface-based registration algorithm is quite robust to noise on the surface points and continues to improve landmark registration even at high levels of noise on the surface points. Both the canine and patient studies also demonstrate that combined landmark and surface registration has lower errors than landmark registration alone. Conclusions: In this work, we describe a model for evaluating the impact of noise variability on the input parameters of a registration algorithm in the context of cardiac ablation therapy. The model can be used to predict both registration error as well as assess which inputs have the largest effect on registration accuracy. PMID:24506630
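
    The Monte Carlo procedure described above can be sketched in a few lines: perturb the landmark fiducials with Gaussian noise, recompute a rigid point-based registration, and accumulate the target registration error. The sketch below uses a standard Kabsch/Procrustes solution and illustrative noise levels; it is not the study's code, and the geometry and variances are assumptions.

```python
# Hedged sketch of a Monte Carlo landmark-registration error experiment.
import numpy as np

def rigid_register(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst (Kabsch)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    return R, dst_c - R @ src_c

rng = np.random.default_rng(0)
landmarks = rng.uniform(-40.0, 40.0, size=(4, 3))     # mm, fiducials on the atrial model
target = np.array([10.0, 5.0, -20.0])                 # mm, a physical target point
sigma = 2.0                                           # mm, fiducial localization noise

tre = []
for _ in range(1000):
    noisy = landmarks + rng.normal(0.0, sigma, landmarks.shape)
    R, t = rigid_register(noisy, landmarks)           # register noisy picks to the model
    tre.append(np.linalg.norm((R @ target + t) - target))
print(f"mean TRE: {np.mean(tre):.2f} mm")
```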

  5. Quantitative modeling of the accuracy in registering preoperative patient-specific anatomic models into left atrial cardiac ablation procedures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rettmann, Maryam E., E-mail: rettmann.maryam@mayo.edu; Holmes, David R.; Camp, Jon J.

    2014-02-15

    Purpose: In cardiac ablation therapy, accurate anatomic guidance is necessary to create effective tissue lesions for elimination of left atrial fibrillation. While fluoroscopy, ultrasound, and electroanatomic maps are important guidance tools, they lack information regarding detailed patient anatomy which can be obtained from high resolution imaging techniques. For this reason, there has been significant effort in incorporating detailed, patient-specific models generated from preoperative imaging datasets into the procedure. Both clinical and animal studies have investigated registration and targeting accuracy when using preoperative models; however, the effect of various error sources on registration accuracy has not been quantitatively evaluated. Methods: Data from phantom, canine, and patient studies are used to model and evaluate registration accuracy. In the phantom studies, data are collected using a magnetically tracked catheter on a static phantom model. Monte Carlo simulation studies were run to evaluate both baseline errors as well as the effect of different sources of error that would be present in a dynamic in vivo setting. Error is simulated by varying the variance parameters on the landmark fiducial, physical target, and surface point locations in the phantom simulation studies. In vivo validation studies were undertaken in six canines in which metal clips were placed in the left atrium to serve as ground truth points. A small clinical evaluation was completed in three patients. Landmark-based and combined landmark and surface-based registration algorithms were evaluated in all studies. In the phantom and canine studies, both target registration error and point-to-surface error are used to assess accuracy. In the patient studies, no ground truth is available and registration accuracy is quantified using point-to-surface error only. Results: The phantom simulation studies demonstrated that combined landmark and surface-based registration improved landmark-only registration provided the noise in the surface points is not excessively high. Increased variability on the landmark fiducials resulted in increased registration errors; however, refinement of the initial landmark registration by the surface-based algorithm can compensate for small initial misalignments. The surface-based registration algorithm is quite robust to noise on the surface points and continues to improve landmark registration even at high levels of noise on the surface points. Both the canine and patient studies also demonstrate that combined landmark and surface registration has lower errors than landmark registration alone. Conclusions: In this work, we describe a model for evaluating the impact of noise variability on the input parameters of a registration algorithm in the context of cardiac ablation therapy. The model can be used to predict both registration error as well as assess which inputs have the largest effect on registration accuracy.

  6. Analysis of the load selection on the error of source characteristics identification for an engine exhaust system

    NASA Astrophysics Data System (ADS)

    Zheng, Sifa; Liu, Haitao; Dan, Jiabi; Lian, Xiaomin

    2015-05-01

    The linear time-invariant assumption for the determination of acoustic source characteristics, the source strength and the source impedance, in the frequency domain has been proven reasonable in the design of an exhaust system. Different methods have been proposed for their identification, and the multi-load method is widely used for its convenience, varying the load number and impedance. Theoretical error analysis has rarely been addressed, although previous results have shown that an overdetermined set of open pipes can reduce the identification error. This paper contributes a theoretical error analysis for the load selection. The relationships between the error in the identification of source characteristics and the load selection were analysed. A general linear time-invariant model was built based on the four-load method. To analyse the error of the source impedance, an error estimation function was proposed. The dispersion of the source pressure was obtained by an inverse calculation as an indicator of the accuracy of the results. It was found that, for a certain load length, the load resistance at frequencies corresponding to odd multiples of a quarter wavelength results in peaks and in the maximum error for source impedance identification. Therefore, load impedances in the frequency ranges around odd multiples of a quarter wavelength should not be used for source impedance identification. If the selected loads have more similar resistance values (i.e., of the same order of magnitude), the identification error of the source impedance can be effectively reduced.
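
    The four-load identification can be viewed as an overdetermined linear system in the source strength and source impedance. The sketch below uses our own notation and a synthetic example; it illustrates the common least-squares formulation rather than the paper's specific model or error estimation function.

```python
# With the linear time-invariant source model p_L = p_s * Z_L / (Z_s + Z_L), each load
# measurement gives one linear equation in the unknowns (p_s, Z_s):
#   Z_L * p_s - p_L * Z_s = p_L * Z_L.
# Four (or more) loads give an overdetermined system solved in a least-squares sense.
import numpy as np

def identify_source(Z_loads, p_loads):
    """Z_loads, p_loads: complex arrays of load impedances and measured load pressures."""
    A = np.column_stack([Z_loads, -p_loads])
    b = p_loads * Z_loads
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x[0], x[1]                       # p_s, Z_s

# Synthetic check: recover a known source from four slightly noisy load measurements.
p_s_true, Z_s_true = 2.0 + 0.5j, 1.5 - 0.8j
Z_loads = np.array([0.5 + 0.1j, 1.0 - 0.3j, 2.0 + 0.6j, 3.5 - 0.2j])
p_loads = p_s_true * Z_loads / (Z_s_true + Z_loads)
p_loads = p_loads + 1e-3 * (np.random.randn(4) + 1j * np.random.randn(4))
print(identify_source(Z_loads, p_loads))
```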

  7. A Wavelet based Suboptimal Kalman Filter for Assimilation of Stratospheric Chemical Tracer Observations

    NASA Technical Reports Server (NTRS)

    Tangborn, Andrew; Auger, Ludovic

    2003-01-01

    A suboptimal Kalman filter system which evolves error covariances in terms of a truncated set of wavelet coefficients has been developed for the assimilation of chemical tracer observations of CH4. This scheme projects the discretized covariance propagation equations and covariance matrix onto an orthogonal set of compactly supported wavelets. Wavelet representation is localized in both location and scale, which allows for efficient representation of the inherently anisotropic structure of the error covariances. The truncation is carried out in such a way that the resolution of the error covariance is reduced only in the zonal direction, where gradients are smaller. Assimilation experiments lasting 24 days and using different degrees of truncation were carried out. These reduced the covariance size by 90%, 97% and 99% and the computational cost of covariance propagation by 80%, 93% and 96%, respectively. The differences in both the error covariance and the tracer field between the truncated and full systems over this period were found not to grow in the first case and to grow relatively slowly in the latter two cases. The largest errors in the tracer fields were found to occur in regions of the largest zonal gradients in the constituent field. These results indicate that propagation of error covariances for a global two-dimensional data assimilation system is currently feasible. Recommendations for further reduction in computational cost are made with the goal of extending this technique to three-dimensional global assimilation systems.

  8. Application of Intra-Oral Dental Scanners in the Digital Workflow of Implantology

    PubMed Central

    van der Meer, Wicher J.; Andriessen, Frank S.; Wismeijer, Daniel; Ren, Yijin

    2012-01-01

    Intra-oral scanners will play a central role in digital dentistry in the near future. In this study the accuracy of three intra-oral scanners was compared. Materials and methods: A master model made of stone was fitted with three high-precision manufactured PEEK cylinders and scanned with three intra-oral scanners: the CEREC (Sirona), the iTero (Cadent) and the Lava COS (3M). The digital files were imported into software, and the distance between the centres of the cylinders and the angulation between the cylinders were assessed. These values were compared to the measurements made on a high accuracy 3D scan of the master model. Results: The distance errors were the smallest and most consistent for the Lava COS. The distance errors for the Cerec were the largest and least consistent. All the angulation errors were small. Conclusions: The Lava COS in combination with a high accuracy scanning protocol resulted in the smallest and most consistent errors of all three scanners tested when considering mean distance errors in full arch impressions, both in absolute values and in consistency, for both measured distances. For the mean angulation errors, the Lava COS had the smallest errors between cylinders 1–2 and the largest errors between cylinders 1–3, although the absolute difference with the smallest mean value (iTero) was very small (0.0529°). An expected increase in distance and/or angular errors over the length of the arch due to an accumulation of registration errors of the patched 3D surfaces could be observed in this study design, but the effects were not statistically significant. Clinical relevance: For making impressions of implant cases for digital workflows, the most accurate scanner with the scanning protocol that will ensure the most accurate digital impression should be used. In our study model, that was the Lava COS with the high accuracy scanning protocol. PMID:22937030

  9. Predictors of Errors of Novice Java Programmers

    ERIC Educational Resources Information Center

    Bringula, Rex P.; Manabat, Geecee Maybelline A.; Tolentino, Miguel Angelo A.; Torres, Edmon L.

    2012-01-01

    This descriptive study determined which of the sources of errors would predict the errors committed by novice Java programmers. Descriptive statistics revealed that the respondents perceived that they committed the identified eighteen errors infrequently. Thought error was perceived to be the main source of error during the laboratory programming…

  10. Characterising large scenario earthquakes and their influence on NDSHA maps

    NASA Astrophysics Data System (ADS)

    Magrin, Andrea; Peresan, Antonella; Panza, Giuliano F.

    2016-04-01

    The neo-deterministic approach to seismic zoning, NDSHA, relies on physically sound modelling of ground shaking from a large set of credible scenario earthquakes, which can be defined based on seismic history and seismotectonics, as well as incorporating information from a wide set of geological and geophysical data (e.g. morphostructural features and present day deformation processes identified by Earth observations). NDSHA is based on the calculation of complete synthetic seismograms; hence it does not make use of empirical attenuation models (i.e. ground motion prediction equations). From the set of synthetic seismograms, maps of seismic hazard that describe the maximum of different ground shaking parameters at the bedrock can be produced. As a rule, NDSHA defines the hazard as the envelope of ground shaking at the site, computed from all of the defined seismic sources; accordingly, the simplest outcome of this method is a map where the maximum of a given seismic parameter is associated to each site. In this way, the standard NDSHA maps permit accounting for the largest observed or credible earthquake sources identified in the region in a quite straightforward manner. This study aims to assess the influence of unavoidable uncertainties in the characterisation of large scenario earthquakes on the NDSHA estimates. The treatment of uncertainties is performed by sensitivity analyses for key modelling parameters and accounts for the uncertainty in the prediction of fault radiation and in the use of Green's functions for a given medium. Results from sensitivity analyses with respect to the definition of possible seismic sources are discussed. A key parameter is the magnitude of seismic sources used in the simulation, which is based on information from the earthquake catalogue, seismogenic zones and seismogenic nodes. The largest part of the existing Italian catalogues is based on macroseismic intensities; a rough estimate of the error in peak values of ground motion is therefore a factor of two, intrinsic to MCS and other discrete scales. A simple test supports this hypothesis: an increase of 0.5 in the magnitude, i.e. one degree in epicentral MCS, of all sources used in the national scale seismic zoning produces a doubling of the maximum ground motion. The analysis of uncertainty in ground motion maps, due to the catalogue random errors in magnitude and localization, shows a non-uniform distribution of ground shaking uncertainty. The available information from catalogues of past events, which is not complete and may well not be representative of future earthquakes, can be substantially completed using independent indicators of the seismogenic potential of a given area, such as active faulting data and the seismogenic nodes.

  11. Recent Improvements in Retrieving Near-Surface Air Temperature and Humidity Using Microwave Remote Sensing

    NASA Technical Reports Server (NTRS)

    Roberts, J. Brent

    2010-01-01

    Detailed studies of the energy and water cycles require accurate estimation of the turbulent fluxes of moisture and heat across the atmosphere-ocean interface at regional to basin scale. Providing estimates of these latent and sensible heat fluxes over the global ocean necessitates the use of satellite or reanalysis-based estimates of near surface variables. Recent studies have shown that errors in the surface (10 meter) estimates of humidity and temperature are currently the largest sources of uncertainty in the production of turbulent fluxes from satellite observations. Therefore, emphasis has been placed on reducing the systematic errors in the retrieval of these parameters from microwave radiometers. This study discusses recent improvements in the retrieval of air temperature and humidity through improvements in the choice of algorithms (linear vs. nonlinear) and the choice of microwave sensors. Particular focus is placed on improvements using a neural network approach with a single sensor (Special Sensor Microwave/Imager) and the use of combined sensors from the NASA AQUA satellite platform. The latter algorithm utilizes the unique sampling available on AQUA from the Advanced Microwave Scanning Radiometer (AMSR-E) and the Advanced Microwave Sounding Unit (AMSU-A). Current estimates of uncertainty in the near-surface humidity and temperature from single and multi-sensor approaches are discussed and used to estimate errors in the turbulent fluxes.

  12. RCSLenS: The Red Cluster Sequence Lensing Survey

    NASA Astrophysics Data System (ADS)

    Hildebrandt, H.; Choi, A.; Heymans, C.; Blake, C.; Erben, T.; Miller, L.; Nakajima, R.; van Waerbeke, L.; Viola, M.; Buddendiek, A.; Harnois-Déraps, J.; Hojjati, A.; Joachimi, B.; Joudaki, S.; Kitching, T. D.; Wolf, C.; Gwyn, S.; Johnson, N.; Kuijken, K.; Sheikhbahaee, Z.; Tudorica, A.; Yee, H. K. C.

    2016-11-01

    We present the Red Cluster Sequence Lensing Survey (RCSLenS), an application of the methods developed for the Canada-France-Hawaii Telescope Lensing Survey (CFHTLenS) to the ˜785 deg2, multi-band imaging data of the Red-sequence Cluster Survey 2. This project represents the largest public, sub-arcsecond seeing, multi-band survey to date that is suited for weak gravitational lensing measurements. With a careful assessment of systematic errors in shape measurements and photometric redshifts, we extend the use of this data set to allow cross-correlation analyses between weak lensing observables and other data sets. We describe the imaging data, the data reduction, masking, multi-colour photometry, photometric redshifts, shape measurements, tests for systematic errors, and a blinding scheme to allow for more objective measurements. In total, we analyse 761 pointings with r-band coverage, which constitutes our lensing sample. Residual large-scale B-mode systematics prevent the use of this shear catalogue for cosmic shear science. The effective number density of lensing sources over an unmasked area of 571.7 deg2 and down to a magnitude limit of r ˜ 24.5 is 8.1 galaxies per arcmin2 (weighted: 5.5 arcmin-2) distributed over 14 patches on the sky. Photometric redshifts based on four-band griz data are available for 513 pointings covering an unmasked area of 383.5 deg2. We present weak lensing mass reconstructions of some example clusters as well as the full survey representing the largest areas that have been mapped in this way. All our data products are publicly available through Canadian Astronomy Data Centre at http://www.cadc-ccda.hia-iha.nrc-cnrc.gc.ca/en/community/rcslens/query.html in a format very similar to the CFHTLenS data release.

  13. Randoms Counter Analysis

    NASA Astrophysics Data System (ADS)

    Hensley, Winston; Giovanetti, Kevin

    2008-10-01

    A 1 ppm precision measurement of the muon lifetime is being conducted by the MULAN collaboration. The reason for this new measurement lies in recent advances in theory that have reduced the uncertainty in calculating the Fermi Coupling Constant from the measured lifetime to a few tenths of a ppm. The largest uncertainty is now experimental. To achieve a 1 ppm level of precision it is necessary to control all sources of systematic error and to understand their influences on the lifetime measurement. James Madison University is contributing by examining the response of the timing system to uncorrelated events (randoms). A radioactive source was placed in front of paired detectors similar to those in the main experiment. These detectors were integrated in an identical fashion into the data acquisition and measurement system, and data from them were recorded during the entire experiment. The pair was placed in a shielded enclosure away from the main experiment to minimize interference. The data from these detectors should have a flat time spectrum as the decay of a radioactive source is a random event and has no time correlation. Thus the spectrum can be used as an important diagnostic in studying the method of determining event times and timing system performance.

  14. Design of the first optical system for real-time tomographic holography (RTTH)

    NASA Astrophysics Data System (ADS)

    Galeotti, John M.; Siegel, Mel; Rallison, Richard D.; Stetten, George

    2008-08-01

    The design of the first Real-Time-Tomographic-Holography (RTTH) optical system for augmented-reality applications is presented. RTTH places a viewpoint-independent real-time (RT) virtual image (VI) of an object into its actual location, enabling natural hand-eye coordination to guide invasive procedures, without requiring tracking or a head-mounted device. The VI is viewed through a narrow-band Holographic Optical Element (HOE) with built-in power that generates the largest possible near-field, in-situ VI from a small display chip without noticeable parallax error or obscuring direct view of the physical world. Rigidly fixed upon a medical-ultrasound probe, RTTH could show the scan in its actual location inside the patient, because the VI would move with the probe. We designed the image source along with the system optics, allowing us to ignore both planar geometric distortions and field curvature, compensated respectively by using RT pre-processing software and by attaching a custom-surfaced fiber-optic faceplate (FOFP) to our image source. Focus in our fast, non-axial system was achieved by placing correcting lenses near the FOFP and custom-optically-fabricating our volume-phase HOE using a recording beam that was specially shaped by extra lenses. By simultaneously simulating and optimizing the system's playback performance across variations in both the total playback and HOE-recording optical systems, we derived and built a design that projects a 104x112 mm planar VI 1 m from the HOE using a laser-illuminated 19x16 mm LCD+FOFP image source. The VI appeared fixed in space and well focused. Viewpoint-induced location errors were <3 mm, and unexpected first-order astigmatism produced 3 cm (3% of 1 m) ambiguity in depth, typically unnoticed by human observers.

  15. Ball bearing vibrations amplitude modeling and test comparisons

    NASA Technical Reports Server (NTRS)

    Hightower, Richard A., III; Bailey, Dave

    1995-01-01

    Bearings generate disturbances that, when combined with structural gains of a momentum wheel, contribute to induced vibration in the wheel. The frequencies generated by a ball bearing are defined by the bearing's geometry and defects. The amplitudes at these frequencies are dependent upon the actual geometry variations from perfection; therefore, a geometrically perfect bearing will produce no amplitudes at the kinematic frequencies that the design generates. Because perfect geometry can only be approached, emitted vibrations do occur. The most significant vibration is at the spin frequency and can be balanced out in the build process. Other frequencies' amplitudes, however, cannot be balanced out. Momentum wheels are usually the single largest source of vibrations in a spacecraft and can contribute to pointing inaccuracies if emitted vibrations ring the structure or are in the high-gain bandwidth of a sensitive pointing control loop. It is therefore important to be able to provide a priori knowledge of possible amplitudes that are singular in source or are a result of interacting defects that do not reveal themselves in normal frequency prediction equations. This paper will describe the computer model that provides for the incorporation of bearing geometry errors and then develops an estimation of actual amplitudes and frequencies. Test results were correlated with the model. A momentum wheel was producing an unacceptable 74 Hz amplitude. The model was used to simulate geometry errors and proved successful in identifying a cause that was verified when the parts were inspected.
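
    For reference, the kinematic frequencies mentioned above follow from standard ball-bearing geometry relations. The sketch below lists them with hypothetical bearing parameters; the amplitude model described in the paper goes well beyond these frequency formulas.

```python
# Standard kinematic (defect) frequencies for a ball bearing.  Symbols: n balls,
# shaft frequency fs (Hz), ball diameter d, pitch diameter D, contact angle phi.
import math

def bearing_frequencies(n, fs, d, D, phi_deg=0.0):
    c = (d / D) * math.cos(math.radians(phi_deg))
    return {
        "FTF":  0.5 * fs * (1.0 - c),                 # cage (fundamental train) frequency
        "BPFO": 0.5 * n * fs * (1.0 - c),             # ball pass frequency, outer race
        "BPFI": 0.5 * n * fs * (1.0 + c),             # ball pass frequency, inner race
        "BSF":  0.5 * (D / d) * fs * (1.0 - c * c),   # ball spin frequency
    }

# Example: a hypothetical 9-ball bearing spinning at 100 Hz.
print(bearing_frequencies(n=9, fs=100.0, d=7.9, D=38.5, phi_deg=12.0))
```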

  16. Assessing a local ensemble Kalman filter: perfect model experiments with the National Centers for Environmental Prediction global model

    NASA Astrophysics Data System (ADS)

    Szunyogh, Istvan; Kostelich, Eric J.; Gyarmati, G.; Patil, D. J.; Hunt, Brian R.; Kalnay, Eugenia; Ott, Edward; Yorke, James A.

    2005-08-01

    The accuracy and computational efficiency of the recently proposed local ensemble Kalman filter (LEKF) data assimilation scheme are investigated on a state-of-the-art operational numerical weather prediction model using simulated observations. The model selected for this purpose is the T62 horizontal- and 28-level vertical-resolution version of the Global Forecast System (GFS) of the National Centers for Environmental Prediction. The performance of the data assimilation system is assessed for different configurations of the LEKF scheme. It is shown that a modest-size (40-member) ensemble is sufficient to track the evolution of the atmospheric state with high accuracy. For this ensemble size, the computational time per analysis is less than 9 min on a cluster of PCs. The analyses are extremely accurate in the mid-latitude storm track regions. The largest analysis errors, which are typically much smaller than the observational errors, occur where parametrized physical processes play important roles. Because these are also the regions where model errors are expected to be the largest, limitations of a real-data implementation of the ensemble-based Kalman filter may be easily mistaken for model errors. In light of these results, the importance of testing the ensemble-based Kalman filter data assimilation systems on simulated observations is stressed.

  17. Introduction to CAUSES: Description of Weather and Climate Models and Their Near-Surface Temperature Errors in 5 day Hindcasts Near the Southern Great Plains

    DOE PAGES

    Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.; ...

    2018-02-16

    We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.

  18. Introduction to CAUSES: Description of Weather and Climate Models and Their Near-Surface Temperature Errors in 5 day Hindcasts Near the Southern Great Plains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.

    We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.

  19. Introduction to CAUSES: Description of Weather and Climate Models and Their Near-Surface Temperature Errors in 5 day Hindcasts Near the Southern Great Plains

    NASA Astrophysics Data System (ADS)

    Morcrette, C. J.; Van Weverberg, K.; Ma, H.-Y.; Ahlgrimm, M.; Bazile, E.; Berg, L. K.; Cheng, A.; Cheruy, F.; Cole, J.; Forbes, R.; Gustafson, W. I.; Huang, M.; Lee, W.-S.; Liu, Y.; Mellul, L.; Merryfield, W. J.; Qian, Y.; Roehrig, R.; Wang, Y.-C.; Xie, S.; Xu, K.-M.; Zhang, C.; Klein, S.; Petch, J.

    2018-03-01

    We introduce the Clouds Above the United States and Errors at the Surface (CAUSES) project with its aim of better understanding the physical processes leading to warm screen temperature biases over the American Midwest in many numerical models. In this first of four companion papers, 11 different models, from nine institutes, perform a series of 5 day hindcasts, each initialized from reanalyses. After describing the common experimental protocol and detailing each model configuration, a gridded temperature data set is derived from observations and used to show that all the models have a warm bias over parts of the Midwest. Additionally, a strong diurnal cycle in the screen temperature bias is found in most models. In some models the bias is largest around midday, while in others it is largest during the night. At the Department of Energy Atmospheric Radiation Measurement Southern Great Plains (SGP) site, the model biases are shown to extend several kilometers into the atmosphere. Finally, to provide context for the companion papers, in which observations from the SGP site are used to evaluate the different processes contributing to errors there, it is shown that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP. This suggests that conclusions drawn from detailed evaluation of models using instruments located at SGP will be representative of errors that are prevalent over a larger spatial scale.

  20. Creating illusions of knowledge: learning errors that contradict prior knowledge.

    PubMed

    Fazio, Lisa K; Barber, Sarah J; Rajaram, Suparna; Ornstein, Peter A; Marsh, Elizabeth J

    2013-02-01

    Most people know that the Pacific is the largest ocean on Earth and that Edison invented the light bulb. Our question is whether this knowledge is stable, or if people will incorporate errors into their knowledge bases, even if they have the correct knowledge stored in memory. To test this, we asked participants general-knowledge questions 2 weeks before they read stories that contained errors (e.g., "Franklin invented the light bulb"). On a later general-knowledge test, participants reproduced story errors despite previously answering the questions correctly. This misinformation effect was found even for questions that were answered correctly on the initial test with the highest level of confidence. Furthermore, prior knowledge offered no protection against errors entering the knowledge base; the misinformation effect was equivalent for previously known and unknown facts. Errors can enter the knowledge base even when learners have the knowledge necessary to catch the errors.

  1. A statistical study of radio-source structure effects on astrometric very long baseline interferometry observations

    NASA Technical Reports Server (NTRS)

    Ulvestad, J. S.

    1989-01-01

    Errors from a number of sources in astrometric very long baseline interferometry (VLBI) have been reduced in recent years through a variety of methods of calibration and modeling. Such reductions have led to a situation in which the extended structure of the natural radio sources used in VLBI is a significant error source in the effort to improve the accuracy of the radio reference frame. In the past, work has been done on individual radio sources to establish the magnitude of the errors caused by their particular structures. The results of calculations on 26 radio sources are reported in which an effort is made to determine the typical delay and delay-rate errors for a number of sources having different types of structure. It is found that for single observations of the types of radio sources present in astrometric catalogs, group-delay and phase-delay scatter in the 50 to 100 psec range due to source structure can be expected at 8.4 GHz on the intercontinental baselines available in the Deep Space Network (DSN). Delay-rate scatter of approx. 5 × 10^-15 sec/sec (or approx. 0.002 mm/sec) is also expected. If such errors mapped directly into source position errors, they would correspond to position uncertainties of approx. 2 to 5 nrad, similar to the best position determinations in the current JPL VLBI catalog. With the advent of wider bandwidth VLBI systems on the large DSN antennas, the system noise will be low enough so that the structure-induced errors will be a significant part of the error budget. Several possibilities for reducing the structure errors are discussed briefly, although it is likely that considerable effort will have to be devoted to the structure problem in order to reduce the typical error by a factor of two or more.

  2. Polarized point sources in the LOFAR Two-meter Sky Survey: A preliminary catalog

    NASA Astrophysics Data System (ADS)

    Van Eck, C. L.; Haverkorn, M.; Alves, M. I. R.; Beck, R.; Best, P.; Carretti, E.; Chyży, K. T.; Farnes, J. S.; Ferrière, K.; Hardcastle, M. J.; Heald, G.; Horellou, C.; Iacobelli, M.; Jelić, V.; Mulcahy, D. D.; O'Sullivan, S. P.; Polderman, I. M.; Reich, W.; Riseley, C. J.; Röttgering, H.; Schnitzeler, D. H. F. M.; Shimwell, T. W.; Vacca, V.; Vink, J.; White, G. J.

    2018-06-01

    The polarization properties of radio sources at very low frequencies (<200 MHz) have not been widely measured, but the new generation of low-frequency radio telescopes, including the Low Frequency Array (LOFAR: a Square Kilometre Array Low pathfinder), now gives us the opportunity to investigate these properties. In this paper, we report on the preliminary development of a data reduction pipeline to carry out polarization processing and Faraday tomography for data from the LOFAR Two-meter Sky Survey (LOTSS) and present the results of this pipeline from the LOTSS preliminary data release region (10h45m-15h30m right ascension, 45°-57° declination, 570 square degrees). We have produced a catalog of 92 polarized radio sources at 150 MHz at 4.3 arcmin resolution and 1 mJy rms sensitivity, which is the largest catalog of polarized sources at such low frequencies. We estimate a lower limit to the polarized source surface density at 150 MHz, with our resolution and sensitivity, of 1 source per 6.2 square degrees. We find that our Faraday depth measurements are in agreement with previous measurements and have significantly smaller errors. Most of our sources show significant depolarization compared to 1.4 GHz, but there is a small population of sources with low depolarization indicating that their polarized emission is highly localized in Faraday depth. We predict that an extension of this work to the full LOTSS data would detect at least 3400 polarized sources using the same methods, and probably considerably more with improved data processing.

  3. Tropospheric delay ray tracing applied in VLBI analysis

    NASA Astrophysics Data System (ADS)

    Eriksson, David; MacMillan, D. S.; Gipson, John M.

    2014-12-01

    Tropospheric delay modeling error continues to be one of the largest sources of error in VLBI (very long baseline interferometry) analysis. For standard operational solutions, we use the VMF1 elevation-dependent mapping functions derived from European Centre for Medium-Range Weather Forecasts data. These mapping functions assume that tropospheric delay at a site is azimuthally symmetric. As this assumption does not hold, we have instead determined the ray-traced delay along the signal path through the troposphere for each VLBI quasar observation. We determined the troposphere refractivity fields from the pressure, temperature, specific humidity, and geopotential height fields of the NASA Goddard Space Flight Center Goddard Earth Observing System version 5 numerical weather model. When the ray-traced delays were applied in VLBI analysis, baseline length repeatabilities improved relative to the VMF1 mapping function model for 72% of the baselines, and site vertical repeatabilities were better for 11 of 13 sites during the 2-week CONT11 observing period in September 2011. When applied to a larger data set (2011-2013), we see a similar improvement in baseline length and also in site position repeatabilities for about two thirds of the stations in each of the site topocentric components.

  4. North Alabama Lightning Mapping Array (LMA): VHF Source Retrieval Algorithm and Error Analyses

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Solakiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J.; Bailey, J.; Krider, E. P.; Bateman, M. G.; Boccippio, D.

    2003-01-01

    Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA Marshall Space Flight Center (MSFC) and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix Theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50 ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results. However, for many source locations, the Curvature Matrix Theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.
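
    The retrieval problem itself is a time-of-arrival least-squares fit, and the curvature-matrix error estimate follows from its Jacobian. The sketch below is a generic illustration under our own assumptions (station layout, and the 50 ns rms timing error quoted above); it is neither the MSFC nor the New Mexico Tech algorithm.

```python
# Generic VHF source retrieval: minimize chi-squared over (x, y, z, t0) given arrival
# times at N stations, then form a curvature-matrix (Gauss-Newton) error estimate.
import numpy as np
from scipy.optimize import least_squares

C = 2.99792458e8          # m/s
SIGMA_T = 50e-9           # s, assumed rms timing error

def residuals(params, stations, t_obs):
    x, y, z, t0 = params
    d = np.linalg.norm(stations - np.array([x, y, z]), axis=1)
    return (t_obs - (t0 + d / C)) / SIGMA_T

def retrieve(stations, t_obs, guess):
    sol = least_squares(residuals, guess, args=(stations, t_obs))
    cov = np.linalg.inv(sol.jac.T @ sol.jac)   # covariance ~ (J^T J)^-1 for scaled residuals
    return sol.x, np.sqrt(np.diag(cov))

# Synthetic example with a hypothetical 7-station network and a source at 8 km altitude.
rng = np.random.default_rng(1)
stations = rng.uniform(-30e3, 30e3, size=(7, 3)); stations[:, 2] = 0.0
truth = np.array([5e3, -12e3, 8e3, 0.0])
t_obs = truth[3] + np.linalg.norm(stations - truth[:3], axis=1) / C
t_obs = t_obs + rng.normal(0.0, SIGMA_T, size=7)
print(retrieve(stations, t_obs, guess=np.array([0.0, 0.0, 5e3, 0.0])))
```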

  5. Dating young geomorphic surfaces using age of colonizing Douglas fir in southwestern Washington and northwestern Oregon, USA

    USGS Publications Warehouse

    Pierson, T.C.

    2007-01-01

    Dating of dynamic, young (<500 years) geomorphic landforms, particularly volcanofluvial features, requires higher precision than is possible with radiocarbon dating. Minimum ages of recently created landforms have long been obtained from tree-ring ages of the oldest trees growing on new surfaces. But to estimate the year of landform creation requires that two time corrections be added to tree ages obtained from increment cores: (1) the time interval between stabilization of the new landform surface and germination of the sampled trees (germination lag time or GLT); and (2) the interval between seedling germination and growth to sampling height, if the trees are not cored at ground level. The sum of these two time intervals is the colonization time gap (CTG). Such time corrections have been needed for more precise dating of terraces and floodplains in lowland river valleys in the Cascade Range, where significant eruption-induced lateral shifting and vertical aggradation of channels can occur over years to decades, and where timing of such geomorphic changes can be critical to emergency planning. Earliest colonizing Douglas fir (Pseudotsuga menziesii) were sampled for tree-ring dating at eight sites on lowland (<750 m a.s.l.), recently formed surfaces of known age near three Cascade volcanoes - Mount Rainier, Mount St. Helens and Mount Hood - in southwestern Washington and northwestern Oregon. Increment cores or stem sections were taken at breast height and, where possible, at ground level from the largest, oldest-looking trees at each study site. At least ten trees were sampled at each site unless the total number of early colonizers was smaller. Results indicate that a correction of four years should be used for GLT and 10 years for CTG if the single largest (and presumed oldest) Douglas fir growing on a surface of unknown age is sampled. This approach would have a potential error of up to 20 years. Error can be reduced by sampling the five largest Douglas fir instead of the single largest. A GLT correction of 5 years should be added to the mean ring-count age of the five largest trees growing on the surface being dated, if the trees are cored at ground level. This correction would have an approximate error of ±5 years. If the trees are cored at about 1.4 m above the ground surface (breast height), a CTG correction of 11 years should be added to the mean age of the five sampled trees (with an error of about ±7 years).
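
    The recommended corrections reduce to simple arithmetic, sketched below with hypothetical ring counts; the +5 yr (ground level) and +11 yr (breast height) corrections and their quoted uncertainties come from the abstract, while everything else is illustrative.

```python
# Estimate the calendar year a surface stabilized from ring counts of the five largest
# colonizing Douglas fir (function and variable names are ours).
def surface_year(sample_year, ring_counts_five_largest, cored_at_breast_height=True):
    """Return the estimated calendar year of surface stabilization.

    Corrections: +11 yr CTG if cored at ~1.4 m (breast height), +5 yr GLT if cored
    at ground level, added to the mean ring count of the five largest trees.
    """
    mean_age = sum(ring_counts_five_largest) / len(ring_counts_five_largest)
    correction = 11 if cored_at_breast_height else 5
    return sample_year - (mean_age + correction)

# Example: cores collected in 2005 with ring counts 61, 58, 57, 55, 54 at breast height
# give an estimated stabilization year of about 1937 (with roughly +/- 7 years uncertainty).
print(round(surface_year(2005, [61, 58, 57, 55, 54])))
```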

  6. Measurement uncertainty and feasibility study of a flush airdata system for a hypersonic flight experiment

    NASA Technical Reports Server (NTRS)

    Whitmore, Stephen A.; Moes, Timothy R.

    1994-01-01

    Presented is a feasibility and error analysis for a hypersonic flush airdata system on a hypersonic flight experiment (HYFLITE). HYFLITE heating loads make intrusive airdata measurement impractical. Although this analysis is specifically for the HYFLITE vehicle and trajectory, the problems analyzed are generally applicable to hypersonic vehicles. A layout of the flush-port matrix is shown. Surface pressures are related to airdata parameters using a simple aerodynamic model. The model is linearized using small perturbations and inverted using nonlinear least-squares. Effects of various error sources on the overall uncertainty are evaluated using an error simulation. Error sources modeled include boundary-layer/viscous interactions, pneumatic lag, thermal transpiration in the sensor pressure tubing, misalignment in the matrix layout, thermal warping of the vehicle nose, sampling resolution, and transducer error. Using simulated pressure data for input to the estimation algorithm, effects caused by various error sources are analyzed by comparing estimator outputs with the original trajectory. To obtain ensemble averages, the simulation is run repeatedly and output statistics are compiled. Output errors resulting from the various error sources are presented as a function of Mach number. Final uncertainties with all modeled error sources included are presented as a function of Mach number.

  7. Determination of Barometric Altimeter Errors for the Orion Exploration Flight Test-1 Entry

    NASA Technical Reports Server (NTRS)

    Brown, Denise L.; Munoz, Jean-Philippe; Gay, Robert

    2011-01-01

    The EFT-1 mission is the unmanned flight test for the upcoming Multi-Purpose Crew Vehicle (MPCV). During entry, the EFT-1 vehicle will trigger several Landing and Recovery System (LRS) events, such as parachute deployment, based on onboard altitude information. The primary altitude source is the filtered navigation solution updated with GPS measurement data. The vehicle also has three barometric altimeters that will be used to measure atmospheric pressure during entry. In the event that GPS data is not available during entry, the altitude derived from the barometric altimeter pressure will be used to trigger chute deployment for the drogues and main parachutes. Therefore it is important to understand the impact of error sources on the pressure measured by the barometric altimeters and on the altitude derived from that pressure. There are four primary error sources impacting the sensed pressure: sensor errors, Analog to Digital conversion errors, aerodynamic errors, and atmosphere modeling errors. This last error source is induced by the conversion from pressure to altitude in the vehicle flight software, which requires an atmosphere model such as the US Standard 1976 Atmosphere model. There are several secondary error sources as well, such as waves, tides, and latencies in data transmission. Typically, for error budget calculations it is assumed that all error sources are independent, normally distributed variables. Thus, the initial approach to developing the EFT-1 barometric altimeter altitude error budget was to create an itemized error budget under these assumptions. This budget was to be verified by simulation using high fidelity models of the vehicle hardware and software. The simulation barometric altimeter model includes hardware error sources and a data-driven model of the aerodynamic errors expected to impact the pressure in the midbay compartment in which the sensors are located. The aerodynamic model includes the pressure difference between the midbay compartment and the free stream pressure as a function of altitude, oscillations in sensed pressure due to wake effects, and an acoustics model capturing fluctuations in pressure due to motion of the passive vents separating the barometric altimeters from the outside of the vehicle.
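
    The pressure-to-altitude conversion referred to above can be sketched with the lowest layer of the U.S. Standard Atmosphere 1976. The code below is our own illustration, not the vehicle flight software, and the example pressures are hypothetical.

```python
# Pressure-to-altitude conversion using the lowest layer of the U.S. Standard
# Atmosphere 1976 (valid below ~11 km, where the parachute triggers occur).
import math

P0 = 101325.0      # Pa, sea-level standard pressure
T0 = 288.15        # K, sea-level standard temperature
L  = 0.0065        # K/m, tropospheric lapse rate
G0 = 9.80665       # m/s^2
R  = 8.31446       # J/(mol K)
M  = 0.0289644     # kg/mol, molar mass of dry air

def pressure_altitude(p_pa):
    """Geopotential altitude (m) for a measured static pressure, troposphere layer only."""
    return (T0 / L) * (1.0 - (p_pa / P0) ** (R * L / (G0 * M)))

# Example: a sensed pressure of 57 kPa corresponds to roughly 4.6 km, and a +500 Pa
# bias in the sensed pressure shifts the derived altitude by roughly 65 m.
print(pressure_altitude(57000.0), pressure_altitude(57000.0) - pressure_altitude(57500.0))
```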

  8. Worldwide Survey of Alcohol and Nonmedical Drug Use among Military Personnel: 1982,

    DTIC Science & Technology

    1983-01-01

    cell. The first number is an estimate of the percentage of the population with the characteristics that define the cell. The second number, in...multiplying 1.96 times the standard error for that cell. (Obviously, for very small or very large estimates, the respective smallest or largest value in...that the cell proportions estimate the true population value more precisely, and larger standard errors indicate that the true population value is

  9. Controlled source electromagnetic data analysis with seismic constraints and rigorous uncertainty estimation in the Black Sea

    NASA Astrophysics Data System (ADS)

    Gehrmann, R. A. S.; Schwalenberg, K.; Hölz, S.; Zander, T.; Dettmer, J.; Bialas, J.

    2016-12-01

    In 2014 an interdisciplinary survey was conducted as part of the German SUGAR project in the Western Black Sea targeting gas hydrate occurrences in the Danube Delta. Marine controlled source electromagnetic (CSEM) data were acquired with an inline seafloor-towed array (BGR), and a two-polarization horizontal ocean-bottom source and receiver configuration (GEOMAR). The CSEM data are co-located with high-resolution 2-D and 3-D seismic reflection data (GEOMAR). We present results from 2-D regularized inversion (MARE2DEM by Kerry Key), which provides a smooth model of the electrical resistivity distribution beneath the source and multiple receivers. The 2-D approach includes seafloor topography and structural constraints from seismic data. We estimate uncertainties from the regularized inversion and compare them to 1-D Bayesian inversion results. The probabilistic inversion for a layered subsurface treats the parameter values and the number of layers as unknown by applying reversible-jump Markov-chain Monte Carlo sampling. A non-diagonal data covariance matrix obtained from residual error analysis accounts for correlated errors. The resulting resistivity models show generally high resistivity values between 3 and 10 Ωm on average which can be partly attributed to depleted pore water salinities due to sea-level low stands in the past, and locally up to 30 Ωm which is likely caused by gas hydrates. At the base of the gas hydrate stability zone resistivities rise up to more than 100 Ωm which could be due to gas hydrate as well as a layer of free gas underneath. However, the deeper parts also show the largest model parameter uncertainties. Archie's Law is used to derive estimates of the gas hydrate saturation, which vary between 30 and 80% within the anomalous layers considering salinity and porosity profiles from a distant DSDP bore hole.
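
    The Archie's-law step can be sketched in a few lines. The parameter values below (a, m, n, pore-water resistivity, porosity) are generic assumptions for illustration, not the study's calibrated salinity and porosity profiles.

```python
# Convert an inverted bulk resistivity to a gas hydrate saturation estimate via Archie's law.
def hydrate_saturation(rho_bulk, rho_water, porosity, a=1.0, m=2.0, n=2.0):
    """Archie's law: S_w = [(a * rho_w) / (phi^m * rho_t)]^(1/n); hydrate fills the rest."""
    s_w = ((a * rho_water) / (porosity ** m * rho_bulk)) ** (1.0 / n)
    return 1.0 - min(s_w, 1.0)

# Example: a 30 ohm-m anomaly with 0.5 ohm-m pore water and 50% porosity
# implies a hydrate saturation of roughly 74%.
print(hydrate_saturation(rho_bulk=30.0, rho_water=0.5, porosity=0.5))
```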

  10. Error assessment of local tie vectors in space geodesy

    NASA Astrophysics Data System (ADS)

    Falkenberg, Jana; Heinkelmann, Robert; Schuh, Harald

    2014-05-01

    For the computation of the ITRF, the data of the geometric space-geodetic techniques on co-location sites are combined. The combination increases the redundancy and offers the possibility to utilize the strengths of each technique while mitigating their weaknesses. To enable the combination of co-located techniques, each technique needs to have a well-defined geometric reference point. The linking of the geometric reference points enables the combination of the technique-specific coordinate to a multi-technique site coordinate. The vectors between these reference points are called "local ties". The realization of local ties is usually reached by local surveys of the distances and/or angles between the reference points. Identified temporal variations of the reference points are considered in the local tie determination only indirectly by assuming a mean position. Finally, the local ties measured in the local surveying network are to be transformed into the ITRF, the global geocentric equatorial coordinate system of the space-geodetic techniques. The current IERS procedure for the combination of the space-geodetic techniques includes the local tie vectors with an error floor of three millimeters plus a distance-dependent component. This error floor, however, significantly underestimates the real accuracy of local tie determination. To fulfill the GGOS goals of 1 mm position and 0.1 mm/yr velocity accuracy, an accuracy of the local tie will be mandatory at the sub-mm level, which is currently not achievable. To assess the local tie effects on ITRF computations, investigations of the error sources will be done to realistically assess and consider them. Hence, a reasonable estimate of all the included errors of the various local ties is needed. An appropriate estimate could also improve the separation of local tie error and technique-specific error contributions to uncertainties and thus assess the accuracy of space-geodetic techniques. Our investigations concern the simulation of the error contribution of each component of the local tie definition and determination. A closer look into the models of reference point definition, of accessibility, of measurement, and of transformation is necessary to properly model the error of the local tie. The effect of temporal variations on the local ties will be studied as well. The transformation of the local survey into the ITRF can be assumed to be the largest error contributor, in particular the orientation of the local surveying network to the ITRF.
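
    The transformation of a local survey into the ITRF discussed above is, in essence, a seven-parameter similarity (Helmert) transformation. The snippet below is a minimal sketch of that operation under a small-angle rotation assumption and one common sign convention; the translation, scale and rotation values are purely illustrative and are not taken from the paper.

    # Minimal sketch of a 7-parameter (Helmert) transformation of a local-frame vector.
    import numpy as np

    def helmert(xyz_local, t, scale_ppm, rx_as, ry_as, rz_as):
        """Apply X_itrf = T + (1 + s) * R * x_local with a small-angle rotation matrix."""
        arcsec = np.pi / (180.0 * 3600.0)
        rx, ry, rz = rx_as * arcsec, ry_as * arcsec, rz_as * arcsec
        R = np.array([[1.0,  rz, -ry],
                      [-rz, 1.0,  rx],
                      [ ry, -rx, 1.0]])
        return np.asarray(t) + (1.0 + scale_ppm * 1e-6) * R @ np.asarray(xyz_local)

    local_tie = np.array([12.345, -7.890, 3.210])   # local-frame tie vector in metres (assumed)
    print(helmert(local_tie, t=[0.0, 0.0, 0.0], scale_ppm=0.0, rx_as=0.5, ry_as=-0.3, rz_as=0.1))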

  11. Uncertainty in predictions of forest carbon dynamics: separating driver error from model error.

    PubMed

    Spadavecchia, L; Williams, M; Law, B E

    2011-07-01

    We present an analysis of the relative magnitude and contribution of parameter and driver uncertainty to the confidence intervals on estimates of net carbon fluxes. Model parameters may be difficult or impractical to measure, while driver fields are rarely complete, with data gaps due to sensor failure and sparse observational networks. Parameters are generally derived through some optimization method, while driver fields may be interpolated from available data sources. For this study, we used data from a young ponderosa pine stand at Metolius, Central Oregon, and a simple daily model of coupled carbon and water fluxes (DALEC). An ensemble of acceptable parameterizations was generated using an ensemble Kalman filter and eddy covariance measurements of net C exchange. Geostatistical simulations generated an ensemble of meteorological driving variables for the site, consistent with the spatiotemporal autocorrelations inherent in the observational data from 13 local weather stations. Simulated meteorological data were propagated through the model to derive the uncertainty on the CO2 flux resultant from driver uncertainty typical of spatially extensive modeling studies. Furthermore, the model uncertainty was partitioned between temperature and precipitation. With at least one meteorological station within 25 km of the study site, driver uncertainty was relatively small (roughly 10% of the total net flux), while parameterization uncertainty was larger (roughly 50% of the total net flux). The largest source of driver uncertainty was due to temperature (8% of the total flux). The combined effect of parameter and driver uncertainty was 57% of the total net flux. However, when the nearest meteorological station was > 100 km from the study site, uncertainty in net ecosystem exchange (NEE) predictions introduced by meteorological drivers increased by 88%. Precipitation estimates were a larger source of bias in NEE estimates than were temperature estimates, although the biases partly compensated for each other. The time scales on which precipitation errors occurred in the simulations were shorter than the temporal scales over which drought developed in the model, so drought events were reasonably simulated. The approach outlined here provides a means to assess the uncertainty and bias introduced by meteorological drivers in regional-scale ecological forecasting.
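
    A toy sketch of the uncertainty-partitioning idea described above (not DALEC itself): one ensemble varies only the parameters, the other only the meteorological drivers, and the spread of the predicted flux is compared between the two. The model form and all numbers are invented for illustration.

    # Toy illustration of separating parameter and driver uncertainty with two ensembles.
    import numpy as np

    rng = np.random.default_rng(0)

    def toy_flux_model(temperature, precip, p_sensitivity, base_rate):
        """Stand-in carbon-flux model: flux responds to temperature and water availability."""
        return base_rate * np.exp(0.07 * temperature) * np.minimum(1.0, precip / 2.0) * p_sensitivity

    temp_obs, precip_obs = 12.0, 1.5                                      # "observed" drivers (assumed units)
    params  = rng.normal([1.0, 5.0], [0.2, 1.0], (200, 2))                # ensemble of acceptable parameters
    drivers = rng.normal([temp_obs, precip_obs], [1.5, 0.6], (200, 2))    # simulated driver realizations

    flux_param  = toy_flux_model(temp_obs, precip_obs, params[:, 0], params[:, 1])
    flux_driver = toy_flux_model(drivers[:, 0], np.clip(drivers[:, 1], 0.0, None), 1.0, 5.0)

    print("parameter spread:", flux_param.std(), " driver spread:", flux_driver.std())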

  12. Evaluation of automated global mapping of Reference Soil Groups of WRB2015

    NASA Astrophysics Data System (ADS)

    Mantel, Stephan; Caspari, Thomas; Kempen, Bas; Schad, Peter; Eberhardt, Einar; Ruiperez Gonzalez, Maria

    2017-04-01

    SoilGrids is an automated system that provides global predictions for standard numeric soil properties at seven standard depths down to 200 cm, currently at spatial resolutions of 1 km and 250 m. In addition, the system provides predictions of depth to bedrock and distribution of soil classes based on WRB and USDA Soil Taxonomy (ST). In SoilGrids250m(1), soil classes (WRB, version 2006) consist of the RSG and the first prefix qualifier, whereas in SoilGrids1km(2), the soil class was assessed at RSG level. Automated mapping of World Reference Base (WRB) Reference Soil Groups (RSGs) at a global level has great advantages. Maps can be updated in a short time span with relatively little effort when new data become available. To translate soil names of older versions of FAO/WRB and national classification systems of the source data into names according to WRB 2006, correlation tables are used in SoilGrids. Soil properties and classes are predicted independently from each other. This means that the combinations of soil properties for the same cells or soil property-soil class combinations do not necessarily yield logical combinations when the map layers are studied jointly. The model prediction procedure is robust and probably contributes relatively little error to the prediction of RSGs. It seems that the quality of the original soil classification in the data and the use of correlation tables are the largest sources of error in mapping the RSG distribution patterns. Predicted patterns of dominant RSGs were evaluated in selected areas and sources of error were identified. Suggestions are made for improvement of WRB2015 RSG distribution predictions in SoilGrids. Keywords: Automated global mapping; World Reference Base for Soil Resources; Data evaluation; Data quality assurance References: 1. Hengl T, de Jesus JM, Heuvelink GBM, Ruiperez Gonzalez M, Kilibarda M, et al. (2016) SoilGrids250m: global gridded soil information based on Machine Learning. Earth System Science Data (ESSD), in review. 2. Hengl T, de Jesus JM, MacMillan RA, Batjes NH, Heuvelink GBM, et al. (2014) SoilGrids1km — Global Soil Information Based on Automated Mapping. PLoS ONE 9(8): e105992. doi:10.1371/journal.pone.0105992

  13. SU-D-201-01: A Multi-Institutional Study Quantifying the Impact of Simulated Linear Accelerator VMAT Errors for Nasopharynx

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pogson, E; Liverpool and Macarthur Cancer Therapy Centres, Liverpool, NSW; Ingham Institute for Applied Medical Research, Sydney, NSW

    Purpose: To quantify the impact of differing magnitudes of simulated linear accelerator errors on the dose to the target volume and organs at risk for nasopharynx VMAT. Methods: Ten nasopharynx cancer patients were retrospectively replanned twice with one full arc VMAT by two institutions. Treatment uncertainties (gantry angle and collimator in degrees, MLC field size and MLC shifts in mm) were introduced into these plans at increments of 5, 2, 1, −1, −2 and −5. This was completed using an in-house Python script within Pinnacle3 and analysed using 3DVH and MatLab. The mean and maximum dose were calculated for the Planning Target Volume (PTV1), parotids, brainstem, and spinal cord and then compared to the original baseline plan. The D1cc was also calculated for the spinal cord and brainstem. Patient average results were compared across institutions. Results: Introduced gantry angle errors had the smallest effect on dose; no tolerances were exceeded at one institution, and tolerances in the second institution's VMAT plans were exceeded only for gantry angles of ±5°, affecting the parotids on different sides by 14–18%. PTV1, brainstem and spinal cord tolerances were exceeded for collimator angles of ±5 degrees, MLC shifts and MLC field sizes of ±1 and beyond, at the first institution. At the second institution, sensitivity to errors was marginally higher for some errors, including the collimator error producing doses exceeding tolerances above ±2 degrees, and marginally lower for others, with tolerances exceeded above MLC shifts of ±2. The largest differences occur with MLC field sizes, with both institutions reporting exceeded tolerances for all introduced errors (±1 and beyond). Conclusion: The plan robustness for VMAT nasopharynx plans has been demonstrated. Gantry errors have the least impact on patient doses; however, MLC field size errors exceed tolerances even with relatively low introduced errors and also produce the largest errors. This was consistent across both departments. The authors acknowledge funding support from the NSW Cancer Council.

  14. When is an error not a prediction error? An electrophysiological investigation.

    PubMed

    Holroyd, Clay B; Krigolson, Olave E; Baker, Robert; Lee, Seung; Gibson, Jessica

    2009-03-01

    A recent theory holds that the anterior cingulate cortex (ACC) uses reinforcement learning signals conveyed by the midbrain dopamine system to facilitate flexible action selection. According to this position, the impact of reward prediction error signals on ACC modulates the amplitude of a component of the event-related brain potential called the error-related negativity (ERN). The theory predicts that ERN amplitude is monotonically related to the expectedness of the event: It is larger for unexpected outcomes than for expected outcomes. However, a recent failure to confirm this prediction has called the theory into question. In the present article, we investigated this discrepancy in three trial-and-error learning experiments. All three experiments provided support for the theory, but the effect sizes were largest when an optimal response strategy could actually be learned. This observation suggests that ACC utilizes dopamine reward prediction error signals for adaptive decision making when the optimal behavior is, in fact, learnable.

  15. The importance of intra-hospital pharmacovigilance in the detection of medication errors

    PubMed

    Villegas, Francisco; Figueroa-Montero, David; Barbero-Becerra, Varenka; Juárez-Hernández, Eva; Uribe, Misael; Chávez-Tapia, Norberto; González-Chon, Octavio

    2018-01-01

    Hospitalized patients are susceptible to medication errors, which rank between the fourth and sixth leading causes of death. The department of intra-hospital pharmacovigilance intervenes in the entire medication process with the purpose of preventing, repairing and assessing harm. The objective was to analyze medication errors reported by the pharmacovigilance system of Fundación Clínica Médica Sur (Mexico) and their impact on patients. This was a prospective study carried out from 2012 to 2015, in which medication prescriptions given to patients were recorded. Owing to heterogeneity, data were described as absolute numbers on a logarithmic scale. A total of 292 932 prescriptions for 56 368 patients were analyzed, and medication errors were identified in 8.9%. The treating physician was responsible for 83.32% of medication errors, residents for 6.71% and interns for 0.09%. No error caused permanent damage or death. This is the pharmacovigilance study with the largest sample size reported. Copyright: © 2018 SecretarÍa de Salud.

  16. Field errors in hybrid insertion devices

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schlueter, R.D.

    1995-02-01

    Hybrid magnet theory as applied to the error analyses used in the design of Advanced Light Source (ALS) insertion devices is reviewed. Sources of field errors in hybrid insertion devices are discussed.

  17. Effects of Optical Combiner and IPD Change for Convergence on Near-Field Depth Perception in an Optical See-Through HMD.

    PubMed

    Lee, Sangyoon; Hu, Xinda; Hua, Hong

    2016-05-01

    Many error sources have been explored in regards to the depth perception problem in augmented reality environments using optical see-through head-mounted displays (OST-HMDs). Nonetheless, two error sources are commonly neglected: the ray-shift phenomenon and the change in interpupillary distance (IPD). The first source of error arises from the difference in refraction for virtual and see-through optical paths caused by an optical combiner, which is required of OST-HMDs. The second occurs from the change in the viewer's IPD due to eye convergence. In this paper, we analyze the effects of these two error sources on near-field depth perception and propose methods to compensate for these two types of errors. Furthermore, we investigate their effectiveness through an experiment comparing the conditions with and without our error compensation methods applied. In our experiment, participants estimated the egocentric depth of a virtual and a physical object located at seven different near-field distances (40∼200 cm) using a perceptual matching task. Although the experimental results showed different patterns depending on the target distance, the results demonstrated that the near-field depth perception error can be effectively reduced to a very small level (at most 1 percent error) by compensating for the two mentioned error sources.
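
    The ray-shift phenomenon mentioned above is the lateral displacement a ray acquires when crossing a plane-parallel refracting plate such as an optical combiner. The following sketch evaluates the standard plane-parallel-plate displacement formula; the plate thickness, refractive index and incidence angle are assumed values, not those of the OST-HMD studied here.

    # Sketch of the ray-shift geometry only: lateral displacement through a plane-parallel plate.
    import math

    def lateral_ray_shift(t_mm, n, theta_deg):
        """Displacement d = t * sin(theta) * (1 - cos(theta) / sqrt(n**2 - sin(theta)**2))."""
        th = math.radians(theta_deg)
        return t_mm * math.sin(th) * (1.0 - math.cos(th) / math.sqrt(n * n - math.sin(th) ** 2))

    print(lateral_ray_shift(t_mm=3.0, n=1.5, theta_deg=30.0))  # roughly 0.6 mm shift of the see-through ray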

  18. Meteorological Error Budget Using Open Source Data

    DTIC Science & Technology

    2016-09-01

    ARL-TR-7831, SEP 2016, US Army Research Laboratory. Meteorological Error Budget Using Open-Source Data, by J Cogan, J Smith, and P Haines.

  19. TU-H-CAMPUS-JeP3-05: Adaptive Determination of Needle Sequence HDR Prostate Brachytherapy with Divergent Needle-By-Needle Delivery

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Borot de Battisti, M; Maenhout, M; Lagendijk, J J W

    Purpose: To develop a new method which adaptively determines the optimal needle insertion sequence for HDR prostate brachytherapy involving divergent needle-by-needle dose delivery by e.g. a robotic device. A needle insertion sequence is calculated at the beginning of the intervention and updated after each needle insertion with feedback on needle positioning errors. Methods: Needle positioning errors and anatomy changes may occur during HDR brachytherapy which can lead to errors in the delivered dose. A novel strategy was developed to calculate and update the needle sequence and the dose plan after each needle insertion with feedback on needle positioning errors. The dose plan optimization was performed by numerical simulations. The proposed needle sequence determination optimizes the final dose distribution based on the dose coverage impact of each needle. This impact is predicted stochastically by needle insertion simulations. HDR procedures were simulated with varying numbers of needle insertions (4 to 12) using 11 patient MR data-sets with PTV, prostate, urethra, bladder and rectum delineated. Needle positioning errors were modeled by random normally distributed angulation errors (standard deviation of 3 mm at the needle’s tip). The final dose parameters were compared in the situations where the needle with the largest vs. the smallest dose coverage impact was selected at each insertion. Results: Over all scenarios, the percentage of clinically acceptable final dose distributions improved when the needle selected had the largest dose coverage impact (91%) compared to the smallest (88%). The differences were larger for few (4 to 6) needle insertions (maximum difference scenario: 79% vs. 60%). The computation time of the needle sequence optimization was below 60 s. Conclusion: A new adaptive needle sequence determination for HDR prostate brachytherapy was developed. Coupled to adaptive planning, the selection of the needle with the largest dose coverage impact increases the chances of reaching the clinical constraints. M. Borot de Battisti is funded by Philips Medical Systems Nederland B.V.; M. Moerland is principal investigator on a contract funded by Philips Medical Systems Nederland B.V.; G. Hautvast and D. Binnekamp are fulltime employees of Philips Medical Systems Nederland B.V.

  20. Error Sources in Asteroid Astrometry

    NASA Technical Reports Server (NTRS)

    Owen, William M., Jr.

    2000-01-01

    Asteroid astrometry, like any other scientific measurement process, is subject to both random and systematic errors, not all of which are under the observer's control. To design an astrometric observing program or to improve an existing one requires knowledge of the various sources of error, how different errors affect one's results, and how various errors may be minimized by careful observation or data reduction techniques.

  1. The use of source memory to identify one's own episodic confusion errors.

    PubMed

    Smith, S M; Tindell, D R; Pierce, B H; Gilliland, T R; Gerkens, D R

    2001-03-01

    In 4 category cued recall experiments, participants falsely recalled nonlist common members, a semantic confusion error. Errors were more likely if critical nonlist words were presented on an incidental task, causing source memory failures called episodic confusion errors. Participants could better identify the source of falsely recalled words if they had deeply processed the words on the incidental task. For deep but not shallow processing, participants could reliably include or exclude incidentally shown category members in recall. The illusion that critical items actually appeared on categorized lists was diminished but not eradicated when participants identified episodic confusion errors post hoc among their own recalled responses; participants often believed that critical items had been on both the incidental task and the study list. Improved source monitoring can potentially mitigate episodic (but not semantic) confusion errors.

  2. Expertise effects in the Moses illusion: detecting contradictions with stored knowledge.

    PubMed

    Cantor, Allison D; Marsh, Elizabeth J

    2017-02-01

    People frequently miss contradictions with stored knowledge; for example, readers often fail to notice any problem with a reference to the Atlantic as the largest ocean. Critically, such effects occur even though participants later demonstrate knowing the Pacific is the largest ocean (the Moses Illusion) [Erickson, T. D., & Mattson, M. E. (1981). From words to meaning: A semantic illusion. Journal of Verbal Learning & Verbal Behavior, 20, 540-551]. We investigated whether such oversights disappear when erroneous references contradict information in one's expert domain, material which likely has been encountered many times and is particularly well-known. Biology and history graduate students monitored for errors while answering biology and history questions containing erroneous presuppositions ("In what US state were the forty-niners searching for oil?"). Expertise helped: participants were less susceptible to the illusion and less likely to later reproduce errors in their expert domain. However, expertise did not eliminate the illusion, even when errors were bolded and underlined, meaning that it was unlikely that people simply skipped over errors. The results support claims that people often use heuristics to judge truth, as opposed to directly retrieving information from memory, likely because such heuristics are adaptive and often lead to the correct answer. Even experts sometimes use such shortcuts, suggesting that overlearned and accessible knowledge does not guarantee retrieval of that information.

  3. Motion-based nonuniformity correction in DoFP polarimeters

    NASA Astrophysics Data System (ADS)

    Kumar, Rakesh; Tyo, J. Scott; Ratliff, Bradley M.

    2007-09-01

    Division of Focal Plane polarimeters (DoFP) operate by integrating an array of micropolarizer elements with a focal plane array. These devices have been investigated for over a decade, and example systems have been built in all regions of the optical spectrum. DoFP devices have the distinct advantage that they are mechanically rugged, inherently temporally synchronized, and optically aligned. They have the concomitant disadvantage that each pixel in the FPA has a different instantaneous field of view (IFOV), meaning that the polarization component measurements that go into estimating the Stokes vector across the image come from four different points in the field. In addition to IFOV errors, microgrid camera systems operating in the LWIR have the additional problem that FPA nonuniformity (NU) noise can be quite severe. The spatial differencing nature of a DoFP system exacerbates the residual NU noise that is remaining after calibration, and is often the largest source of false polarization signatures away from regions where IFOV error dominates. We have recently presented a scene based algorithm that uses frame-to-frame motion to compensate for NU noise in unpolarized IR imagers. In this paper, we have extended that algorithm so that it can be used to compensate for NU noise on a DoFP polarimeter. Furthermore, the additional information provided by the scene motion can be used to significantly reduce the IFOV error. We have found a reduction of IFOV error by a factor of 10 if the scene motion is known exactly. Performance is reduced when the motion must be estimated from the scene, but still shows a marked improvement over static DoFP images.
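
    As background for why IFOV and nonuniformity errors are spatially differenced in a DoFP system, the sketch below shows the usual superpixel estimate of the linear Stokes parameters from four neighbouring micropolarizer pixels. The 2x2 layout and intensity values are assumed for illustration and are not taken from the paper.

    # Linear Stokes parameters from one 2x2 micropolarizer superpixel (0, 45, 90, 135 degrees).
    # Each component comes from a slightly different point in the scene, so residual
    # nonuniformity and IFOV mismatch are differenced directly into S1 and S2.
    import numpy as np

    def superpixel_stokes(i0, i45, i90, i135):
        """Linear Stokes vector and degree of linear polarization from four polarizer orientations."""
        s0 = 0.5 * (i0 + i45 + i90 + i135)
        s1 = i0 - i90
        s2 = i45 - i135
        dolp = np.hypot(s1, s2) / np.maximum(s0, 1e-12)
        return s0, s1, s2, dolp

    print(superpixel_stokes(1.00, 0.55, 0.10, 0.55))  # strongly polarized example pixel (invented values)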

  4. Missed Diagnosis of Cardiovascular Disease in Outpatient General Medicine: Insights from Malpractice Claims Data.

    PubMed

    Quinn, Gene R; Ranum, Darrell; Song, Ellen; Linets, Margarita; Keohane, Carol; Riah, Heather; Greenberg, Penny

    2017-10-01

    Diagnostic errors are an underrecognized source of patient harm, and cardiovascular disease can be challenging to diagnose in the ambulatory setting. Although malpractice data can inform diagnostic error reduction efforts, no studies have examined outpatient cardiovascular malpractice cases in depth. A study was conducted to examine the characteristics of outpatient cardiovascular malpractice cases brought against general medicine practitioners. Some 3,407 closed malpractice claims were analyzed in outpatient general medicine from CRICO Strategies' Comparative Benchmarking System database, the largest detailed database of paid and unpaid malpractice claims in the world, and multivariate models were created to determine the factors that predicted case outcomes. Among the 153 patients in cardiovascular malpractice cases for whom patient comorbidities were coded, the majority (63%) had at least one traditional cardiac risk factor, such as diabetes, tobacco use, or previous cardiovascular disease. Cardiovascular malpractice cases were more likely to involve an allegation of error in diagnosis (75% vs. 47%, p <0.0001), have high clinical severity (86% vs. 49%, p <0.0001) and result in death (75% vs. 27%, p <0.0001), as compared to noncardiovascular cases. Initial diagnoses of nonspecific chest pain and mimics of cardiovascular pain (for example, esophageal disease) were common and independently increased the likelihood of a claim resulting in a payment (p <0.01). Cardiovascular malpractice cases against outpatient general medicine physicians mostly occur in patients with conventional risk factors for coronary artery disease, and such patients are often initially diagnosed with common mimics of cardiovascular pain. These findings suggest that these patients may be high-yield targets for preventing diagnostic errors in the ambulatory setting. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  5. Sources and Delivery of Nutrients to the Northwestern Gulf of Mexico from Streams in the South-Central United States1

    PubMed Central

    Rebich, Richard A; Houston, Natalie A; Mize, Scott V; Pearson, Daniel K; Ging, Patricia B; Evan Hornig, C

    2011-01-01

    SPAtially Referenced Regressions On Watershed attributes (SPARROW) models were developed to estimate nutrient inputs [total nitrogen (TN) and total phosphorus (TP)] to the northwestern part of the Gulf of Mexico from streams in the South-Central United States (U.S.). This area included drainages of the Lower Mississippi, Arkansas-White-Red, and Texas-Gulf hydrologic regions. The models were standardized to reflect nutrient sources and stream conditions during 2002. Model predictions of nutrient loads (mass per time) and yields (mass per area per time) generally were greatest in streams in the eastern part of the region and along reaches near the Texas and Louisiana shoreline. The Mississippi River and Atchafalaya River watersheds, which drain nearly two-thirds of the conterminous U.S., delivered the largest nutrient loads to the Gulf of Mexico, as expected. However, the three largest delivered TN yields were from the Trinity River/Galveston Bay, Calcasieu River, and Aransas River watersheds, while the three largest delivered TP yields were from the Calcasieu River, Mermentau River, and Trinity River/Galveston Bay watersheds. Model output indicated that the three largest sources of nitrogen from the region were atmospheric deposition (42%), commercial fertilizer (20%), and livestock manure (unconfined, 17%). The three largest sources of phosphorus were commercial fertilizer (28%), urban runoff (23%), and livestock manure (confined and unconfined, 23%). PMID:22457582

  6. Long-range transport of black carbon to the Pacific Ocean and its dependence on aging timescale

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Liu, J.; Tao, S.; Ban-Weiss, G. A.

    2015-06-01

    Improving the ability of global models to predict concentrations of black carbon (BC) over the Pacific Ocean is essential to evaluate the impact of BC on marine climate. In this study, we tag BC tracers from 13 source regions around the globe in a global chemical transport model MOZART-4. Numerous sensitivity simulations are carried out varying the aging timescale of BC emitted from each source region. The aging timescale for each source region is optimized by minimizing errors in vertical profiles of BC mass mixing ratios between simulations and HIAPER Pole-to-Pole Observations (HIPPO). For most HIPPO deployments, in the Northern Hemisphere, optimized aging timescales are less than half a day for BC emitted from tropical and mid-latitude source regions, and about 1 week for BC emitted from high latitude regions in all seasons except summer. We find that East Asian emissions contribute most to the BC loading over the North Pacific, while South American, African and Australian emissions dominate BC loadings over the South Pacific. Dominant source regions contributing to BC loadings in other parts of the globe are also assessed. The lifetime of BC originating from East Asia (i.e., the world's largest BC emitter) is found to be only 2.2 days, much shorter than the global average lifetime of 4.9 days, making East Asia's contribution to global burden only 36 % of BC from the second largest emitter, Africa. Thus, evaluating only relative emission rates without accounting for differences in aging timescales and deposition rates is not predictive of the contribution of a given source region to climate impacts. Our simulations indicate that lifetime of BC increases nearly linearly with aging timescale for all source regions. When aging rate is fast, the lifetime of BC is largely determined by factors that control local deposition rates (e.g. precipitation). The sensitivity of lifetime to aging timescale depends strongly on the initial hygroscopicity of freshly emitted BC. Our findings suggest that the aging timescale of BC varies significantly by region and season, and can strongly influence the contribution of source regions to BC burdens around the globe. Improving parameterizations of the aging process for BC is important for enhancing the predictive skill of air quality and climate models. Future observations that investigate the evolution of hygroscopicity of BC as it ages from different source regions to the remote atmosphere are urgently needed.

  7. Long-range transport of black carbon to the Pacific Ocean and its dependence on aging timescale

    NASA Astrophysics Data System (ADS)

    Zhang, J.; Liu, J.; Tao, S.; Ban-Weiss, G. A.

    2015-10-01

    Improving the ability of global models to predict concentrations of black carbon (BC) over the Pacific Ocean is essential to evaluate the impact of BC on marine climate. In this study, we tag BC tracers from 13 source regions around the globe in a global chemical transport model, Model for Ozone and Related Chemical Tracers, version 4 (MOZART-4). Numerous sensitivity simulations are carried out varying the aging timescale of BC emitted from each source region. The aging timescale for each source region is optimized by minimizing errors in vertical profiles of BC mass mixing ratios between simulations and HIAPER Pole-to-Pole Observations (HIPPO). For most HIPPO deployments, in the Northern Hemisphere, optimized aging timescales are less than half a day for BC emitted from tropical and midlatitude source regions and about 1 week for BC emitted from high-latitude regions in all seasons except summer. We find that East Asian emissions contribute most to the BC loading over the North Pacific, while South American, African and Australian emissions dominate BC loadings over the South Pacific. Dominant source regions contributing to BC loadings in other parts of the globe are also assessed. The lifetime of BC originating from East Asia (i.e., the world's largest BC emitter) is found to be only 2.2 days, much shorter than the global average lifetime of 4.9 days, making the contribution from East Asia to the global BC burden only 36 % of that from the second largest emitter, Africa. Thus, evaluating only relative emission rates without accounting for differences in aging timescales and deposition rates is not predictive of the contribution of a given source region to climate impacts. Our simulations indicate that the lifetime of BC increases nearly linearly with aging timescale for all source regions. When the aging rate is fast, the lifetime of BC is largely determined by factors that control local deposition rates (e.g., precipitation). The sensitivity of lifetime to aging timescale depends strongly on the initial hygroscopicity of freshly emitted BC. Our findings suggest that the aging timescale of BC varies significantly by region and season and can strongly influence the contribution of source regions to BC burdens around the globe. Therefore, improving parameterizations of the aging process for BC is important for enhancing the predictive skill of global models. Future observations that investigate the evolution of the hygroscopicity of BC as it ages from different source regions to the remote atmosphere are urgently needed.

  8. The Impact of Aerosol Sources and Aging on CCN Formation in the Houston-Galveston-Gulf of Mexico Region

    NASA Astrophysics Data System (ADS)

    Quinn, P.; Bates, T.; Coffman, D.; Covert, D.

    2007-12-01

    The impact of anthropogenic aerosol on cloud properties, cloud lifetime, and precipitation processes is one of the largest uncertainties in our current understanding of climate change. Aerosols affect cloud properties by serving as cloud condensation nuclei (CCN), thereby leading to the formation of cloud droplets. The process of cloud drop activation is a function of both the size and chemistry of the aerosol particles which, in turn, depend on the source of the aerosol and transformations that occur downwind. In situ field measurements that can lead to an improved understanding of the process of cloud drop formation and simplifying parameterizations for improving the accuracy of climate models are highly desirable. During the Gulf of Mexico Atmospheric Composition and Climate Study (GoMACCS), the NOAA RV Ronald H. Brown encountered a wide variety of aerosol types ranging from marine near the Florida panhandle to urban and industrial in the Houston-Galveston area. These varied sources provided an opportunity to investigate the role of aerosol sources, aging, chemistry, and size in the activation of particles to form cloud droplets. Here, we use the correlation between variability in critical diameter for activation (determined empirically from measured CCN concentrations and the number size distribution) and aerosol composition to quantify the impact of composition on particle activation. Variability in aerosol composition is parameterized by the mass fraction of Hydrocarbon-like Organic Aerosol (HOA) for particle diameters less than 200 nm (vacuum aerodynamic). The HOA mass fraction in this size range is lowest for marine aerosol and higher for aerosol impacted by anthropogenic emissions. Combining all data collected at 0.44 percent supersaturation (SS) reveals that composition (defined in this way) explains 40 percent of the variance in the critical diameter. As expected, the dependence of activation on composition is strongest at lower SS. At the same time, correlations between HOA mass fraction and aerosol mean diameter show that these two parameters are essentially independent of one another for this data set. We conclude that, based on the variability of the HOA mass fraction observed during GoMACCS, composition plays a dominant role in determining the fraction of particles that are activated to form cloud droplets. Using Kohler theory, we estimate the error that results in calculated CCN concentrations if the organic fraction of the aerosol is neglected (i.e., a fully soluble composition of ammonium sulfate is assumed) for the range of organic mass fractions and mean diameters observed during GoMACCS. We then relate this error to the source and age of the aerosol. At 0.22 and 0.44 percent SS, the error is considerable for anthropogenic aerosol sampled near the source region as this aerosol has, on average, a high POM mass fraction and smaller particle mean diameter. The error is lower for more aged aerosol as it has a lower POM mass fraction and larger mean particle diameter. Hence, the percent error in calculated CCN concentration is expected to be larger for younger, organic-rich aerosol and smaller for aged, sulfate-rich aerosol and for marine aerosol. We extend this analysis to continental and marine data sets recently reported by Dusek et al. [Science, 312, 1375, 2006] and Hudson [Geophys. Res. Lett., 34, L08801, 2007].
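
    For readers unfamiliar with the Kohler-theory step referred to above, the sketch below evaluates the single-parameter (kappa) Kohler approximation for the critical supersaturation of a dry particle, contrasting a sulfate-like and an organic-rich hygroscopicity. The kappa values, dry diameter and temperature are assumed for illustration and are not taken from the GoMACCS data.

    # Hedged illustration of the Kohler-theory sensitivity of activation to composition.
    import math

    def critical_supersaturation(d_dry_m, kappa, temp_k=293.15):
        """kappa-Kohler: S_c = exp(sqrt(4*A**3 / (27*kappa*D_d**3))) - 1, A = 4*sigma*M_w/(R*T*rho_w)."""
        sigma_w, M_w, rho_w, R = 0.072, 0.018, 1000.0, 8.314
        A = 4.0 * sigma_w * M_w / (R * temp_k * rho_w)
        return math.exp(math.sqrt(4.0 * A ** 3 / (27.0 * kappa * d_dry_m ** 3))) - 1.0

    for kappa in (0.6, 0.1):   # roughly sulfate-like vs organic-rich hygroscopicity (assumed)
        print(kappa, 100.0 * critical_supersaturation(80e-9, kappa), "% supersaturation")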

  9. Post-Colonization Interval Estimates Using Multi-Species Calliphoridae Larval Masses and Spatially Distinct Temperature Data Sets: A Case Study

    PubMed Central

    Weatherbee, Courtney R.; Pechal, Jennifer L.; Stamper, Trevor; Benbow, M. Eric

    2017-01-01

    Common forensic entomology practice has been to collect the largest Diptera larvae from a scene and use published developmental data, with temperature data from the nearest weather station, to estimate larval development time and post-colonization intervals (PCIs). To evaluate the accuracy of PCI estimates among Calliphoridae species and spatially distinct temperature sources, larval communities and ambient air temperature were collected at replicate swine carcasses (N = 6) throughout decomposition. Expected accumulated degree hours (ADH) associated with Cochliomyia macellaria and Phormia regina third instars (presence and length) were calculated using published developmental data sets. Actual ADH ranges were calculated using temperatures recorded from multiple sources at varying distances (0.90 m–7.61 km) from the study carcasses: individual temperature loggers at each carcass, a local weather station, and a regional weather station. Third instars greatly varied in length and abundance. The expected ADH range for each species successfully encompassed the average actual ADH for each temperature source, but overall under-represented the range. For both calliphorid species, weather station data were associated with more accurate PCI estimates than temperature loggers associated with each carcass. These results provide an important step towards improving entomological evidence collection and analysis techniques, and developing forensic error rates. PMID:28375172
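
    The accumulated-degree-hour bookkeeping behind a PCI estimate can be illustrated in a few lines: hourly temperatures above a base threshold are summed until the published ADH requirement for the observed larval stage is reached, and the hour count gives the development-time estimate. The base temperature, ADH requirement and temperature series below are invented placeholders.

    # Simple accumulated-degree-hour (ADH) sketch for a post-colonization interval estimate.
    def hours_to_reach_adh(hourly_temps_c, adh_required, base_temp_c=10.0):
        """Count the hours needed for accumulated degree hours to reach the stage requirement."""
        adh = 0.0
        for hours, temp in enumerate(hourly_temps_c, start=1):
            adh += max(temp - base_temp_c, 0.0)
            if adh >= adh_required:
                return hours
        return None  # temperature record too short to reach the requirement

    example_temps = [22.0, 21.5, 20.0, 18.5, 17.0, 16.0] * 40   # 240 h of hourly temperatures (assumed)
    print(hours_to_reach_adh(example_temps, adh_required=2000.0))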

  10. Potential-field sounding using Euler's homogeneity equation and Zidarov bubbling

    USGS Publications Warehouse

    Cordell, Lindrith

    1994-01-01

    Potential-field (gravity) data are transformed into a physical-property (density) distribution in a lower half-space, constrained solely by assumed upper bounds on physical-property contrast and data error. A two-step process is involved. The data are first transformed to an equivalent set of line (2-D case) or point (3-D case) sources, using Euler's homogeneity equation evaluated iteratively on the largest residual data value. Then, mass is converted to a volume-density product, constrained to an upper density bound, by 'bubbling,' which exploits circular or radial expansion to redistribute density without changing the associated gravity field. The method can be developed for gravity or magnetic data in two or three dimensions. The results can provide a beginning for interpretation of potential-field data where few independent constraints exist, or more likely, can be used to develop models and confirm or extend interpretation of other geophysical data sets.
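
    The sketch below illustrates only the first ingredient named in the abstract, Euler's homogeneity equation, for the simplest possible case: locating a single 2-D line source from a synthetic gravity profile and its gradients by linear least squares. It is a simplified stand-in, not the iterative largest-residual procedure or the Zidarov bubbling step of the paper, and all source parameters are assumed.

    # Locate a 2-D line mass with Euler's homogeneity equation (structural index N = 1).
    import numpy as np

    G = 6.674e-11
    x0_true, z0_true, lam = 40.0, 15.0, 5.0e6     # source position (m) and line density (kg/m), assumed

    x = np.linspace(0.0, 100.0, 201)              # surface profile, observation depth z = 0
    u, w = x - x0_true, z0_true                   # horizontal offset and depth below the profile
    r2 = u ** 2 + w ** 2
    g  = 2.0 * G * lam * w / r2                   # vertical gravity of an infinite horizontal line mass
    gx = -4.0 * G * lam * w * u / r2 ** 2         # horizontal gradient
    gz = 2.0 * G * lam * (w ** 2 - u ** 2) / r2 ** 2   # vertical gradient

    N = 1.0                                       # structural index of a 2-D line source
    A = np.column_stack([gx, gz])                 # Euler: x0*gx + z0*gz = x*gx + z*gz + N*g
    b = x * gx + N * g                            # observations at z = 0, so the z term vanishes
    x0_est, z0_est = np.linalg.lstsq(A, b, rcond=None)[0]
    print(x0_est, z0_est)                         # recovers approximately (40, 15)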

  11. Error Analyses of the North Alabama Lightning Mapping Array (LMA)

    NASA Technical Reports Server (NTRS)

    Koshak, W. J.; Solokiewicz, R. J.; Blakeslee, R. J.; Goodman, S. J.; Christian, H. J.; Hall, J. M.; Bailey, J. C.; Krider, E. P.; Bateman, M. G.; Boccippio, D. J.

    2003-01-01

    Two approaches are used to characterize how accurately the North Alabama Lightning Mapping Array (LMA) is able to locate lightning VHF sources in space and in time. The first method uses a Monte Carlo computer simulation to estimate source retrieval errors. The simulation applies a VHF source retrieval algorithm that was recently developed at the NASA-MSFC and that is similar, but not identical to, the standard New Mexico Tech retrieval algorithm. The second method uses a purely theoretical technique (i.e., chi-squared Curvature Matrix theory) to estimate retrieval errors. Both methods assume that the LMA system has an overall rms timing error of 50ns, but all other possible errors (e.g., multiple sources per retrieval attempt) are neglected. The detailed spatial distributions of retrieval errors are provided. Given that the two methods are completely independent of one another, it is shown that they provide remarkably similar results, except that the chi-squared theory produces larger altitude error estimates than the (more realistic) Monte Carlo simulation.

  12. Understanding EFL Students' Errors in Writing

    ERIC Educational Resources Information Center

    Phuket, Pimpisa Rattanadilok Na; Othman, Normah Binti

    2015-01-01

    Writing is the most difficult skill in English, so most EFL students tend to make errors in writing. In assisting the learners to successfully acquire writing skill, the analysis of errors and the understanding of their sources are necessary. This study attempts to explore the major sources of errors occurred in the writing of EFL students. It…

  13. Target/error overlap in jargonaphasia: The case for a one-source model, lexical and non-lexical summation, and the special status of correct responses.

    PubMed

    Olson, Andrew; Halloran, Elizabeth; Romani, Cristina

    2015-12-01

    We present three jargonaphasic patients who made phonological errors in naming, repetition and reading. We analyse target/response overlap using statistical models to answer three questions: 1) Is there a single phonological source for errors or two sources, one for target-related errors and a separate source for abstruse errors? 2) Can correct responses be predicted by the same distribution used to predict errors or do they show a completion boost (CB)? 3) Is non-lexical and lexical information summed during reading and repetition? The answers were clear. 1) Abstruse errors did not require a separate distribution created by failure to access word forms. Abstruse and target-related errors were the endpoints of a single overlap distribution. 2) Correct responses required a special factor, e.g., a CB or lexical/phonological feedback, to preserve their integrity. 3) Reading and repetition required separate lexical and non-lexical contributions that were combined at output. Copyright © 2015 Elsevier Ltd. All rights reserved.

  14. Measurement-device-independent quantum key distribution with source state errors and statistical fluctuation

    NASA Astrophysics Data System (ADS)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2017-03-01

    We show how to calculate the secure final key rate in the four-intensity decoy-state measurement-device-independent quantum key distribution protocol in the presence of both source errors and statistical fluctuations with a certain failure probability. Our results rely on the range of only a few parameters in the source state. All imperfections in this protocol have been taken into consideration without assuming any specific error patterns of the source.

  15. The impact of individual materials parameters on color temperature reproducibility among phosphor converted LED sources

    NASA Astrophysics Data System (ADS)

    Schweitzer, Susanne; Nemitz, Wolfgang; Sommer, Christian; Hartmann, Paul; Fulmek, Paul; Nicolics, Johann; Pachler, Peter; Hoschopf, Hans; Schrank, Franz; Langer, Gregor; Wenzl, Franz P.

    2014-09-01

    For a systematic approach to improve the white light quality of phosphor converted light-emitting diodes (LEDs) for general lighting applications, it is imperative to get the individual sources of error for color temperature reproducibility under control. In this regard, it is essential to understand how compositional, optical and materials properties of the color conversion element (CCE), which typically consists of phosphor particles embedded in a transparent matrix material, affect the constancy of a desired color temperature of a white LED source. In this contribution we use an LED assembly consisting of an LED die mounted on a printed circuit board (PCB) by chip-on-board technology and a CCE with a glob-top configuration as a model system and discuss the impact of potential sources for color temperature deviation among individual devices. Parameters that are investigated include imprecisions in the amount of materials deposition, deviations from the target value for the phosphor concentration in the matrix material, deviations from the target value for the particle sizes of the phosphor material, deviations from the target values for the refractive indexes of phosphor and matrix material, as well as deviations from the reflectivity of the substrate surface. From these studies, some general conclusions can be drawn as to which of these parameters have the largest impact on color deviation and have to be controlled most precisely in a fabrication process with regard to color temperature reproducibility among individual white LED sources.

  16. Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields

    PubMed Central

    Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne

    2015-01-01

    Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789
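
    The forward step of the Rayleigh-integral holography formulation mentioned above can be sketched as a discrete sum over a sampled normal-velocity distribution. The source size, drive level and medium properties below are assumed for illustration, the sign of the exponent follows an e^{+i omega t} time convention, and this is not the implementation used in the study.

    # Rough numerical sketch of a Rayleigh-integral forward projection:
    # p(r) = (1j*omega*rho0 / (2*pi)) * sum_j v_n_j * exp(-1j*k*R_j) / R_j * dS
    import numpy as np

    c0, rho0, f = 1500.0, 1000.0, 1.0e6                 # water sound speed, density, 1 MHz drive (assumed)
    omega, k = 2.0 * np.pi * f, 2.0 * np.pi * f / c0

    # 10 mm square planar source sampled on a 21 x 21 grid, uniform 1 m/s normal velocity (assumed)
    xs = np.linspace(-5e-3, 5e-3, 21)
    X, Y = np.meshgrid(xs, xs)
    dS = (xs[1] - xs[0]) ** 2
    v_n = np.ones_like(X)

    def rayleigh_pressure(field_point):
        R = np.sqrt((field_point[0] - X) ** 2 + (field_point[1] - Y) ** 2 + field_point[2] ** 2)
        return 1j * omega * rho0 / (2.0 * np.pi) * np.sum(v_n * np.exp(-1j * k * R) / R * dS)

    print(abs(rayleigh_pressure((0.0, 0.0, 50e-3))))    # on-axis pressure magnitude 50 mm from the source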

  17. Emission factors for open and domestic biomass burning for use in atmospheric models

    Treesearch

    S. K. Akagi; R. J. Yokelson; C. Wiedinmyer; M. J. Alvarado; J. S. Reid; T. Karl; J. D. Crounse; P. O. Wennberg

    2010-01-01

    Biomass burning (BB) is the second largest source of trace gases and the largest source of primary fine carbonaceous particles in the global troposphere. Many recent BB studies have provided new emission factor (EF) measurements. This is especially true for non-methane organic compounds (NMOC), which influence secondary organic aerosol (SOA) and ozone formation. New...

  18. Common but unappreciated sources of error in one, two, and multiple-color pyrometry

    NASA Technical Reports Server (NTRS)

    Spjut, R. Erik

    1988-01-01

    The most common sources of error in optical pyrometry are examined. They can be classified as either noise and uncertainty errors, stray radiation errors, or speed-of-response errors. Through judicious choice of detectors and optical wavelengths the effect of noise errors can be minimized, but one should strive to determine as many of the system properties as possible. Careful consideration of the optical-collection system can minimize stray radiation errors. Careful consideration must also be given to the slowest elements in a pyrometer when measuring rapid phenomena.

  19. Syndrome-source-coding and its universal generalization. [error correcting codes for data compression

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1976-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A 'universal' generalization of syndrome-source-coding is formulated which provides robustly effective distortionless coding of source ensembles. Two examples are given, comparing the performance of noiseless universal syndrome-source-coding to (1) run-length coding and (2) Lynch-Davisson-Schalkwijk-Cover universal coding for an ensemble of binary memoryless sources.
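
    A small, concrete instance of syndrome-source-coding with a Hamming(7,4) parity-check matrix may help: the 7-bit source block is treated as an error pattern and only its 3-bit syndrome is stored; decompression returns the minimum-weight (coset-leader) pattern with that syndrome, which reproduces the block exactly whenever it contains at most one 1. The specific matrix and example block are illustrative, not taken from the paper.

    # Syndrome-source-coding toy example: 7 source bits compressed to a 3-bit syndrome.
    import numpy as np

    H = np.array([[1, 0, 1, 0, 1, 0, 1],
                  [0, 1, 1, 0, 0, 1, 1],
                  [0, 0, 0, 1, 1, 1, 1]])        # columns are the binary numbers 1..7

    def compress(block7):
        return H @ np.asarray(block7) % 2        # 3-bit syndrome of the source block

    def decompress(syndrome3):
        block = np.zeros(7, dtype=int)
        position = syndrome3[0] + 2 * syndrome3[1] + 4 * syndrome3[2]   # column index 1..7; 0 means all-zero block
        if position:
            block[position - 1] = 1              # coset leader: the single-bit pattern with this syndrome
        return block

    source_block = [0, 0, 0, 0, 1, 0, 0]         # sparse 7-bit source block (invented)
    s = compress(source_block)
    print(s, decompress(s))                      # the 3 compressed bits reproduce the block exactly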

  20. Surface-water nutrient conditions and sources in the United States Pacific Northwest

    USGS Publications Warehouse

    Wise, D.R.; Johnson, H.M.

    2011-01-01

    The SPAtially Referenced Regressions On Watershed attributes (SPARROW) model was used to perform an assessment of surface-water nutrient conditions and to identify important nutrient sources in watersheds of the Pacific Northwest region of the United States (U.S.) for the year 2002. Our models included variables representing nutrient sources as well as landscape characteristics that affect nutrient delivery to streams. Annual nutrient yields were higher in watersheds on the wetter, west side of the Cascade Range compared to watersheds on the drier, east side. High nutrient enrichment (relative to the U.S. Environmental Protection Agency's recommended nutrient criteria) was estimated in watersheds throughout the region. Forest land was generally the largest source of total nitrogen stream load and geologic material was generally the largest source of total phosphorus stream load generated within the 12,039 modeled watersheds. These results reflected the prevalence of these two natural sources and the low input from other nutrient sources across the region. However, the combined input from agriculture, point sources, and developed land, rather than natural nutrient sources, was responsible for most of the nutrient load discharged from many of the largest watersheds. Our results provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to environmental managers in future water-quality planning efforts.

  1. A Comprehensive Radial Velocity Error Budget for Next Generation Doppler Spectrometers

    NASA Technical Reports Server (NTRS)

    Halverson, Samuel; Ryan, Terrien; Mahadevan, Suvrath; Roy, Arpita; Bender, Chad; Stefansson, Guomundur Kari; Monson, Andrew; Levi, Eric; Hearty, Fred; Blake, Cullen

    2016-01-01

    We describe a detailed radial velocity error budget for the NASA-NSF Extreme Precision Doppler Spectrometer instrument concept NEID (NN-explore Exoplanet Investigations with Doppler spectroscopy). Such an instrument performance budget is a necessity for both identifying the variety of noise sources currently limiting Doppler measurements, and estimating the achievable performance of next generation exoplanet hunting Doppler spectrometers. For these instruments, no single source of instrumental error is expected to set the overall measurement floor. Rather, the overall instrumental measurement precision is set by the contribution of many individual error sources. We use a combination of numerical simulations, educated estimates based on published materials, extrapolations of physical models, results from laboratory measurements of spectroscopic subsystems, and informed upper limits for a variety of error sources to identify likely sources of systematic error and construct our global instrument performance error budget. While natively focused on the performance of the NEID instrument, this modular performance budget is immediately adaptable to a number of current and future instruments. Such an approach is an important step in charting a path towards improving Doppler measurement precisions to the levels necessary for discovering Earth-like planets.

  2. The Accuracy of Webcams in 2D Motion Analysis: Sources of Error and Their Control

    ERIC Educational Resources Information Center

    Page, A.; Moreno, R.; Candelas, P.; Belmar, F.

    2008-01-01

    In this paper, we show the potential of webcams as precision measuring instruments in a physics laboratory. Various sources of error appearing in 2D coordinate measurements using low-cost commercial webcams are discussed, quantifying their impact on accuracy and precision, and simple procedures to control these sources of error are presented.…

  3. Can Family Planning Service Statistics Be Used to Track Population-Level Outcomes?

    PubMed

    Magnani, Robert J; Ross, John; Williamson, Jessica; Weinberger, Michelle

    2018-03-21

    The need for annual family planning program tracking data under the Family Planning 2020 (FP2020) initiative has contributed to renewed interest in family planning service statistics as a potential data source for annual estimates of the modern contraceptive prevalence rate (mCPR). We sought to assess (1) how well a set of commonly recorded data elements in routine service statistics systems could, with some fairly simple adjustments, track key population-level outcome indicators, and (2) whether some data elements performed better than others. We used data from 22 countries in Africa and Asia to analyze 3 data elements collected from service statistics: (1) number of contraceptive commodities distributed to clients, (2) number of family planning service visits, and (3) number of current contraceptive users. Data quality was assessed via analysis of mean square errors, using the United Nations Population Division World Contraceptive Use annual mCPR estimates as the "gold standard." We also examined the magnitude of several components of measurement error: (1) variance, (2) level bias, and (3) slope (or trend) bias. Our results indicate modest levels of tracking error for data on commodities to clients (7%) and service visits (10%), and somewhat higher error rates for data on current users (19%). Variance and slope bias were relatively small for all data elements. Level bias was by far the largest contributor to tracking error. Paired comparisons of data elements in countries that collected at least 2 of the 3 data elements indicated a modest advantage of data on commodities to clients. None of the data elements considered was sufficiently accurate to be used to produce reliable stand-alone annual estimates of mCPR. However, the relatively low levels of variance and slope bias indicate that trends calculated from these 3 data elements can be productively used in conjunction with the Family Planning Estimation Tool (FPET) currently used to produce annual mCPR tracking estimates for FP2020. © Magnani et al.
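
    The error decomposition described above (variance, level bias, slope bias) can be sketched as follows: regress the difference between the service-statistics series and the gold-standard series on centred time, so that the mean squared tracking error splits exactly into a level-bias term, a slope-bias term and a residual-variance term. The two short series are invented solely to show the arithmetic.

    # Hedged sketch of splitting squared tracking error into level bias, slope bias and variance.
    import numpy as np

    years      = np.array([2012, 2013, 2014, 2015, 2016, 2017], dtype=float)
    gold_mcpr  = np.array([24.0, 25.1, 26.3, 27.2, 28.4, 29.1])   # gold-standard estimates (assumed)
    stats_mcpr = np.array([25.5, 26.2, 27.9, 28.3, 30.0, 31.2])   # service-statistics estimates (assumed)

    diff = stats_mcpr - gold_mcpr
    t = years - years.mean()
    slope, level = np.polyfit(t, diff, 1)            # diff ~ level + slope * (year - mean year)
    resid = diff - (level + slope * t)

    mse = np.mean(diff ** 2)
    print("level bias^2 :", level ** 2)
    print("slope bias   :", slope ** 2 * np.mean(t ** 2))
    print("variance     :", np.mean(resid ** 2))
    print("sum equals MSE:", level ** 2 + slope ** 2 * np.mean(t ** 2) + np.mean(resid ** 2), "=", mse)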

  4. Evaluation of airborne topographic lidar for quantifying beach changes

    USGS Publications Warehouse

    2003-01-01

    A scanning airborne topographic lidar was evaluated for its ability to quantify beach topography and changes during the Sandy Duck experiment in 1997 along the North Carolina coast. Elevation estimates, acquired with NASA's Airborne Topographic Mapper (ATM), were compared to elevations measured with three types of ground-based measurements: 1) differential GPS equipped all-terrain vehicle (ATV) that surveyed a 3-km reach of beach from the shoreline to the dune, 2) GPS antenna mounted on a stadia rod used to intensely survey a different 100 m reach of beach, and 3) a second GPS-equipped ATV that surveyed a 70-km-long transect along the coast. Over 40,000 individual intercomparisons between ATM and ground surveys were calculated. RMS vertical differences associated with the ATM when compared to ground measurements ranged from 13 to 19 cm. Considering all of the intercomparisons together, RMS ≃ 15 cm. This RMS error represents a total error for individual elevation estimates including uncertainties associated with random and mean errors. The latter was the largest source of error and was attributed to drift in differential GPS. The ≃ 15 cm vertical accuracy of the ATM is adequate to resolve beach-change signals typical of the impact of storms. For example, ATM surveys of Assateague Island (spanning the border of MD and VA) prior to and immediately following a severe northeaster showed vertical beach changes in places greater than 2 m, much greater than expected errors associated with the ATM. A major asset of airborne lidar is the high spatial data density. Measurements of elevation are acquired every few m2 over regional scales of hundreds of kilometers. Hence, many scales of beach morphology and change can be resolved, from beach cusps tens of meters in wavelength to entire coastal cells comprising tens to hundreds of kilometers of coast. Topographic lidars similar to the ATM are becoming increasingly available from commercial vendors and should, in the future, be widely used in beach surveying.

  5. Evaluation of airborne topographic lidar for quantifying beach changes

    USGS Publications Warehouse

    Sallenger, A.H.; Krabill, W.B.; Swift, R.N.; Brock, J.; List, J.; Hansen, M.; Holman, R.A.; Manizade, S.; Sontag, J.; Meredith, A.; Morgan, K.; Yunkel, J.K.; Frederick, E.B.; Stockdon, H.

    2003-01-01

    A scanning airborne topographic lidar was evaluated for its ability to quantify beach topography and changes during the Sandy Duck experiment in 1997 along the North Carolina coast. Elevation estimates, acquired with NASA's Airborne Topographic Mapper (ATM), were compared to elevations measured with three types of ground-based measurements - 1) differential GPS equipped all-terrain vehicle (ATV) that surveyed a 3-km reach of beach from the shoreline to the dune, 2) GPS antenna mounted on a stadia rod used to intensely survey a different 100 m reach of beach, and 3) a second GPS-equipped ATV that surveyed a 70-km-long transect along the coast. Over 40,000 individual intercomparisons between ATM and ground surveys were calculated. RMS vertical differences associated with the ATM when compared to ground measurements ranged from 13 to 19 cm. Considering all of the intercomparisons together, RMS ≃ 15 cm. This RMS error represents a total error for individual elevation estimates including uncertainties associated with random and mean errors. The latter was the largest source of error and was attributed to drift in differential GPS. The ≃ 15 cm vertical accuracy of the ATM is adequate to resolve beach-change signals typical of the impact of storms. For example, ATM surveys of Assateague Island (spanning the border of MD and VA) prior to and immediately following a severe northeaster showed vertical beach changes in places greater than 2 m, much greater than expected errors associated with the ATM. A major asset of airborne lidar is the high spatial data density. Measurements of elevation are acquired every few m2 over regional scales of hundreds of kilometers. Hence, many scales of beach morphology and change can be resolved, from beach cusps tens of meters in wavelength to entire coastal cells comprising tens to hundreds of kilometers of coast. Topographic lidars similar to the ATM are becoming increasingly available from commercial vendors and should, in the future, be widely used in beach surveying.

  6. DC magnetic fields from the human body generally: a historical overview.

    PubMed

    Cohen, D

    2004-11-30

    A review is presented of the earliest dc magnetic field (dcMF) measurements, made between 1969 and 1983, due to natural currents in the body. The measurements were essentially a mapping over the whole body, except for the brain (dcMEG), which was omitted because of interfering non-neural sources in the head. This mapping can be useful today in interpreting new measurements over the body, especially dcMEG data, where the new authors assume only a neural source in the head; our mapping suggests that this assumption may be in error. Briefly, in our mapping, dcMFs were found over almost the entire body; they were larger over the limbs and head than over the torso proper except over the abdomen, where the field was usually the largest in the body. Some of the sources were: 1. A strong and complicated reflex in the abdomen due to drinking cold water, suggesting that other dcMF reflexes might be common in the body. 2. Long muscle fibers in the limbs, suggesting sources also in scalp muscles. 3. Hair follicles due to touching the scalp; these sources could also exist, unrecognized, in recent dcMEG whole-head measurements. 4. Injury currents from the ischemic human heart, suggesting dcMFs could arise from injured muscle in the body generally. One major mechanism for producing dcMFs appeared to be a change in the potassium ion concentration in the vicinity of long excitable fibers. Overall, we concluded that the dcMFs were complicated, and it may be difficult to identify each source, especially in the head.

  7. Prediction of discretization error using the error transport equation

    NASA Astrophysics Data System (ADS)

    Celik, Ismail B.; Parsons, Don Roscoe

    2017-06-01

    This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
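
    To make the idea concrete, the following is a minimal one-dimensional sketch of the ETE approach described above: the discrete solution of a steady advection-diffusion problem is fitted with a smooth spline, the residual of the differential operator applied to that fit is used as the error source term, and the same discrete operator is then solved for the error. The equation, coefficients, and smoothing settings are illustrative assumptions, not the paper's Navier-Stokes implementation.

      # Hypothetical 1D illustration of the error-transport-equation (ETE) idea:
      # fit the discrete solution with a smooth curve, use the PDE residual as the
      # error source term, and solve a linear error equation on the same grid.
      import numpy as np
      from scipy.interpolate import UnivariateSpline

      a, nu = 1.0, 0.1                       # advection speed and diffusivity (assumed)
      n = 41
      x = np.linspace(0.0, 1.0, n)
      h = x[1] - x[0]

      def f(xx):                             # manufactured forcing so the exact solution is sin(pi x)
          return a*np.pi*np.cos(np.pi*xx) + nu*np.pi**2*np.sin(np.pi*xx)

      # Central-difference discretization of  a u' - nu u'' = f  with u(0) = u(1) = 0
      A = np.zeros((n, n))
      b = f(x)
      A[0, 0] = A[-1, -1] = 1.0
      b[0] = b[-1] = 0.0
      for i in range(1, n - 1):
          A[i, i-1] = -a/(2*h) - nu/h**2
          A[i, i]   =  2*nu/h**2
          A[i, i+1] =  a/(2*h) - nu/h**2
      u_h = np.linalg.solve(A, b)

      # Error source term: residual of the continuous operator applied to a smooth fit
      fit = UnivariateSpline(x, u_h, k=4, s=1e-8)
      residual = a*fit.derivative(1)(x) - nu*fit.derivative(2)(x) - f(x)

      # ETE: the discretization error e = u_exact - u_h satisfies (approximately) the
      # same linear operator driven by minus the residual, with zero boundary values.
      rhs = -residual
      rhs[0] = rhs[-1] = 0.0
      e_pred = np.linalg.solve(A, rhs)

      e_true = np.sin(np.pi*x) - u_h
      print("max |predicted error|:", np.abs(e_pred).max())
      print("max |true error|     :", np.abs(e_true).max())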

  8. Meta sequence analysis of human blood peptides and their parent proteins.

    PubMed

    Bowden, Peter; Pendrak, Voitek; Zhu, Peihong; Marshall, John G

    2010-04-18

    Sequence analysis of the blood peptides and their qualities will be key to understanding the mechanisms that contribute to error in LC-ESI-MS/MS. Analysis of peptides and their proteins at the level of sequences is much more direct and informative than the comparison of disparate accession numbers. A portable database of all blood peptide and protein sequences with descriptor fields and gene ontology terms might be useful for designing immunological or MRM assays from human blood. The results of twelve studies of human blood peptides and/or proteins identified by LC-MS/MS and correlated against a disparate array of genetic libraries were parsed and matched to proteins from the human ENSEMBL, SwissProt and RefSeq databases by SQL. The reported peptide and protein sequences were organized into an SQL database with full protein sequences and up to five unique peptides in order of prevalence along with the peptide count for each protein. Structured query language or BLAST was used to acquire descriptive information in current databases. Sampling error at the level of peptides is the largest source of disparity between groups. Chi Square analysis of peptide to protein distributions confirmed the significant agreement between groups on identified proteins. Copyright 2010. Published by Elsevier B.V.

  9. Thermal Output of WK-Type Strain Gauges on Various Materials at Cryogenic and Elevated Temperatures

    NASA Technical Reports Server (NTRS)

    Kowalkowski, Matthew K.; Rivers, H. Kevin; Smith, Russell W.

    1998-01-01

    Strain gage apparent strain (thermal output) is one of the largest sources of error associated with the measurement of strain when temperatures and mechanical loads are varied. In this paper, experimentally determined apparent strains of WK-type strain gages, installed on both metallic and composite-laminate materials of various lay-ups and resin systems, are presented for temperatures ranging from -450 F to 230 F. For the composite materials, apparent strain was measured in both the 0-degree and 90-degree ply orientations. Metal specimens tested included: aluminum-lithium alloy (Al-Li 2195-T87), aluminum alloy (Al 2219-T87), and titanium alloy. Composite materials tested included: graphite-toughened-epoxy (IM7/997-2), graphite-bismaleimide (IM7/5260), and graphite-K3 (IM7/K3B). The experimentally determined apparent strain data are curve fit with a fourth-order polynomial for each of the materials studied. The apparent strain data and the polynomials that are fit to the data are compared with those produced by the strain gage manufacturer, and the results and comparisons are presented. Unacceptably high errors between the manufacturer's data and the experimentally determined data were observed (especially at temperatures below -270 F).
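
    As an illustration of the curve-fitting step described above, the short sketch below fits apparent-strain (thermal output) data with a fourth-order polynomial in temperature; the temperature and microstrain values are invented for demonstration and do not come from the paper.

      # Hypothetical illustration: fit apparent-strain (thermal output) data with a
      # fourth-order polynomial in temperature, as done for each material in the paper.
      # The temperature/strain values below are made up for illustration only.
      import numpy as np

      temp_F = np.array([-450., -350., -250., -150., -50., 70., 150., 230.])           # deg F
      apparent_strain = np.array([-1800., -1100., -600., -250., -60., 0., 90., 210.])  # microstrain

      coeffs = np.polyfit(temp_F, apparent_strain, deg=4)   # 4th-order least-squares fit
      fit = np.poly1d(coeffs)

      # Apparent strain predicted at an arbitrary test temperature
      print("fitted apparent strain at -300 F: %.1f microstrain" % fit(-300.0))
      # Residuals indicate how well the polynomial represents the measured data
      print("max fit residual: %.1f microstrain" % np.max(np.abs(apparent_strain - fit(temp_F))))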

  10. Lesson and Impressions of the Ghanaian Capital Markets

    DTIC Science & Technology

    2011-07-31

    Gold and cocoa production are major sources of foreign exchange. Interestingly, the country’s largest source of foreign exchange is remittances from...workers abroad. Oil production has expanded. According to industry experts, within 5 years, Ghana is likely to be the third-largest producer of oil...State Department reports the most prominent industries include textiles, apparel, steel, tires, flour milling, cocoa processing, beverages, tobacco

  11. Sources and Delivery of Nutrients to the Northwestern Gulf of Mexico from Streams in the South-Central United States

    USGS Publications Warehouse

    Rebich, R.A.; Houston, N.A.; Mize, S.V.; Pearson, D.K.; Ging, P.B.; Evan, Hornig C.

    2011-01-01

    SPAtially Referenced Regressions On Watershed attributes (SPARROW) models were developed to estimate nutrient inputs [total nitrogen (TN) and total phosphorus (TP)] to the northwestern part of the Gulf of Mexico from streams in the South-Central United States (U.S.). This area included drainages of the Lower Mississippi, Arkansas-White-Red, and Texas-Gulf hydrologic regions. The models were standardized to reflect nutrient sources and stream conditions during 2002. Model predictions of nutrient loads (mass per time) and yields (mass per area per time) generally were greatest in streams in the eastern part of the region and along reaches near the Texas and Louisiana shoreline. The Mississippi River and Atchafalaya River watersheds, which drain nearly two-thirds of the conterminous U.S., delivered the largest nutrient loads to the Gulf of Mexico, as expected. However, the three largest delivered TN yields were from the Trinity River/Galveston Bay, Calcasieu River, and Aransas River watersheds, while the three largest delivered TP yields were from the Calcasieu River, Mermentau River, and Trinity River/Galveston Bay watersheds. Model output indicated that the three largest sources of nitrogen from the region were atmospheric deposition (42%), commercial fertilizer (20%), and livestock manure (unconfined, 17%). The three largest sources of phosphorus were commercial fertilizer (28%), urban runoff (23%), and livestock manure (confined and unconfined, 23%). © 2011 American Water Resources Association. This article is a U.S. Government work and is in the public domain in the USA.

  12. Space-Borne Laser Altimeter Geolocation Error Analysis

    NASA Astrophysics Data System (ADS)

    Wang, Y.; Fang, J.; Ai, Y.

    2018-05-01

    This paper reviews the development of space-borne laser altimetry technology over the past 40 years. Taking the ICESAT satellite as an example, a rigorous space-borne laser altimeter geolocation model is studied, and an error propagation equation is derived. The influence of the main error sources, such as the platform positioning error, attitude measurement error, pointing angle measurement error and range measurement error, on the geolocation accuracy of the laser spot is analysed by simulated experiments. The reasons for the different influences on geolocation accuracy in different directions are discussed, and to meet the accuracy requirements for laser control points, a design index for each error source is put forward.
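
    A highly simplified, flat-Earth sketch of how individual error sources propagate into the horizontal geolocation error of the laser spot is shown below; the altitude, angles, and error magnitudes are assumed values for illustration and are not the ICESAT design figures.

      # Simplified flat-Earth sketch of laser-spot geolocation error propagation.
      # For a near-nadir beam at altitude H, a small attitude/pointing error d_theta
      # shifts the spot horizontally by roughly H*d_theta, a range error maps into
      # the horizontal by roughly sin(theta)*d_range, and the platform position
      # error adds directly. All numbers below are assumed for illustration.
      import numpy as np

      H = 600e3                              # orbit altitude in metres (assumed)
      theta = np.deg2rad(0.3)                # off-nadir pointing angle (assumed)

      d_pos      = 0.05                              # platform position error, m (assumed)
      d_attitude = np.deg2rad(1.5 / 3600.0)          # attitude error, 1.5 arcsec (assumed)
      d_pointing = np.deg2rad(1.0 / 3600.0)          # pointing-angle error, 1.0 arcsec (assumed)
      d_range    = 0.10                              # range measurement error, m (assumed)

      contributions = {
          "platform position": d_pos,
          "attitude":          H * d_attitude,
          "pointing angle":    H * d_pointing,
          "range":             np.sin(theta) * d_range,
      }
      for name, err in contributions.items():
          print("%-18s %7.3f m" % (name, err))
      print("%-18s %7.3f m" % ("combined (RSS)", np.sqrt(sum(e**2 for e in contributions.values()))))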

  13. Sources of heavy metals in urban wastewater in Stockholm.

    PubMed

    Sörme, L; Lagerkvist, R

    2002-10-21

    The sources of heavy metals entering a wastewater treatment plant were investigated. Sources can be actual goods, e.g. runoff from roofs, wear of tires, food, or activities, e.g. large enterprises, car washes. The sources were identified from the metal content in various goods and the emissions from goods to sewage or stormwater. The sources of sewage water and stormwater were categorized to enable comparison with other research and measurements. The categories were households, drainage water, businesses, pipe sediment (all transported in sewage water), atmospheric deposition, traffic, building materials and pipe sediment (transported in stormwater). Results show that it was possible to track the sources of heavy metals for some metals such as Cu and Zn (110 and 100% found, respectively) as well as Ni and Hg (70% found). Other metal sources are still poorly understood or underestimated (Cd 60%, Pb 50%, Cr 20% known). The largest sources of Cu were tap water and roofs. For Zn the largest sources were galvanized material and car washes. In the case of Ni, the largest sources were chemicals used in the WTP and drinking water itself. Finally, for Hg the dominant emission source was the amalgam in teeth. For Pb, Cr and Cd, where sources were more poorly understood, the largest contributors for all were car washes. Estimated results of sources from this study were compared with previous measurements. The comparison shows that the measured contribution from households is higher than that estimated (except Hg), leading to the conclusion that the sources of sewage water from households are still poorly understood or that known sources are underestimated. In the case of stormwater, the estimated contributions are rather well in agreement with measured contributions, although uncertainties are large for both estimations and measurements. Existing pipe sediments in the plumbing system, which release Hg and Pb, could be one explanation for the missing amount of these metals. Large enterprises were found to make a very small contribution, 4% or less for all metals studied. Smaller enterprises (with the exception of car washes) have been shown to make a small contribution in another city; the contribution in this case study is still unknown.

  14. Synthetic temperature profiles derived from Geosat altimetry: Comparison with air-dropped expendable bathythermograph profiles

    NASA Astrophysics Data System (ADS)

    Carnes, Michael R.; Mitchell, Jim L.; de Witt, P. Webb

    1990-10-01

    Synthetic temperature profiles are computed from altimeter-derived sea surface heights in the Gulf Stream region. The required relationships between surface height (dynamic height at the surface relative to 1000 dbar) and subsurface temperature are provided from regression relationships between dynamic height and amplitudes of empirical orthogonal functions (EOFs) of the vertical structure of temperature derived by de Witt (1987). Relationships were derived for each month of the year from historical temperature and salinity profiles from the region surrounding the Gulf Stream northeast of Cape Hatteras. Sea surface heights are derived using two different geoid estimates, the feature-modeled geoid and the air-dropped expendable bathythermograph (AXBT) geoid, both described by Carnes et al. (1990). The accuracy of the synthetic profiles is assessed by comparison to 21 AXBT profile sections which were taken during three surveys along 12 Geosat ERM ground tracks nearly contemporaneously with Geosat overflights. The primary error statistic considered is the root-mean-square (rms) difference between AXBT and synthetic isotherm depths. The two sources of error are the EOF relationship and the altimeter-derived surface heights. EOF-related and surface height-related errors in synthetic temperature isotherm depth are of comparable magnitude; each translates into about a 60-m rms isotherm depth error, or a combined 80 m to 90 m error for isotherms in the permanent thermocline. EOF-related errors are responsible for the absence of the near-surface warm core of the Gulf Stream and for the reduced volume of Eighteen Degree Water in the upper few hundred meters of (apparently older) cold-core rings in the synthetic profiles. The overall rms difference between surface heights derived from the altimeter and those computed from AXBT profiles is 0.15 dyn m when the feature-modeled geoid is used and 0.19 dyn m when the AXBT geoid is used; the portion attributable to altimeter-derived surface height errors alone is 0.03 dyn m less for each. In most cases, the deeper structure of the Gulf Stream and eddies is reproduced well by vertical sections of synthetic temperature, with largest errors typically in regions of high horizontal gradient such as across rings and the Gulf Stream front.

  15. TOWARD ERROR ANALYSIS OF LARGE-SCALE FOREST CARBON BUDGETS

    EPA Science Inventory

    Quantification of forest carbon sources and sinks is an important part of national inventories of net greenhouse gas emissions. Several such forest carbon budgets have been constructed, but little effort has been made to analyse the sources of error and how these errors propagate...

  16. Quantifying errors without random sampling.

    PubMed

    Phillips, Carl V; LaPole, Luwanna M

    2003-06-12

    All quantifications of mortality, morbidity, and other health measures involve numerous sources of error. The routine quantification of random sampling error makes it easy to forget that other sources of error can and should be quantified. When a quantification does not involve sampling, error is almost never quantified and results are often reported in ways that dramatically overstate their precision. We argue that the precision implicit in typical reporting is problematic and sketch methods for quantifying the various sources of error, building up from simple examples that can be solved analytically to more complex cases. There are straightforward ways to partially quantify the uncertainty surrounding a parameter that is not characterized by random sampling, such as limiting reported significant figures. We present simple methods for doing such quantifications, and for incorporating them into calculations. More complicated methods become necessary when multiple sources of uncertainty must be combined. We demonstrate that Monte Carlo simulation, using available software, can estimate the uncertainty resulting from complicated calculations with many sources of uncertainty. We apply the method to the current estimate of the annual incidence of foodborne illness in the United States. Quantifying uncertainty from systematic errors is practical. Reporting this uncertainty would more honestly represent study results, help show the probability that estimated values fall within some critical range, and facilitate better targeting of further research.
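
    The following short sketch illustrates the kind of Monte Carlo propagation described above for combining several non-sampling error sources in a simple incidence calculation; the distributions and parameter values are assumptions chosen for illustration, not the foodborne-illness inputs used in the paper.

      # Minimal sketch of Monte Carlo propagation of several non-sampling error
      # sources through a simple calculation (here a corrected incidence rate).
      # All distributions and parameter values are assumptions for illustration.
      import numpy as np

      rng = np.random.default_rng(42)
      n = 100_000

      reported_cases = 14_000                                       # point value (assumed)
      underreport    = rng.uniform(0.25, 0.55, n)                   # fraction of cases reported (assumed)
      misclassified  = rng.normal(loc=0.05, scale=0.02, size=n)     # fraction falsely included (assumed)
      population     = rng.normal(loc=2.8e6, scale=0.05e6, size=n)  # denominator uncertainty (assumed)

      true_cases = reported_cases * (1.0 - misclassified) / underreport
      incidence_per_100k = 1e5 * true_cases / population

      lo, med, hi = np.percentile(incidence_per_100k, [2.5, 50, 97.5])
      print("incidence per 100,000: %.0f (95%% uncertainty interval %.0f-%.0f)" % (med, lo, hi))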

  17. Cone beam CT-based set-up strategies with and without rotational correction for stereotactic body radiation therapy in the liver.

    PubMed

    Bertholet, Jenny; Worm, Esben; Høyer, Morten; Poulsen, Per

    2017-06-01

    Accurate patient positioning is crucial in stereotactic body radiation therapy (SBRT) due to a high dose regimen. Cone-beam computed tomography (CBCT) is often used for patient positioning based on radio-opaque markers. We compared six CBCT-based set-up strategies with or without rotational correction. Twenty-nine patients with three implanted markers received 3-6 fraction liver SBRT. The markers were delineated on the mid-ventilation phase of a 4D-planning-CT. One pretreatment CBCT was acquired per fraction. Set-up strategy 1 used only translational correction based on manual marker match between the CBCT and planning CT. Set-up strategy 2 used automatic 6 degrees-of-freedom registration of the vertebrae closest to the target. The 3D marker trajectories were also extracted from the projections and the mean position of each marker was calculated and used for set-up strategies 3-6. Translational correction only was used for strategy 3. Translational and rotational corrections were used for strategies 4-6 with the rotation being either vertebrae based (strategy 4), or marker based and constrained to ±3° (strategy 5) or unconstrained (strategy 6). The resulting set-up error was calculated as the 3D root-mean-square set-up error of the three markers. The set-up error of the spinal cord was calculated for all strategies. The bony anatomy set-up (2) had the largest set-up error (5.8 mm). The marker-based set-up with unconstrained rotations (6) had the smallest set-up error (0.8 mm) but the largest spinal cord set-up error (12.1 mm). The marker-based set-up with translational correction only (3) or with bony anatomy rotational correction (4) had equivalent set-up error (1.3 mm) but rotational correction reduced the spinal cord set-up error from 4.1 mm to 3.5 mm. Marker-based set-up was substantially better than bony-anatomy set-up. Rotational correction may improve the set-up, but further investigations are required to determine the optimal correction strategy.

  18. Scattering from binary optics

    NASA Technical Reports Server (NTRS)

    Ricks, Douglas W.

    1993-01-01

    There are a number of sources of scattering in binary optics: etch depth errors, line edge errors, quantization errors, roughness, and the binary approximation to the ideal surface. These sources of scattering can be systematic (deterministic) or random. In this paper, scattering formulas for both systematic and random errors are derived using Fourier optics. These formulas can be used to explain the results of scattering measurements and computer simulations.

  19. Psychrometric Measurement of Leaf Water Potential: Lack of Error Attributable to Leaf Permeability.

    PubMed

    Barrs, H D

    1965-07-02

    A report that low permeability could cause gross errors in psychrometric determinations of water potential in leaves has not been confirmed. No measurable error from this source could be detected for either of two types of thermocouple psychrometer tested on four species, each at four levels of water potential. No source of error other than tissue respiration could be demonstrated.

  20. An empirical examination of WISE/NEOWISE asteroid analysis and results

    NASA Astrophysics Data System (ADS)

    Myhrvold, Nathan

    2017-10-01

    Observations made by the WISE space telescope and subsequent analysis by the NEOWISE project represent the largest corpus of asteroid data to date, describing the diameter, albedo, and other properties of the ~164,000 asteroids in the collection. I present a critical reanalysis of the WISE observational data, and NEOWISE results published in numerous papers and in the JPL Planetary Data System (PDS). This analysis reveals shortcomings and a lack of clarity, both in the original analysis and in the presentation of results. The procedures used to generate NEOWISE results fall short of established thermal modelling standards. Rather than using a uniform protocol, 10 modelling methods were applied to 12 combinations of WISE band data. Over half the NEOWISE results are based on a single band of data. Most NEOWISE curve fits are poor quality, frequently missing many or all the data points. About 30% of the single-band results miss all the data; 43% of the results derived from the most common multiple-band combinations miss all the data in at least one band. The NEOWISE data processing procedures rely on inconsistent assumptions, and introduce bias by systematically discarding much of the original data. I show that the error estimates for the WISE observational data carry a true uncertainty ~1.2 to 1.9 times larger than previously described, and that the error estimates do not follow a normal distribution. These issues call into question the validity of the NEOWISE Monte-Carlo error analysis. Comparing published NEOWISE diameters to published estimates using radar, occultation, or spacecraft measurements (ROS) reveals 150 objects for which the NEOWISE diameters were copied exactly from the ROS source. My findings show that the accuracy of diameter estimates for NEOWISE results depends heavily on the choice of data bands and model. Systematic errors in the diameter estimates are much larger than previously described. Systematic errors for diameters in the PDS range from -3% to +27%. Random errors range from -14% to +19% when using all four WISE bands, and from -45% to +74% in cases using only the W2 band. The results presented here show that much work remains to be done towards understanding asteroid data from WISE/NEOWISE.

  1. Surface-Water Nutrient Conditions and Sources in the United States Pacific Northwest1

    PubMed Central

    Wise, Daniel R; Johnson, Henry M

    2011-01-01

    Abstract The SPAtially Referenced Regressions On Watershed attributes (SPARROW) model was used to perform an assessment of surface-water nutrient conditions and to identify important nutrient sources in watersheds of the Pacific Northwest region of the United States (U.S.) for the year 2002. Our models included variables representing nutrient sources as well as landscape characteristics that affect nutrient delivery to streams. Annual nutrient yields were higher in watersheds on the wetter, west side of the Cascade Range compared to watersheds on the drier, east side. High nutrient enrichment (relative to the U.S. Environmental Protection Agency's recommended nutrient criteria) was estimated in watersheds throughout the region. Forest land was generally the largest source of total nitrogen stream load and geologic material was generally the largest source of total phosphorus stream load generated within the 12,039 modeled watersheds. These results reflected the prevalence of these two natural sources and the low input from other nutrient sources across the region. However, the combined input from agriculture, point sources, and developed land, rather than natural nutrient sources, was responsible for most of the nutrient load discharged from many of the largest watersheds. Our results provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to environmental managers in future water-quality planning efforts. PMID:22457584

  2. Temporal dynamics of conflict monitoring and the effects of one or two conflict sources on error-(related) negativity.

    PubMed

    Armbrecht, Anne-Simone; Wöhrmann, Anne; Gibbons, Henning; Stahl, Jutta

    2010-09-01

    The present electrophysiological study investigated the temporal development of response conflict and the effects of diverging conflict sources on error(-related) negativity (Ne). Eighteen participants performed a combined stop-signal flanker task, which was comprised of two different conflict sources: a left-right and a go-stop response conflict. It is assumed that the Ne reflects the activity of a conflict monitoring system and thus increases according to (i) the number of conflict sources and (ii) the temporal development of the conflict activity. No increase of the Ne amplitude after double errors (comprising two conflict sources) as compared to hand- and stop-errors (comprising one conflict source) was found, whereas a higher Ne amplitude was observed after a delayed stop-signal onset. The results suggest that the Ne is not sensitive to an increase in the number of conflict sources, but to the temporal dynamics of a go-stop response conflict. Copyright (c) 2010 Elsevier B.V. All rights reserved.

  3. Thermospheric mass density model error variance as a function of time scale

    NASA Astrophysics Data System (ADS)

    Emmert, J. T.; Sutton, E. K.

    2017-12-01

    In the increasingly crowded low-Earth orbit environment, accurate estimation of orbit prediction uncertainties is essential for collision avoidance. Poor characterization of such uncertainty can result in unnecessary and costly avoidance maneuvers (false positives) or disregard of a collision risk (false negatives). Atmospheric drag is a major source of orbit prediction uncertainty, and is particularly challenging to account for because it exerts a cumulative influence on orbital trajectories and is therefore not amenable to representation by a single uncertainty parameter. To address this challenge, we examine the variance of measured accelerometer-derived and orbit-derived mass densities with respect to predictions by thermospheric empirical models, using the data-minus-model variance as a proxy for model uncertainty. Our analysis focuses mainly on the power spectrum of the residuals, and we construct an empirical model of the variance as a function of time scale (from 1 hour to 10 years), altitude, and solar activity. We find that the power spectral density approximately follows a power-law process but with an enhancement near the 27-day solar rotation period. The residual variance increases monotonically with altitude between 250 and 550 km. There are two components to the variance dependence on solar activity: one component is 180 degrees out of phase (largest variance at solar minimum), and the other component lags 2 years behind solar maximum (largest variance in the descending phase of the solar cycle).
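
    A minimal sketch of the residual-spectrum analysis described above is given below: it estimates the power spectral density of a data-minus-model residual series with Welch's method, fits an approximate power-law slope, and integrates the band above the 27-day frequency. The residual series is synthetic and all parameters are assumptions for illustration.

      # Sketch of characterizing data-minus-model residuals as a function of time
      # scale: estimate the power spectral density with Welch's method, fit an
      # approximate power-law slope, and integrate the band above the 27-day
      # frequency. The residual series is synthetic, for illustration only.
      import numpy as np
      from scipy.signal import welch

      rng = np.random.default_rng(0)
      n = 24 * 365 * 3                            # three years of hourly residuals
      t = np.arange(n)                            # time in hours

      red = np.cumsum(rng.normal(size=n))         # red-noise component
      red = (red - red.mean()) / red.std()
      resid = 0.1*red + 0.05*np.sin(2*np.pi*t/(27*24)) + 0.02*rng.normal(size=n)

      f, psd = welch(resid, fs=1.0, nperseg=8192)     # frequencies in cycles per hour
      pos = f > 0
      slope = np.polyfit(np.log(f[pos]), np.log(psd[pos]), 1)[0]
      band = f > 1.0 / (27*24)                        # scales shorter than ~27 days
      var_short = np.sum(psd[band]) * (f[1] - f[0])
      print("approximate power-law slope of the residual PSD: %.2f" % slope)
      print("residual variance on sub-27-day scales: %.4f" % var_short)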

  4. [Comparison of Google and Yahoo applications for geocoding of postal addresses in epidemiological studies].

    PubMed

    Quesada, Jose Antonio; Nolasco, Andreu; Moncho, Joaquín

    2013-01-01

    Geocoding is the assignment of geographic coordinates to spatial points, which often are postal addresses. The error made in applying this process can introduce bias in estimates of spatiotemporal models in epidemiological studies. No studies have been found that measure the error made in applying this process in Spanish cities. The objective is to evaluate the magnitude and direction of the errors from two free sources (Google and Yahoo) relative to a GPS in two Spanish cities. 30 addresses were geocoded with those two sources and the GPS in Santa Pola (Alicante) and Alicante city. The distances were calculated in metres (median, 95% CI) between the sources and the GPS, globally and according to the status reported by each source. The directionality of the error was evaluated by calculating the location quadrant and applying a Chi-square test. The GPS error was evaluated by geocoding 11 addresses twice at a 4-day interval. The overall median for Google-GPS was 23.2 metres (16.0-32.1) for Santa Pola, and 21.4 metres (14.9-31.1) for Alicante. The overall median for Yahoo was 136.0 metres (19.2-318.5) for Santa Pola, and 23.8 metres (13.6-29.2) for Alicante. Between 73% and 90% of addresses were geocoded with the status "exact or interpolated" (minor error), for which Google and Yahoo had a median error between 19 and 23 metres in the two cities. The GPS had a median error of 13.8 metres (6.7-17.8). No error directionality was detected. The Google error is acceptable and stable in the two cities, so it is a reliable source for geocoding addresses in Spain in epidemiological studies.
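
    A small sketch of the distance computation underlying such comparisons is shown below: the haversine great-circle distance in metres between each geocoded point and its GPS reference, summarized with the median. The coordinates are invented for illustration and are not the study's addresses.

      # Sketch of measuring geocoding error: great-circle (haversine) distance in
      # metres between each source-geocoded point and its GPS reference, summarized
      # with the median. The coordinates below are invented for illustration.
      import math

      def haversine_m(lat1, lon1, lat2, lon2):
          """Great-circle distance in metres between two latitude/longitude points."""
          r = 6371000.0
          p1, p2 = math.radians(lat1), math.radians(lat2)
          dp, dl = math.radians(lat2 - lat1), math.radians(lon2 - lon1)
          a = math.sin(dp/2)**2 + math.cos(p1)*math.cos(p2)*math.sin(dl/2)**2
          return 2 * r * math.asin(math.sqrt(a))

      # (geocoder lat, geocoder lon, GPS lat, GPS lon) -- illustrative values only
      pairs = [
          (38.3452, -0.4810, 38.3450, -0.4812),
          (38.3461, -0.4825, 38.3459, -0.4821),
          (38.1910, -0.5580, 38.1913, -0.5576),
      ]
      dists = sorted(haversine_m(*p) for p in pairs)
      print("per-address error (m):", [round(d, 1) for d in dists])
      print("median error (m): %.1f" % dists[len(dists) // 2])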

  5. Measuring Diagnoses: ICD Code Accuracy

    PubMed Central

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-01-01

    Objective To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. Data Sources/Study Setting The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. Study Design/Methods We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Principal Findings Main error sources along the “patient trajectory” include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the “paper trail” include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. Conclusions By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways. PMID:16178999

  6. Source apportionment of ambient volatile organic compounds in the Pearl River Delta, China: Part II

    NASA Astrophysics Data System (ADS)

    Liu, Ying; Shao, Min; Lu, Sihua; Chang, Chih-Chung; Wang, Jia-Lin; Fu, Linlin

    The chemical mass balance receptor model was applied to the source apportionment of 58 hydrocarbons measured at seven sites in a field campaign that examined regional air quality in the Pearl River Delta (PRD) region in the fall of 2004. A total of 12 volatile organic compound (VOC) emission sources were considered, including gasoline- and diesel-powered vehicle exhausts, headspace vapors of gasoline and diesel fuel, vehicle evaporative emissions, liquid petroleum gas (LPG) leakage, paint vapors, asphalt emissions from paved roads, biomass combustion, coal combustion, the chemical industry, and petroleum refineries. Vehicle exhaust was the largest source of VOCs, contributing to >50% of ambient VOCs at the three urban sites (Guangzhou, Foshan, and Zhongshan). LPG leakage played an important role, representing 8-16% of emissions at most sites in the PRD. Solvent usage was the biggest emitter of VOCs at Dongguan, an industrial site, contributing 33% of ambient VOCs. Similarly, at Xinken, a non-urban site, the evaporation of solvents and coatings was the largest emission source, accounting for 31% of emissions, probably because it was downwind of Dongguan. Local biomass combustion was a noticeable source of VOCs at Xinken; although its contribution was estimated at 14.3%, biomass combustion was the third largest VOC source at this site.
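
    The chemical mass balance calculation can be illustrated with a small non-negative least-squares sketch: measured ambient species concentrations are expressed as a non-negative combination of source profiles, and the fitted coefficients are the source contributions. The species, profiles, and concentrations below are invented for illustration and are not the PRD measurements.

      # Sketch of the chemical mass balance (CMB) idea: measured ambient species
      # concentrations are modeled as a non-negative combination of source
      # profiles, solved here by non-negative least squares. The species, source
      # profiles, and concentrations are invented for illustration only.
      import numpy as np
      from scipy.optimize import nnls

      sources = ["gasoline exhaust", "LPG leakage", "paint solvent"]

      # rows = species (ethane, propane, i-pentane, toluene, ethene);
      # columns = mass fraction of that species in each source profile
      F = np.array([
          [0.05, 0.30, 0.02],
          [0.08, 0.45, 0.01],
          [0.25, 0.10, 0.03],
          [0.20, 0.02, 0.60],
          [0.15, 0.05, 0.01],
      ])
      c = np.array([3.1, 5.6, 4.2, 4.8, 2.4])   # measured ambient concentrations (assumed)

      contrib, resid_norm = nnls(F, c)
      total = contrib.sum()
      for name, s in zip(sources, contrib):
          print("%-18s %5.1f  (%4.1f%% of apportioned mass)" % (name, s, 100*s/total))
      print("residual norm: %.2f" % resid_norm)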

  7. A 1400-MHz survey of 1478 Abell clusters of galaxies

    NASA Technical Reports Server (NTRS)

    Owen, F. N.; White, R. A.; Hilldrup, K. C.; Hanisch, R. J.

    1982-01-01

    Observations of 1478 Abell clusters of galaxies with the NRAO 91-m telescope at 1400 MHz are reported. The measured beam shape was deconvolved from the measured source Gaussian fits in order to estimate the source size and position angle. All detected sources within 0.5 corrected Abell cluster radii are listed, including the cluster number, richness class, distance class, magnitude of the tenth brightest galaxy, redshift estimate, corrected cluster radius in arcmin, right ascension and error, declination and error, total flux density and error, and angular structure for each source.

  8. Reaching nearby sources: comparison between real and virtual sound and visual targets

    PubMed Central

    Parseihian, Gaëtan; Jouffrais, Christophe; Katz, Brian F. G.

    2014-01-01

    Sound localization studies over the past century have predominantly been concerned with directional accuracy for far-field sources. Few studies have examined the condition of near-field sources and distance perception. The current study concerns localization and pointing accuracy by examining source positions in the peripersonal space, specifically those associated with a typical tabletop surface. Accuracy is studied with respect to the reporting hand (dominant or secondary) for auditory sources. Results show no effect of the reporting hand, with azimuthal errors increasing equally for the most extreme source positions. Distance errors show a consistent compression toward the center of the reporting area. A second evaluation is carried out comparing auditory and visual stimuli to examine any bias in reporting protocol or biomechanical difficulties. No common bias error was observed between auditory and visual stimuli indicating that reporting errors were not due to biomechanical limitations in the pointing task. A final evaluation compares real auditory sources and anechoic condition virtual sources created using binaural rendering. Results showed increased azimuthal errors, with virtual source positions being consistently overestimated to more lateral positions, while no significant distance perception was observed, indicating a deficiency in the binaural rendering condition relative to the real stimuli situation. Various potential reasons for this discrepancy are discussed with several proposals for improving distance perception in peripersonal virtual environments. PMID:25228855

  9. International Test Comparisons: Reviewing Translation Error in Different Source Language-Target Language Combinations

    ERIC Educational Resources Information Center

    Zhao, Xueyu; Solano-Flores, Guillermo; Qian, Ming

    2018-01-01

    This article addresses test translation review in international test comparisons. We investigated the applicability of the theory of test translation error--a theory of the multidimensionality and inevitability of test translation error--across source language-target language combinations in the translation of PISA (Programme of International…

  10. Optical linear algebra processors: noise and error-source modeling.

    PubMed

    Casasent, D; Ghosh, A

    1985-06-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAP's) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  11. Optical linear algebra processors - Noise and error-source modeling

    NASA Technical Reports Server (NTRS)

    Casasent, D.; Ghosh, A.

    1985-01-01

    The modeling of system and component noise and error sources in optical linear algebra processors (OLAPs) is considered, with attention to the frequency-multiplexed OLAP. General expressions are obtained for the output produced as a function of various component errors and noise. A digital simulator for this model is discussed.

  12. Using multi-source satellite data to assess snow-cover change in Qinghai-Tibetan Plateau in last decade

    NASA Astrophysics Data System (ADS)

    Jiang, Y.; Chen, F.; Gao, Y.; Barlage, M. J.

    2017-12-01

    Snow cover in the Qinghai-Tibetan Plateau (QTP) is a critical component of the water cycle and affects the regional climate of East Asia. Satellite data from three different sources (i.e., FY3A/B/C, MODIS and IMS) were used to analyze the QTP fractional-snow-cover (FSC) change and associated uncertainties in the last decade. To reduce the high percentage of cloud in FY3A/B/C and MODIS, a four-step cloud removal procedure was applied and effectively reduced the cloud percentage from 40.8-56.1% to 2.2-3.3%. The averaged error introduced by the cloud removal procedure was about 2%, estimated by a random sampling method. Results show that the snow cover in the QTP significantly decreased in the recent 5 years. Three data sets (FY3B, MODIS and IMS) showed significantly decreased annual FSC at all elevation bands from 2012-2016, and a significantly shorter snow season with delayed snow onset and earlier melting. Both IMS and MODIS showed a slight decline in annual FSC in the 2000 to 3000 m elevation range, while MODIS FSC slightly decreased in 2002-2016 and IMS FSC slightly increased from 2006-2016 in the region with elevation higher than 3000 m. Results also show significant uncertainties among the five data sets (FY3A/B/C, MODIS, IMS), although they showed similar fluctuations of daily FSC. IMS had the largest snow-cover extent and highest daily FSC due to its multiple data sources. FY3A/C and MODIS (observed in the morning) had around 5% higher mean FSC than FY3B (observed in the afternoon) due to the 3-hour gap in detection time. The relative error of daily FSC for FY3A, FY3B, FY3C and IMS with respect to MODIS (taken as 'truth') is 23%, -35%, 8% and 63%, respectively, averaged over five elevation bands in 2015-2017.

  13. ROSAT X-Ray Observation of the Second Error Box for SGR 1900+14

    NASA Technical Reports Server (NTRS)

    Li, P.; Hurley, K.; Vrba, F.; Kouveliotou, C.; Meegan, C. A.; Fishman, G. J.; Kulkarni, S.; Frail, D.

    1997-01-01

    The positions of the two error boxes for the soft gamma repeater (SGR) 1900+14 were determined by the "network synthesis" method, which employs observations by the Ulysses gamma-ray burst and CGRO BATSE instruments. The location of the first error box has been observed at optical, infrared, and X-ray wavelengths, resulting in the discovery of a ROSAT X-ray point source and a curious double infrared source. We have recently used the ROSAT HRI to observe the second error box to complete the counterpart search. A total of six X-ray sources were identified within the field of view. None of them falls within the network synthesis error box, and a 3 sigma upper limit to any X-ray counterpart was estimated to be 6.35 x 10^-14 ergs/sq cm/s. The closest source is approximately 3 min. away, and has an estimated unabsorbed flux of 1.5 x 10^-12 ergs/sq cm/s. Unlike the first error box, there is no supernova remnant near the second error box. The closest one, G43.9+1.6, lies approximately 2.6 degrees away. For these reasons, we believe that the first error box is more likely to be the correct one.

  14. Sources of variability and systematic error in mouse timing behavior.

    PubMed

    Gallistel, C R; King, Adam; McDonald, Robert

    2004-01-01

    In the peak procedure, starts and stops in responding bracket the target time at which food is expected. The variability in start and stop times is proportional to the target time (scalar variability), as is the systematic error in the mean center (scalar error). The authors investigated the source of the error and the variability, using head poking in the mouse, with target intervals of 5 s, 15 s, and 45 s, in the standard procedure, and in a variant with 3 different target intervals at 3 different locations in a single trial. The authors conclude that the systematic error is due to the asymmetric location of start and stop decision criteria, and the scalar variability derives primarily from sources other than memory.

  15. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    PubMed Central

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-01-01

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method. PMID:25875188

  16. Pinpointing the North Korea Nuclear tests with body waves scattered by surface topography

    NASA Astrophysics Data System (ADS)

    Wang, N.; Shen, Y.; Bao, X.; Flinders, A. F.

    2017-12-01

    On September 3, 2017, North Korea conducted its sixth and by far the largest nuclear test at the Punggye-ri test site. In this work, we apply a novel full-wave location method that combines a non-linear grid-search algorithm with the 3D strain Green's tensor database to locate this event. We use the first arrivals (Pn waves) and their immediate codas, which are likely dominated by waves scattered by the surface topography near the source, to pinpoint the source location. We assess the solution in the search volume using a least-squares misfit between the observed and synthetic waveforms, which are obtained using the collocated-grid finite difference method on curvilinear grids. We calculate the one standard deviation level of the 'best' solution as a posterior error estimation. Our results show that the waveform based location method allows us to obtain accurate solutions with a small number of stations. The solutions are absolute locations as opposed to relative locations based on relative travel times, because topography-scattered waves depend on the geometric relations between the source and the unique topography near the source. Moreover, we use both differential waveforms and traveltimes to locate pairs of the North Korea tests in years 2016 and 2017 to further reduce the effects of inaccuracies in the reference velocity model (CRUST 1.0). Finally, we compare our solutions with those of other studies based on satellite images and relative traveltimes.

  17. Time Lapse of World’s Largest 3-D Printed Object

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    2016-08-29

    Researchers at the MDF have 3D-printed a large-scale trim tool for a Boeing 777X, the world’s largest twin-engine jet airliner. The additively manufactured tool was printed on the Big Area Additive Manufacturing, or BAAM, machine over a 30-hour period. The team used a thermoplastic pellet composed of 80% ABS plastic and 20% carbon fiber from a local material supplier. The tool has proven to decrease the time, labor, cost and errors associated with traditional manufacturing techniques and to increase energy savings in preliminary testing, and will undergo further long-term testing.

  18. Infrared Time Lapse of World’s Largest 3D-Printed Object

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    None

    Researchers at Oak Ridge National Laboratory have 3D-printed a large-scale trim tool for a Boeing 777X, the world’s largest twin-engine jet airliner. The additively manufactured tool was printed on the Big Area Additive Manufacturing, or BAAM, machine over a 30-hour period. The team used a thermoplastic pellet composed of 80% ABS plastic and 20% carbon fiber from a local material supplier. The tool has proven to decrease the time, labor, cost and errors associated with traditional manufacturing techniques and to increase energy savings in preliminary testing, and will undergo further long-term testing.

  19. Effect of stratospheric aerosol layers on the TOMS/SBUV ozone retrieval

    NASA Technical Reports Server (NTRS)

    Torres, O.; Ahmad, Zia; Pan, L.; Herman, J. R.; Bhartia, P. K.; Mcpeters, R.

    1994-01-01

    An evaluation of the optical effects of stratospheric aerosol layers on total ozone retrieval from space by the TOMS/SBUV type instruments is presented here. Using the Dave radiative transfer model we estimate the magnitude of the errors in the retrieved ozone when polar stratospheric clouds (PSC's) or volcanic aerosol layers interfere with the measurements. The largest errors are produced by optically thick water ice PSC's. Results of simulation experiments on the effect of the Pinatubo aerosol cloud on the Nimbus-7 and Meteor-3 TOMS products are presented.

  20. Safe drinking water and waterborne outbreaks.

    PubMed

    Moreira, N A; Bondelind, M

    2017-02-01

    The present work compiles a review on drinking waterborne outbreaks, with the perspective of production and distribution of microbiologically safe water, during 2000-2014. The outbreaks are categorised in raw water contamination, treatment deficiencies and distribution network failure. The main causes for contamination were: for groundwater, intrusion of animal faeces or wastewater due to heavy rain; in surface water, discharge of wastewater into the water source and increased turbidity and colour; at treatment plants, malfunctioning of the disinfection equipment; and for distribution systems, cross-connections, pipe breaks and wastewater intrusion into the network. Pathogens causing the largest number of affected consumers were Cryptosporidium, norovirus, Giardia, Campylobacter, and rotavirus. The largest number of different pathogens was found for the treatment works and the distribution network. The largest number of affected consumers with gastrointestinal illness was for contamination events from a surface water source, while the largest number of individual events occurred for the distribution network.

  1. Measuring diagnoses: ICD code accuracy.

    PubMed

    O'Malley, Kimberly J; Cook, Karon F; Price, Matt D; Wildes, Kimberly Raiford; Hurdle, John F; Ashton, Carol M

    2005-10-01

    To examine potential sources of errors at each step of the described inpatient International Classification of Diseases (ICD) coding process. The use of disease codes from the ICD has expanded from classifying morbidity and mortality information for statistical purposes to diverse sets of applications in research, health care policy, and health care finance. By describing a brief history of ICD coding, detailing the process for assigning codes, identifying where errors can be introduced into the process, and reviewing methods for examining code accuracy, we help code users more systematically evaluate code accuracy for their particular applications. We summarize the inpatient ICD diagnostic coding process from patient admission to diagnostic code assignment. We examine potential sources of errors at each step and offer code users a tool for systematically evaluating code accuracy. Main error sources along the "patient trajectory" include amount and quality of information at admission, communication among patients and providers, the clinician's knowledge and experience with the illness, and the clinician's attention to detail. Main error sources along the "paper trail" include variance in the electronic and written records, coder training and experience, facility quality-control efforts, and unintentional and intentional coder errors, such as misspecification, unbundling, and upcoding. By clearly specifying the code assignment process and heightening their awareness of potential error sources, code users can better evaluate the applicability and limitations of codes for their particular situations. ICD codes can then be used in the most appropriate ways.

  2. Minimization of model representativity errors in identification of point source emission from atmospheric concentration measurements

    NASA Astrophysics Data System (ADS)

    Sharan, Maithili; Singh, Amit Kumar; Singh, Sarvesh Kumar

    2017-11-01

    Estimation of an unknown atmospheric release from a finite set of concentration measurements is considered an ill-posed inverse problem. Besides ill-posedness, the estimation process is influenced by instrumental errors in the measured concentrations and by model representativity errors. The study highlights the effect of minimizing model representativity errors on the source estimation. This is described in an adjoint modelling framework and followed in three steps. First, an estimation of point source parameters (location and intensity) is carried out using an inversion technique. Second, a linear regression relationship is established between the measured concentrations and the corresponding concentrations predicted using the retrieved source parameters. Third, this relationship is utilized to modify the adjoint functions. Further, source estimation is carried out using these modified adjoint functions to analyse the effect of such modifications. The process is tested for two well-known inversion techniques, renormalization and least-squares. The proposed methodology and inversion techniques are evaluated for a real scenario by using concentration measurements from the Idaho diffusion experiment in low wind stable conditions. With both inversion techniques, a significant improvement is observed in the retrieval of the source estimate after minimizing the representativity errors.
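
    The following is a simplified least-squares sketch of point-source retrieval from concentration measurements, in the spirit of the inversion step described above (it is not the renormalization or adjoint formulation of the paper): for each candidate location on a grid, the best-fitting intensity has a closed form, and the candidate with the smallest residual is kept. The Gaussian-plume forward model and all numbers are assumptions for illustration.

      # Simplified least-squares retrieval of a point source from concentration
      # measurements: for each candidate location on a grid the best-fitting
      # intensity has a closed form, and the candidate with the smallest residual
      # is kept. The Gaussian-plume forward model and all values are assumptions.
      import numpy as np

      def plume(x_rec, y_rec, x_src, y_src, u=2.0, sy=0.08, sz=0.06):
          """Unit-intensity ground-level Gaussian plume, wind along +x (simplified)."""
          dx = x_rec - x_src
          dy = y_rec - y_src
          dx = np.where(dx > 1.0, dx, np.nan)           # receptors must be downwind
          sigy, sigz = sy * dx, sz * dx
          c = np.exp(-0.5 * (dy / sigy)**2) / (np.pi * u * sigy * sigz)
          return np.nan_to_num(c)

      rng = np.random.default_rng(1)
      rx = rng.uniform(200, 800, 12)                    # receptor x positions, m
      ry = rng.uniform(-150, 150, 12)                   # receptor y positions, m
      true_xy, true_q = (50.0, 20.0), 4.0               # true source location (m) and intensity (g/s)
      obs = true_q * plume(rx, ry, *true_xy) * (1 + 0.05 * rng.normal(size=rx.size))

      best = (np.inf, None, None)
      for xs in np.arange(0.0, 120.0, 5.0):
          for ys in np.arange(-60.0, 60.0, 5.0):
              a = plume(rx, ry, xs, ys)
              if not np.any(a > 0):
                  continue
              q = max(np.dot(a, obs) / np.dot(a, a), 0.0)   # closed-form least-squares intensity
              cost = np.sum((obs - q * a)**2)
              if cost < best[0]:
                  best = (cost, (xs, ys), q)
      print("estimated location:", best[1], " estimated intensity: %.2f g/s" % best[2])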

  3. Supermassive Black Holes and Their Host Spheroids. I. Disassembling Galaxies

    NASA Astrophysics Data System (ADS)

    Savorgnan, G. A. D.; Graham, A. W.

    2016-01-01

    Several recent studies have performed galaxy decompositions to investigate correlations between the black hole mass and various properties of the host spheroid, but they have not converged on the same conclusions. This is because their models for the same galaxy were often significantly different and not consistent with each other in terms of fitted components. Using 3.6 μm Spitzer imagery, which is a superb tracer of the stellar mass (superior to the K band), we have performed state-of-the-art multicomponent decompositions for 66 galaxies with directly measured black hole masses. Our sample is the largest to date and, unlike previous studies, contains a large number (17) of spiral galaxies with low black hole masses. We paid careful attention to the image mosaicking, sky subtraction, and masking of contaminating sources. After a scrupulous inspection of the galaxy photometry (through isophotal analysis and unsharp masking) and—for the first time—2D kinematics, we were able to account for spheroids; large-scale, intermediate-scale, and nuclear disks; bars; rings; spiral arms; halos; extended or unresolved nuclear sources; and partially depleted cores. For each individual galaxy, we compared our best-fit model with previous studies, explained the discrepancies, and identified the optimal decomposition. Moreover, we have independently performed one-dimensional (1D) and two-dimensional (2D) decompositions and concluded that, at least when modeling large, nearby galaxies, 1D techniques have more advantages than 2D techniques. Finally, we developed a prescription to estimate the uncertainties on the 1D best-fit parameters for the 66 spheroids that takes into account systematic errors, unlike popular 2D codes that only consider statistical errors.

  4. SUPERMASSIVE BLACK HOLES AND THEIR HOST SPHEROIDS. I. DISASSEMBLING GALAXIES

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savorgnan, G. A. D.; Graham, A. W., E-mail: gsavorgn@astro.swin.edu.au

    Several recent studies have performed galaxy decompositions to investigate correlations between the black hole mass and various properties of the host spheroid, but they have not converged on the same conclusions. This is because their models for the same galaxy were often significantly different and not consistent with each other in terms of fitted components. Using 3.6 μm Spitzer imagery, which is a superb tracer of the stellar mass (superior to the K band), we have performed state-of-the-art multicomponent decompositions for 66 galaxies with directly measured black hole masses. Our sample is the largest to date and, unlike previous studies, contains a large number (17) of spiral galaxies with low black hole masses. We paid careful attention to the image mosaicking, sky subtraction, and masking of contaminating sources. After a scrupulous inspection of the galaxy photometry (through isophotal analysis and unsharp masking) and—for the first time—2D kinematics, we were able to account for spheroids; large-scale, intermediate-scale, and nuclear disks; bars; rings; spiral arms; halos; extended or unresolved nuclear sources; and partially depleted cores. For each individual galaxy, we compared our best-fit model with previous studies, explained the discrepancies, and identified the optimal decomposition. Moreover, we have independently performed one-dimensional (1D) and two-dimensional (2D) decompositions and concluded that, at least when modeling large, nearby galaxies, 1D techniques have more advantages than 2D techniques. Finally, we developed a prescription to estimate the uncertainties on the 1D best-fit parameters for the 66 spheroids that takes into account systematic errors, unlike popular 2D codes that only consider statistical errors.

  5. Evaluation of the depth-integration method of measuring water discharge in large rivers

    USGS Publications Warehouse

    Moody, J.A.; Troutman, B.M.

    1992-01-01

    The depth-integration method for measuring water discharge makes a continuous measurement of the water velocity from the water surface to the bottom at 20 to 40 locations or verticals across a river. It is especially practical for large rivers where river traffic makes it impractical to use boats attached to taglines strung across the river or to use current meters suspended from bridges. This method has the additional advantage over the standard two- and eight-tenths method in that a discharge-weighted suspended-sediment sample can be collected at the same time. When this method is used in large rivers such as the Missouri, Mississippi, and Ohio, a microwave navigation system is used to determine the ship's position at each vertical sampling location across the river, and to make accurate velocity corrections to compensate for ship drift. An essential feature is a hydraulic winch that can lower and raise the current meter at a constant transit velocity so that the velocities at all depths are measured for equal lengths of time. Field calibration measurements show that: (1) the mean velocity measured on the upcast (bottom to surface) is within 1% of the standard mean velocity determined by 9-11 point measurements; (2) if the transit velocity is less than 25% of the mean velocity, then the average error in the mean velocity is 4% or less. The major source of bias error is a result of mounting the current meter above a sounding weight and sometimes above a suspended-sediment sampling bottle, which prevents measurement of the velocity all the way to the bottom. The measured mean velocity is slightly larger than the true mean velocity. This bias error in the discharge is largest in shallow water (approximately 8% for the Missouri River at Hermann, MO, where the mean depth was 4.3 m) and smallest in deeper water (approximately 3% for the Mississippi River at Vicksburg, MS, where the mean depth was 14.5 m). The major source of random error in the discharge is the natural variability of river velocities, which we assumed to be independent and random at each vertical. The standard error of the estimated mean velocity, at an individual vertical sampling location, may be as large as 9% for large sand-bed alluvial rivers. The computed discharge, however, is a weighted mean of these random velocities. Consequently, the standard error of the computed discharge is divided by the square root of the number of verticals, producing typical values between 1 and 2%. The discharges measured by the depth-integrated method agreed within ±5% of those measured simultaneously by the standard two- and eight-tenths, six-tenths, and moving-boat methods. © 1992.
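
    As a side note for readers who want to reproduce the error arithmetic, the following minimal Python sketch shows how a depth-integrated discharge is assembled from per-vertical measurements and how the relative standard error shrinks roughly with the square root of the number of verticals. All numbers are illustrative placeholders, not data from the Missouri, Mississippi, or Ohio measurements.

        import numpy as np

        # Hypothetical mean velocity (m/s), depth (m), and assigned width (m) per vertical
        velocity = np.array([0.9, 1.2, 1.4, 1.5, 1.3, 1.0])
        depth    = np.array([3.0, 4.5, 5.2, 5.0, 4.0, 2.5])
        width    = np.array([50., 50., 50., 50., 50., 50.])

        # Mid-section style discharge: sum of per-vertical unit discharges
        q_vertical = velocity * depth * width            # m^3/s contributed by each vertical
        discharge = q_vertical.sum()

        # If each vertical's mean velocity carries an independent ~9% standard error,
        # the weighted-mean nature of Q reduces the relative error roughly as 1/sqrt(N).
        se_velocity = 0.09
        se_discharge = se_velocity / np.sqrt(len(velocity))
        print(f"Q = {discharge:.0f} m^3/s, approx. relative SE = {se_discharge:.1%}")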

  6. Syndrome source coding and its universal generalization

    NASA Technical Reports Server (NTRS)

    Ancheta, T. C., Jr.

    1975-01-01

    A method of using error-correcting codes to obtain data compression, called syndrome-source-coding, is described in which the source sequence is treated as an error pattern whose syndrome forms the compressed data. It is shown that syndrome-source-coding can achieve arbitrarily small distortion with the number of compressed digits per source digit arbitrarily close to the entropy of a binary memoryless source. A universal generalization of syndrome-source-coding is formulated which provides robustly effective, distortionless coding of source ensembles.
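
    A minimal Python sketch of the syndrome-source-coding idea, using the (7,4) Hamming parity-check matrix as an illustrative code (the paper is not tied to this particular code): the compressed data are the syndrome of the source block, and decompression returns the lowest-weight sequence with that syndrome, which reproduces the source exactly whenever the block is a coset leader.

        import numpy as np

        # Parity-check matrix of the (7,4) Hamming code (illustrative choice)
        H = np.array([[1, 0, 1, 0, 1, 0, 1],
                      [0, 1, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])

        def compress(x):
            """Syndrome of the source block: 7 source bits -> 3 compressed bits."""
            return H @ x % 2

        def decompress(s):
            """Return the minimum-weight (most probable) block with syndrome s."""
            best = None
            for i in range(2 ** H.shape[1]):
                x = np.array([(i >> k) & 1 for k in range(H.shape[1])])
                if np.array_equal(H @ x % 2, s) and (best is None or x.sum() < best.sum()):
                    best = x
            return best

        x = np.array([0, 0, 1, 0, 0, 0, 0])      # sparse block from a low-entropy source
        s = compress(x)
        print(s, decompress(s))                  # recovers x exactly when x is a coset leader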

  7. A data-driven modeling approach to stochastic computation for low-energy biomedical devices.

    PubMed

    Lee, Kyong Ho; Jang, Kuk Jin; Shoeb, Ali; Verma, Naveen

    2011-01-01

    Low-power devices that can detect clinically relevant correlations in physiologically-complex patient signals can enable systems capable of closed-loop response (e.g., controlled actuation of therapeutic stimulators, continuous recording of disease states, etc.). In ultra-low-power platforms, however, hardware error sources are becoming increasingly limiting. In this paper, we present how data-driven methods, which allow us to accurately model physiological signals, also allow us to effectively model and overcome prominent hardware error sources with nearly no additional overhead. Two applications, EEG-based seizure detection and ECG-based arrhythmia-beat classification, are synthesized to a logic-gate implementation, and two prominent error sources are introduced: (1) SRAM bit-cell errors and (2) logic-gate switching errors ('stuck-at' faults). Using patient data from the CHB-MIT and MIT-BIH databases, performance similar to error-free hardware is achieved even for very high fault rates (up to 0.5 for SRAMs and 7 × 10(-2) for logic) that cause computational bit error rates as high as 50%.
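
    For intuition about how stored-bit faults translate into computational bit error rates, here is a minimal Python sketch that flips stored bits independently at a given fault rate; the fault model and sizes are illustrative assumptions, not the synthesized hardware model evaluated in the paper.

        import numpy as np

        rng = np.random.default_rng(0)

        def flip_bits(bits, fault_rate):
            """Flip each stored bit independently with probability fault_rate
            (a crude stand-in for SRAM bit-cell faults)."""
            flips = rng.random(bits.shape) < fault_rate
            return np.bitwise_xor(bits, flips.astype(bits.dtype))

        stored = rng.integers(0, 2, size=100_000, dtype=np.uint8)   # memory contents
        for p in (1e-3, 1e-2, 0.5):
            corrupted = flip_bits(stored, p)
            ber = np.mean(corrupted != stored)
            print(f"fault rate {p:g} -> observed bit error rate {ber:.3f}")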

  8. Verification of concentration time formulae accuracy in Southern Brazil

    NASA Astrophysics Data System (ADS)

    Freitas Ferreira, Pedro; Allasia, Daniel; Herbstrith Froemming, Gabriel; Ribeiro Fontoura, Jessica; Tassi, Rutineia

    2016-04-01

    The time of concentration (TC) of an urban catchment is a fundamental watershed parameter used to compute the peak discharge and/or in the hydrological simulation of sewer systems. In the absence of hydrological data for its estimation, several empirical formulae are used; however, almost none of them have been verified in Brazil, leading to large uncertainties in the estimated value. In this light, several formulae were tested, such as those proposed by Kirpich (and a modification of this equation proposed by the National Transport Bureau of Brazil, DNIT), the U.S. Corps of Engineers, Pasini, Dooge, Johnstone, Ventura and Ven Te Chow, as they are used in Brazil. The verification was accomplished against measured data in 5 sub-basins situated in the Dilúvio basin, a semi-urbanized watershed that contains the most developed area of the city of Porto Alegre. All the rainfall stations were active in the period from the late 1970s until the early 1980s, during the Projeto Dilúvio; today, however, only two of them are still in operation. Porto Alegre is the capital and largest city of the Brazilian southernmost state of Rio Grande do Sul, with a population of approximately 1.6 million inhabitants; it is the tenth most populous city in the country and the centre of Brazil's fourth largest metropolitan area, with almost 4.5 million inhabitants (IBGE, 2010). The city has a humid subtropical climate with high and regular precipitation throughout the year. Most summer rainfall occurs during thunderstorms and an occasional tropical storm, hurricane or cyclone. The results showed an error of around 70% for half of the formulae, with a tendency to underestimate TC values. Among the tested methods, Johnstone gave the best overall result, with an average error of 25%, well ahead of the second best, Dooge, with a 43% average error. The best results were obtained in only one basin, the Dilúvio itself, the largest one, with an area of 25 km², with an error of just 3% for Modified Kirpich and 5% for Dooge. The results show the need for further studies to guide the selection of the TC parameter for ungauged basins in Brazil.
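
    For reference, the SI form of the Kirpich formula tested in the study can be evaluated as in the short Python sketch below; the catchment length and slope values are illustrative, not the Dilúvio sub-basin data.

        def kirpich_tc(length_m, slope_m_per_m):
            """Kirpich (1940) time of concentration in minutes (SI form).
            length_m: main channel length in metres; slope: dimensionless channel slope."""
            return 0.0195 * length_m ** 0.77 * slope_m_per_m ** -0.385

        # Illustrative values only
        tc = kirpich_tc(length_m=8000.0, slope_m_per_m=0.01)
        print(f"Tc ≈ {tc:.0f} min")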

  9. Parameter optimization in biased decoy-state quantum key distribution with both source errors and statistical fluctuations

    NASA Astrophysics Data System (ADS)

    Zhu, Jian-Rong; Li, Jian; Zhang, Chun-Mei; Wang, Qin

    2017-10-01

    The decoy-state method has been widely used in commercial quantum key distribution (QKD) systems. In view of practical decoy-state QKD with both source errors and statistical fluctuations, we propose a universal model of full parameter optimization in biased decoy-state QKD with phase-randomized sources. In addition, we adopt this model to carry out simulations of two widely used sources: the weak coherent source (WCS) and the heralded single-photon source (HSPS). Results show that full parameter optimization can significantly improve not only the secure transmission distance but also the final key generation rate. Moreover, when source errors and statistical fluctuations are taken into account, the performance of decoy-state QKD using the HSPS suffers less than that of decoy-state QKD using the WCS.

  10. Assessment study of lichenometric methods for dating surfaces

    NASA Astrophysics Data System (ADS)

    Jomelli, Vincent; Grancher, Delphine; Naveau, Philippe; Cooley, Daniel; Brunstein, Daniel

    2007-04-01

    In this paper, we discuss the advantages and drawbacks of the most classical approaches used in lichenometry. In particular, we perform a detailed comparison among methods based on the statistical analysis of either the largest lichen diameters recorded on geomorphic features or the frequency of all lichens. To assess the performance of each method, a careful comparison design with well-defined criteria is proposed and applied to two distinct data sets. First, we study 350 tombstones. This represents an ideal test bed because tombstone dates are known and, therefore, the quality of the estimated lichen growth curve can be easily tested for the different techniques. Secondly, 37 moraines from two tropical glaciers are investigated. This analysis corresponds to our real case study. For both data sets, we apply our list of criteria that reflects precision, error measurements and their theoretical foundations when proposing estimated ages and their associated confidence intervals. From this comparison, it clearly appears that two methods, the mean of the n largest lichen diameters and the recent Bayesian method based on extreme value theory, offer the most reliable estimates of moraine and tombstone dates. Concerning the spread of the error, the latter approach provides the smallest uncertainty and it is the only one that takes advantage of the statistical nature of the observations by fitting an extreme value distribution to the largest diameters.
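
    The two best-performing estimators can be sketched in a few lines of Python: the mean of the n largest diameters, and a fit of an extreme-value distribution to the per-block maxima (scipy's genextreme is used here as a simple frequentist stand-in for the paper's Bayesian treatment). The diameters are made-up numbers for illustration.

        import numpy as np
        from scipy.stats import genextreme

        # Hypothetical largest-lichen diameters (mm) measured on blocks of one dated surface
        largest_diameters = np.array([38., 41., 44., 45., 47., 48., 50., 52., 55., 58.])

        # Classical estimator: mean of the n largest diameters
        n = 5
        mean_n_largest = np.sort(largest_diameters)[-n:].mean()

        # Extreme-value alternative: fit a GEV distribution to the per-block maxima
        shape, loc, scale = genextreme.fit(largest_diameters)
        lo, hi = genextreme.interval(0.95, shape, loc=loc, scale=scale)  # spread of the fitted maxima

        print(f"mean of {n} largest: {mean_n_largest:.1f} mm")
        print(f"GEV location {loc:.1f} mm, central 95% range ({lo:.1f}, {hi:.1f}) mm")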

  11. 40 CFR 112.12 - Spill Prevention, Control, and Countermeasure Plan requirements.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... equipment failure or human error at the facility. (c) Bulk storage containers. (1) Not use a container for... means of containment for the entire capacity of the largest single container and sufficient freeboard to... soil conditions. (6) Bulk storage container inspections. (i) Except for containers that meet the...

  12. 40 CFR 112.12 - Spill Prevention, Control, and Countermeasure Plan requirements.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... equipment failure or human error at the facility. (c) Bulk storage containers. (1) Not use a container for... means of containment for the entire capacity of the largest single container and sufficient freeboard to... soil conditions. (6) Bulk storage container inspections. (i) Except for containers that meet the...

  13. 40 CFR 112.12 - Spill Prevention, Control, and Countermeasure Plan requirements.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... equipment failure or human error at the facility. (c) Bulk storage containers. (1) Not use a container for... means of containment for the entire capacity of the largest single container and sufficient freeboard to... soil conditions. (6) Bulk storage container inspections. (i) Except for containers that meet the...

  14. Gap filling strategies and error in estimating annual soil respiration

    USDA-ARS?s Scientific Manuscript database

    Soil respiration (Rsoil) is one of the largest CO2 fluxes in the global carbon (C) cycle. Estimation of annual Rsoil requires extrapolation of survey measurements or gap-filling of automated records to produce a complete time series. While many gap-filling methodologies have been employed, there is ...

  15. Application of the SPARROW model to assess surface-water nutrient conditions and sources in the United States Pacific Northwest

    USGS Publications Warehouse

    Wise, Daniel R.; Johnson, Henry M.

    2013-01-01

    The watershed model SPARROW (Spatially Referenced Regressions on Watershed attributes) was used to estimate mean annual surface-water nutrient conditions (total nitrogen and total phosphorus) and to identify important nutrient sources in catchments of the Pacific Northwest region of the United States for 2002. Model-estimated nutrient yields were generally higher in catchments on the wetter, western side of the Cascade Range than in catchments on the drier, eastern side. The largest source of locally generated total nitrogen stream load in most catchments was runoff from forestland, whereas the largest source of locally generated total phosphorus stream load in most catchments was either geologic material or livestock manure (primarily from grazing livestock). However, the highest total nitrogen and total phosphorus yields were predicted in the relatively small number of catchments where urban sources were the largest contributor to local stream load. Two examples are presented that show how SPARROW results can be applied to large rivers—the relative contribution of different nutrient sources to the total nitrogen load in the Willamette River and the total phosphorus load in the Snake River. The results from this study provided an understanding of the regional patterns in surface-water nutrient conditions and should be useful to researchers and water-quality managers performing local nutrient assessments.

  16. The impact of reflectivity correction and conversion methods to improve precipitation estimation by weather radar for an extreme low-land Mesoscale Convective System

    NASA Astrophysics Data System (ADS)

    Hazenberg, Pieter; Leijnse, Hidde; Uijlenhoet, Remko

    2014-05-01

    Between 25 and 27 August 2010 a long-duration mesoscale convective system was observed above the Netherlands. For most of the country this led to over 15 hours of near-continuous precipitation, which resulted in total event accumulations exceeding 150 mm in the eastern part of the Netherlands. Such accumulations belong to the largest sums ever recorded in this country and gave rise to local flooding. Measuring precipitation by weather radar within such mesoscale convective systems is known to be a challenge, since measurements are affected by multiple sources of error. For the current event the operational weather radar rainfall product only estimated about 30% of the actual amount of precipitation as measured by rain gauges. In the current presentation we will try to identify what gave rise to such large underestimations. In general, weather radar measurement errors can be subdivided into two different groups: 1) errors affecting the volumetric reflectivity measurements taken, and 2) errors related to the conversion of reflectivity values into rainfall intensity and attenuation estimates. To correct for the first group of errors, the quality of the weather radar reflectivity data was improved by successively correcting for 1) clutter and anomalous propagation, 2) radar calibration, 3) wet radome attenuation, 4) signal attenuation and 5) the vertical profile of reflectivity. Such consistent corrections are generally not performed by operational meteorological services. Results show a large improvement in the quality of the precipitation data; however, still only ~65% of the observed accumulations was estimated. To further improve the quality of the precipitation estimates, the second group of errors is corrected for by making use of disdrometer measurements taken in close vicinity of the radar. Based on these data the parameters of a normalized drop size distribution are estimated for the total event as well as for each precipitation type separately (convective, stratiform and undefined). These are then used to obtain coherent parameter sets for the radar reflectivity-rainfall rate (Z-R) and radar reflectivity-attenuation (Z-k) relationships, specifically applicable to this event. By applying a single parameter set to correct for both sources of error, the quality of the rainfall product improves further, leading to >80% of the observed accumulations. However, differentiating between precipitation types yields no better results than using the operational relationships. This leads to the question: how representative are local disdrometer observations for correcting large-scale weather radar measurements? In order to tackle this question a Monte Carlo approach was used to generate >10,000 sets of the normalized drop size distribution parameters and to assess their impact on the estimated precipitation amounts. Results show that a large number of parameter sets yield radar precipitation estimates that closely resemble the observations. However, these optimal sets differ considerably from those obtained from the local disdrometer measurements.
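
    The reflectivity-to-rain-rate conversion step can be illustrated with a short Python sketch using the classic Marshall-Palmer coefficients; the study instead derives event-specific Z-R and Z-k parameters from disdrometer data, so the coefficients below are placeholders.

        import numpy as np

        def rain_rate_from_dbz(dbz, a=200.0, b=1.6):
            """Convert radar reflectivity (dBZ) to rain rate (mm/h) via Z = a * R**b.
            a=200, b=1.6 is the classic Marshall-Palmer pair, used here for illustration."""
            z_linear = 10.0 ** (dbz / 10.0)          # mm^6 m^-3
            return (z_linear / a) ** (1.0 / b)

        for dbz in (20, 30, 40, 50):
            print(dbz, "dBZ ->", round(rain_rate_from_dbz(dbz), 1), "mm/h")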

  17. Estimation of the caesium-137 source term from the Fukushima Daiichi nuclear power plant using a consistent joint assimilation of air concentration and deposition observations

    NASA Astrophysics Data System (ADS)

    Winiarek, Victor; Bocquet, Marc; Duhanyan, Nora; Roustan, Yelva; Saunier, Olivier; Mathieu, Anne

    2014-01-01

    Inverse modelling techniques can be used to estimate the amount of radionuclides and the temporal profile of the source term released in the atmosphere during the accident of the Fukushima Daiichi nuclear power plant in March 2011. In Winiarek et al. (2012b), the lower bounds of the caesium-137 and iodine-131 source terms were estimated with such techniques, using activity concentration measurements. The importance of an objective assessment of prior errors (the observation errors and the background errors) was emphasised for a reliable inversion. In such critical context where the meteorological conditions can make the source term partly unobservable and where only a few observations are available, such prior estimation techniques are mandatory, the retrieved source term being very sensitive to this estimation. We propose to extend the use of these techniques to the estimation of prior errors when assimilating observations from several data sets. The aim is to compute an estimate of the caesium-137 source term jointly using all available data about this radionuclide, such as activity concentrations in the air, but also daily fallout measurements and total cumulated fallout measurements. It is crucial to properly and simultaneously estimate the background errors and the prior errors relative to each data set. A proper estimation of prior errors is also a necessary condition to reliably estimate the a posteriori uncertainty of the estimated source term. Using such techniques, we retrieve a total released quantity of caesium-137 in the interval 11.6-19.3 PBq with an estimated standard deviation range of 15-20% depending on the method and the data sets. The “blind” time intervals of the source term have also been strongly mitigated compared to the first estimations with only activity concentration data.

  18. Influence of precision of emission characteristic parameters on model prediction error of VOCs/formaldehyde from dry building material.

    PubMed

    Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping

    2013-01-01

    Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C.

  19. The influence of LED lighting on task accuracy: time of day, gender and myopia effects

    NASA Astrophysics Data System (ADS)

    Rao, Feng; Chan, A. H. S.; Zhu, Xi-Fang

    2017-07-01

    In this research, task errors were obtained during performance of a marker location task in which the markers were shown on a computer screen under nine LED lighting conditions; three illuminances (100, 300 and 500 lx) and three color temperatures (3000, 4500 and 6500 K). A total of 47 students participated voluntarily in these tasks. The results showed that task errors in the morning were small and nearly constant across the nine lighting conditions. However in the afternoon, the task errors were significantly larger and varied across lighting conditions. The largest errors for the afternoon session occurred when the color temperature was 4500 K and illuminance 500 lx. There were significant differences between task errors in the morning and afternoon sessions. No significant difference between females and males was found. Task errors for high myopia students were significantly larger than for the low myopia students under the same lighting conditions. In summary, the influence of LED lighting on task accuracy during office hours was not gender dependent, but was time of day and myopia dependent.

  20. Seasonal to interannual Arctic sea ice predictability in current global climate models

    NASA Astrophysics Data System (ADS)

    Tietsche, S.; Day, J. J.; Guemas, V.; Hurlin, W. J.; Keeley, S. P. E.; Matei, D.; Msadek, R.; Collins, M.; Hawkins, E.

    2014-02-01

    We establish the first intermodel comparison of seasonal to interannual predictability of present-day Arctic climate by performing coordinated sets of idealized ensemble predictions with four state-of-the-art global climate models. For Arctic sea ice extent and volume, there is potential predictive skill for lead times of up to 3 years, and potential prediction errors have similar growth rates and magnitudes across the models. Spatial patterns of potential prediction errors differ substantially between the models, but some features are robust. Sea ice concentration errors are largest in the marginal ice zone, and in winter they are almost zero away from the ice edge. Sea ice thickness errors are amplified along the coasts of the Arctic Ocean, an effect that is dominated by sea ice advection. These results give an upper bound on the ability of current global climate models to predict important aspects of Arctic climate.

  1. Implicit Monte Carlo with a linear discontinuous finite element material solution and piecewise non-constant opacity

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaeger, Ryan T.; Wollaber, Allan B.; Urbatsch, Todd J.

    2016-02-23

    Here, the non-linear thermal radiative-transfer equations can be solved in various ways. One popular way is the Fleck and Cummings Implicit Monte Carlo (IMC) method. The IMC method was originally formulated with piecewise-constant material properties. For domains with a coarse spatial grid and large temperature gradients, an error known as numerical teleportation may cause artificially non-causal energy propagation and consequently an inaccurate material temperature. Source tilting is a technique to reduce teleportation error by constructing sub-spatial-cell (or sub-cell) emission profiles from which IMC particles are sampled. Several source tilting schemes exist, but some allow teleportation error to persist. We examine the effect of source tilting in problems with a temperature-dependent opacity. Within each cell, the opacity is evaluated continuously from a temperature profile implied by the source tilt. For IMC, this is a new approach to modeling the opacity. We find that applying both source tilting along with a source tilt-dependent opacity can introduce another dominant error that overly inhibits thermal wavefronts. We show that we can mitigate both teleportation and under-propagation errors if we discretize the temperature equation with a linear discontinuous (LD) trial space. Our method is for opacities ~ 1/T^3, but we formulate and test a slight extension for opacities ~ 1/T^3.5, where T is temperature. We find our method avoids errors that can be incurred by IMC with continuous source tilt constructions and piecewise-constant material temperature updates.

  2. Feasibility of Equivalent Dipole Models for Electroencephalogram-Based Brain Computer Interfaces.

    PubMed

    Schimpf, Paul H

    2017-09-15

    This article examines the localization errors of equivalent dipolar sources inverted from the surface electroencephalogram in order to determine the feasibility of using their location as classification parameters for non-invasive brain computer interfaces. Inverse localization errors are examined for two head models: a model represented by four concentric spheres and a realistic model based on medical imagery. It is shown that the spherical model results in localization ambiguity such that a number of dipolar sources, with different azimuths and varying orientations, provide a near match to the electroencephalogram of the best equivalent source. No such ambiguity exists for the elevation of inverted sources, indicating that for spherical head models, only the elevation of inverted sources (and not the azimuth) can be expected to provide meaningful classification parameters for brain-computer interfaces. In a realistic head model, all three parameters of the inverted source location are found to be reliable, providing a more robust set of parameters. In both cases, the residual error hypersurfaces demonstrate local minima, indicating that a search for the best-matching sources should be global. Source localization error vs. signal-to-noise ratio is also demonstrated for both head models.

  3. [Anthropogenic ammonia emission inventory and characteristics in the Pearl River Delta Region].

    PubMed

    Yin, Sha-sha; Zheng, Jun-yu; Zhang, Li-jun; Zhong, Liu-ju

    2010-05-01

    Based on the collected activity data and emission factors of anthropogenic ammonia sources, a 2006-based anthropogenic ammonia emission inventory was developed for the Pearl River Delta (PRD) region by source categories and cities with the use of appropriate estimation methods. The results show: (1) the total NH3 emission from anthropogenic sources in the PRD region was 194.8 kt; (2) agricultural sources were the major contributors among anthropogenic ammonia sources, in which livestock shared 62.1% of the total NH3 emission and the application of nitrogen fertilizers contributed 21.7%; (3) broilers were the largest contributor among the livestock sources, accounting for 43.4% of the livestock emissions, followed by hogs with a contribution of 32.1%; (4) Guangzhou was the largest ammonia-emitting city in the PRD region, followed by Jiangmen, the two accounting for 23.4% and 19.1% of the total NH3 emission in the PRD region, respectively, with livestock and the application of nitrogen fertilizers as the major sources.

  4. Error analysis in stereo vision for location measurement of 3D point

    NASA Astrophysics Data System (ADS)

    Li, Yunting; Zhang, Jun; Tian, Jinwen

    2015-12-01

    Location measurement of a 3D point in stereo vision is subject to different sources of uncertainty that propagate to the final result. Most current methods of error analysis are based on an ideal intersection model that calculates the uncertainty region of the point location by intersecting the two pixel fields of view, which may produce loose bounds. Besides, only a few sources of error, such as pixel error or camera position, are taken into account in the analysis. In this paper we present a straightforward and practical method to estimate the location error that takes most sources of error into account. We summed up and simplified all the input errors into five parameters by a rotation transformation. Then we use the fast midpoint-method algorithm to deduce the mathematical relationships between the target point and the parameters. Thus, the expectation and covariance matrix of the 3D point location are obtained, which constitute the uncertainty region of the point location. Afterwards, we examine the propagation of the primitive input errors through the stereo system and throughout the whole analysis process, from primitive input errors to localization error. Our method has the same level of computational complexity as the state-of-the-art method. Finally, extensive experiments are performed to verify the performance of our methods.
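
    As a simplified illustration of how input errors propagate to localization error, the Python sketch below applies first-order propagation of disparity error to depth for an ideal rectified stereo pair; this is a textbook special case, not the midpoint-method derivation used in the paper.

        def depth_and_sigma(f_px, baseline_m, disparity_px, sigma_d_px):
            """First-order propagation of disparity error to depth error for a
            rectified stereo pair: Z = f*B/d, sigma_Z ~= (Z**2 / (f*B)) * sigma_d."""
            z = f_px * baseline_m / disparity_px
            sigma_z = (z ** 2) / (f_px * baseline_m) * sigma_d_px
            return z, sigma_z

        # Illustrative camera parameters and measurement noise
        z, sz = depth_and_sigma(f_px=1000.0, baseline_m=0.3, disparity_px=20.0, sigma_d_px=0.5)
        print(f"Z = {z:.1f} m, sigma_Z = {sz:.2f} m")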

  5. Adaptation to sensory-motor reflex perturbations is blind to the source of errors.

    PubMed

    Hudson, Todd E; Landy, Michael S

    2012-01-06

    In the study of visual-motor control, perhaps the most familiar findings involve adaptation to externally imposed movement errors. Theories of visual-motor adaptation based on optimal information processing suppose that the nervous system identifies the sources of errors to effect the most efficient adaptive response. We report two experiments using a novel perturbation based on stimulating a visually induced reflex in the reaching arm. Unlike adaptation to an external force, our method induces a perturbing reflex within the motor system itself, i.e., perturbing forces are self-generated. This novel method allows a test of the theory that error source information is used to generate an optimal adaptive response. If the self-generated source of the visually induced reflex perturbation is identified, the optimal response will be via reflex gain control. If the source is not identified, a compensatory force should be generated to counteract the reflex. Gain control is the optimal response to reflex perturbation, both because energy cost and movement errors are minimized. Energy is conserved because neither reflex-induced nor compensatory forces are generated. Precision is maximized because endpoint variance is proportional to force production. We find evidence against source-identified adaptation in both experiments, suggesting that sensory-motor information processing is not always optimal.

  6. Categorizing accident sequences in the external radiotherapy for risk analysis

    PubMed Central

    2013-01-01

    Purpose This study identifies accident sequences from the past accidents in order to help the risk analysis application to the external radiotherapy. Materials and Methods This study reviews 59 accidental cases in two retrospective safety analyses that have collected the incidents in the external radiotherapy extensively. Two accident analysis reports that accumulated past incidents are investigated to identify accident sequences including initiating events, failure of safety measures, and consequences. This study classifies the accidents by the treatments stages and sources of errors for initiating events, types of failures in the safety measures, and types of undesirable consequences and the number of affected patients. Then, the accident sequences are grouped into several categories on the basis of similarity of progression. As a result, these cases can be categorized into 14 groups of accident sequence. Results The result indicates that risk analysis needs to pay attention to not only the planning stage, but also the calibration stage that is committed prior to the main treatment process. It also shows that human error is the largest contributor to initiating events as well as to the failure of safety measures. This study also illustrates an event tree analysis for an accident sequence initiated in the calibration. Conclusion This study is expected to provide sights into the accident sequences for the prospective risk analysis through the review of experiences. PMID:23865005

  7. Relationship auditing of the FMA ontology

    PubMed Central

    Gu, Huanying (Helen); Wei, Duo; Mejino, Jose L.V.; Elhanan, Gai

    2010-01-01

    The Foundational Model of Anatomy (FMA) ontology is a domain reference ontology based on a disciplined modeling approach. Due to its large size, semantic complexity and manual data entry process, errors and inconsistencies are unavoidable and might remain within the FMA structure without detection. In this paper, we present computable methods to highlight candidate concepts for various relationship assignment errors. The process starts with locating structures formed by transitive structural relationships (part_of, tributary_of, branch_of) and examine their assignments in the context of the IS-A hierarchy. The algorithms were designed to detect five major categories of possible incorrect relationship assignments: circular, mutually exclusive, redundant, inconsistent, and missed entries. A domain expert reviewed samples of these presumptive errors to confirm the findings. Seven thousand and fifty-two presumptive errors were detected, the largest proportion related to part_of relationship assignments. The results highlight the fact that errors are unavoidable in complex ontologies and that well designed algorithms can help domain experts to focus on concepts with high likelihood of errors and maximize their effort to ensure consistency and reliability. In the future similar methods might be integrated with data entry processes to offer real-time error detection. PMID:19475727
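
    The circular-relationship check is the easiest of the five categories to illustrate; the Python sketch below runs a depth-first search over a toy part_of graph (the anatomy entries are invented, not actual FMA content) and reports any cycle it finds.

        # Toy part_of relation: child -> list of parents; one entry is deliberately wrong
        part_of = {
            "left ventricle": ["heart"],
            "heart": ["thorax"],
            "thorax": ["body"],
            "body": ["left ventricle"],      # creates a circular assignment
        }

        def find_cycles(edges):
            cycles, state = [], {}           # 0 = unseen, 1 = on current path, 2 = finished
            def dfs(node, path):
                state[node] = 1
                for parent in edges.get(node, []):
                    if state.get(parent, 0) == 1:            # back edge -> cycle
                        cycles.append(path[path.index(parent):] + [parent])
                    elif state.get(parent, 0) == 0:
                        dfs(parent, path + [parent])
                state[node] = 2
            for n in edges:
                if state.get(n, 0) == 0:
                    dfs(n, [n])
            return cycles

        print(find_cycles(part_of))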

  8. Acquisition of Pragmatic Routines by Learners of L2 English: Investigating Common Errors and Sources of Pragmatic Fossilization

    ERIC Educational Resources Information Center

    Tajeddin, Zia; Alemi, Minoo; Pashmforoosh, Roya

    2017-01-01

    Unlike linguistic fossilization, pragmatic fossilization has received scant attention in fossilization research. To bridge this gap, the present study adopted a typical-error method of fossilization research to identify the most frequent errors in pragmatic routines committed by Persian-speaking learners of L2 English and explore the sources of…

  9. Minimizing Artifacts and Biases in Chamber-Based Measurements of Soil Respiration

    NASA Astrophysics Data System (ADS)

    Davidson, E. A.; Savage, K.

    2001-05-01

    Soil respiration is one of the largest and most important fluxes of carbon in terrestrial ecosystems. The objectives of this paper are to review concerns about uncertainties of chamber-based measurements of CO2 emissions from soils, to evaluate the direction and magnitude of these potential errors, and to explain procedures that minimize these errors and biases. Disturbance of diffusion gradients cause underestimate of fluxes by less than 15% in most cases, and can be partially corrected for with curve fitting and/or can be minimized by using brief measurement periods. Under-pressurization or over-pressurization of the chamber caused by flow restrictions in air circulating designs can cause large errors, but can also be avoided with properly sized chamber vents and unrestricted flows. Somewhat larger pressure differentials are observed under windy conditions, and the accuracy of measurements made under such conditions needs more research. Spatial and temporal heterogeneity can be addressed with appropriate chamber sizes and numbers and frequency of sampling. For example, means of 8 randomly chosen flux measurements from a population of 36 measurements made with 300 cm2 chambers in tropical forests and pastures were within 25% of the full population mean 98% of the time and were within 10% of the full population mean 70% of the time. Comparisons of chamber-based measurements with tower-based measurements of total ecosystem respiration require analysis of the scale of variation within the purported tower footprint. In a forest at Howland, Maine, the differences in soil respiration rates among very poorly drained and well drained soils were large, but they mostly were fortuitously cancelled when evaluated for purported tower footprints of 600-2100 m length. While all of these potential sources of measurement error and sampling biases must be carefully considered, properly designed and deployed chambers provide a reliable means of accurately measuring soil respiration in terrestrial ecosystems.

  10. Exception handling for sensor fusion

    NASA Astrophysics Data System (ADS)

    Chavez, G. T.; Murphy, Robin R.

    1993-08-01

    This paper presents a control scheme for handling sensing failures (sensor malfunctions, significant degradations in performance due to changes in the environment, and errant expectations) in sensor fusion for autonomous mobile robots. The advantages of the exception handling mechanism are that it emphasizes a fast response to sensing failures, is able to use only a partial causal model of sensing failure, and leads to a graceful degradation of sensing if the sensing failure cannot be compensated for. The exception handling mechanism consists of two modules: error classification and error recovery. The error classification module in the exception handler attempts to classify the type and source(s) of the error using a modified generate-and-test procedure. If the source of the error is isolated, the error recovery module examines its cache of recovery schemes, which either repair or replace the current sensing configuration. If the failure is due to an error in expectation or cannot be identified, the planner is alerted. Experiments using actual sensor data collected by the CSM Mobile Robotics/Machine Perception Laboratory's Denning mobile robot demonstrate the operation of the exception handling mechanism.

  11. Introduction to CAUSES: Description of Weather and Climate Models and Their Near-Surface Temperature Errors in 5 day Hindcasts Near the Southern Great Plains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morcrette, C. J.; Van Weverberg, K.; Ma, H. -Y.

    The Clouds Above the United States and Errors at the Surface (CAUSES) project is aimed at gaining a better understanding of the physical processes that are leading to the creation of warm screen-temperature biases over the American Midwest, which are seen in many numerical models. Here in Part 1, a series of 5-day hindcasts, each initialised from re-analyses and performed by 11 different models, are evaluated against screen-temperature observations. All the models have a warm bias over parts of the Midwest. Several ways of quantifying the impact of the initial conditions on the evolution of the simulations are presented, showing that within a day or so all models have produced a warm bias that is representative of their bias after 5 days, and not closely tied to the conditions at the initial time. Although the surface temperature biases sometimes coincide with locations where the re-analyses themselves have a bias, there are many regions in each of the models where biases grow over the course of 5 days or are larger than the biases present in the reanalyses. At the Southern Great Plains site, the model biases are shown to not be confined to the surface, but extend several kilometres into the atmosphere. In most of the models, there is a strong diurnal cycle in the screen-temperature bias and in some models the biases are largest around midday, while in the others it is largest during the night. While the different physical processes that are contributing to a given model having a screen-temperature error will be discussed in more detail in the companion papers (Parts 2 and 3), the fact that there is a spatial coherence in the phase of the diurnal cycle of the error across wide regions and that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP suggests that the detailed evaluations of the role of different processes in contributing to errors at SGP will be representative of errors that are prevalent over a much larger spatial scale.

  12. Introduction to CAUSES: Description of weather and climate models and their near-surface temperature errors in 5-day hindcasts near the Southern Great Plains

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Morcrette, Cyril J.; Van Weverberg, Kwinten; Ma, H

    2018-02-16

    The Clouds Above the United States and Errors at the Surface (CAUSES) project is aimed at gaining a better understanding of the physical processes that are leading to the creation of warm screen-temperature biases over the American Midwest, which are seen in many numerical models. Here in Part 1, a series of 5-day hindcasts, each initialised from re-analyses and performed by 11 different models, are evaluated against screen-temperature observations. All the models have a warm bias over parts of the Midwest. Several ways of quantifying the impact of the initial conditions on the evolution of the simulations are presented, showing that within a day or so all models have produced a warm bias that is representative of their bias after 5 days, and not closely tied to the conditions at the initial time. Although the surface temperature biases sometimes coincide with locations where the re-analyses themselves have a bias, there are many regions in each of the models where biases grow over the course of 5 days or are larger than the biases present in the reanalyses. At the Southern Great Plains site, the model biases are shown to not be confined to the surface, but extend several kilometres into the atmosphere. In most of the models, there is a strong diurnal cycle in the screen-temperature bias and in some models the biases are largest around midday, while in the others it is largest during the night. While the different physical processes that are contributing to a given model having a screen-temperature error will be discussed in more detail in the companion papers (Parts 2 and 3), the fact that there is a spatial coherence in the phase of the diurnal cycle of the error across wide regions and that there are numerous locations across the Midwest where the diurnal cycle of the error is highly correlated with the diurnal cycle of the error at SGP suggests that the detailed evaluations of the role of different processes in contributing to errors at SGP will be representative of errors that are prevalent over a much larger spatial scale.

  13. Accuracy analysis and design of A3 parallel spindle head

    NASA Astrophysics Data System (ADS)

    Ni, Yanbing; Zhang, Biao; Sun, Yupeng; Zhang, Yuan

    2016-03-01

    As functional components of machine tools, parallel mechanisms are widely used in the high-efficiency machining of aviation components, and accuracy is one of their critical technical indexes. Many researchers have focused on the accuracy problem of parallel mechanisms, but further efforts are required to control errors and improve accuracy at the design and manufacturing stage. Aiming at the accuracy design of a 3-DOF parallel spindle head (A3 head), its error model, sensitivity analysis and tolerance allocation are investigated. Based on the inverse kinematic analysis, the error model of the A3 head is established by using first-order perturbation theory and the vector chain method. According to the mapping property of the motion and constraint Jacobian matrices, the compensatable and uncompensatable error sources which affect the accuracy of the end-effector are separated. Furthermore, sensitivity analysis is performed on the uncompensatable error sources. A sensitivity probabilistic model is established and a global sensitivity index is proposed to analyze the influence of the uncompensatable error sources on the accuracy of the end-effector of the mechanism. The results show that orientation error sources have a larger effect on the accuracy of the end-effector. Based on the sensitivity analysis results, the tolerance design is converted into a nonlinearly constrained optimization problem with minimum manufacturing cost as the objective. By utilizing a genetic algorithm, the allocation of the tolerances on each component is finally determined. According to the tolerance allocation results, the tolerance ranges of ten kinds of geometric error sources are obtained. These research achievements can provide fundamental guidelines for component manufacturing and assembly of this kind of parallel mechanism.

  14. Measurement-device-independent quantum key distribution with correlated source-light-intensity errors

    NASA Astrophysics Data System (ADS)

    Jiang, Cong; Yu, Zong-Wen; Wang, Xiang-Bin

    2018-04-01

    We present an analysis for measurement-device-independent quantum key distribution with correlated source-light-intensity errors. Numerical results show that the results here can greatly improve the key rate especially with large intensity fluctuations and channel attenuation compared with prior results if the intensity fluctuations of different sources are correlated.

  15. On estimating the basin-scale ocean circulation from satellite altimetry. Part 1: Straightforward spherical harmonic expansion

    NASA Technical Reports Server (NTRS)

    Tai, Chang-Kou

    1988-01-01

    Direct estimation of the absolute dynamic topography from satellite altimetry has been confined to the largest scales (basically the basin-scale) owing to the fact that the signal-to-noise ratio is more unfavorable everywhere else. But even for the largest scales, the results are contaminated by the orbit error and geoid uncertainties. Recently a more accurate Earth gravity model (GEM-T1) became available, providing the opportunity to examine the whole question of direct estimation under a more critical limelight. It is found that our knowledge of the Earth's gravity field has indeed improved a great deal. However, it is not yet possible to claim definitively that our knowledge of the ocean circulation has improved through direct estimation. Yet, the improvement in the gravity model has come to the point that it is no longer possible to attribute the discrepancy at the basin scales between altimetric and hydrographic results as mostly due to geoid uncertainties. A substantial part of the difference must be due to other factors; i.e., the orbit error, or the uncertainty of the hydrographically derived dynamic topography.

  16. INTERDISCIPLINARY PHYSICS AND RELATED AREAS OF SCIENCE AND TECHNOLOGY: Relaxation Property and Stability Analysis of the Quasispecies Models

    NASA Astrophysics Data System (ADS)

    Feng, Xiao-Li; Li, Yu-Xiao; Gu, Jian-Zhong; Zhuo, Yi-Zhong

    2009-10-01

    The relaxation property of both the Eigen model and the Crow-Kimura model with a single-peak fitness landscape is studied from a phase transition point of view. We first analyze the eigenvalue spectra of the replication-mutation matrices. For sufficiently long sequences, the near-crossing point between the largest and second-largest eigenvalues locates the error threshold, at which critical slowing-down behavior appears. We calculate the critical exponent in the limit of infinite sequence length and compare it with the result from numerical curve fitting at sufficiently long sequences. We find that for both models the relaxation time diverges with exponent 1 at the error (mutation) threshold point. Results obtained from both methods agree quite well. From the feature of unlimited correlation length, the first-order phase transition is further confirmed. Finally, with linear stability theory, we show that the two model systems are stable for all ranges of mutation rate. The Eigen model is asymptotically stable in terms of mutant classes, and the Crow-Kimura model is completely stable.
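
    The eigenvalue near-crossing can be reproduced numerically with a short Python sketch for a single-peak fitness landscape; the sequence length and fitness advantage below are illustrative choices, and for these values the gap between the two largest eigenvalues closes near the per-site mutation rate at which sigma*(1-mu)^L drops to about 1.

        import numpy as np
        from itertools import product

        L, sigma = 8, 10.0                                           # length and master-sequence advantage (illustrative)
        seqs = np.array(list(product([0, 1], repeat=L)))
        fitness = np.ones(len(seqs)); fitness[0] = sigma             # single peak at the all-zero sequence
        hamming = (seqs[:, None, :] != seqs[None, :, :]).sum(-1)     # pairwise Hamming distances

        for mu in (0.05, 0.10, 0.20, 0.30):                          # per-site mutation rate
            Q = (mu ** hamming) * ((1.0 - mu) ** (L - hamming))      # mutation matrix
            W = Q * fitness[None, :]                                 # replication-mutation matrix W_ij = Q_ij * A_j
            top = np.sort(np.linalg.eigvals(W).real)[::-1][:2]
            print(f"mu = {mu:.2f}: lambda_1 = {top[0]:.3f}, lambda_2 = {top[1]:.3f}")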

  17. The impact of work-related stress on medication errors in Eastern Region Saudi Arabia.

    PubMed

    Salam, Abdul; Segal, David M; Abu-Helalah, Munir Ahmad; Gutierrez, Mary Lou; Joosub, Imran; Ahmed, Wasim; Bibi, Rubina; Clarke, Elizabeth; Qarni, Ali Ahmed Al

    2018-05-07

    To examine the relationship between overall and source-specific work-related stress and the medication error rate. A cross-sectional study examined the relationship between overall levels of stress, 25 source-specific work-related stressors and the medication error rate based on documented incident reports in a Saudi Arabia (SA) hospital, using secondary databases. King Abdulaziz Hospital in Al-Ahsa, Eastern Region, SA. Two hundred and sixty-nine healthcare professionals (HCPs). The odds ratio (OR) and corresponding 95% confidence interval (CI) for HCPs' documented incident-report medication errors and self-reported sources of stress from the Job Stress Survey. Multiple logistic regression analysis identified source-specific work-related stress as significantly associated with HCPs who made at least one medication error per month (P < 0.05), including disruption to home life, pressure to meet deadlines, difficulties with colleagues, excessive workload, income over 10 000 riyals and compulsory night/weekend call duties either some or all of the time. Although not statistically significant, HCPs who reported overall stress were two times more likely to make at least one medication error per month than non-stressed HCPs (OR: 1.95, P = 0.081). This is the first study to use documented incident reports for medication errors rather than self-report to evaluate the level of stress-related medication errors in SA HCPs. Job demands, such as social stressors (home life disruption, difficulties with colleagues), time pressures, structural determinants (compulsory night/weekend call duties) and higher income, were significantly associated with medication errors, whereas overall stress revealed a 2-fold higher trend.

  18. Quantifying Data Quality for Clinical Trials Using Electronic Data Capture

    PubMed Central

    Nahm, Meredith L.; Pieper, Carl F.; Cunningham, Maureen M.

    2008-01-01

    Background Historically, only partial assessments of data quality have been performed in clinical trials, for which the most common method of measuring database error rates has been to compare the case report form (CRF) to database entries and count discrepancies. Importantly, errors arising from medical record abstraction and transcription are rarely evaluated as part of such quality assessments. Electronic Data Capture (EDC) technology has had a further impact, as paper CRFs typically leveraged for quality measurement are not used in EDC processes. Methods and Principal Findings The National Institute on Drug Abuse Treatment Clinical Trials Network has developed, implemented, and evaluated methodology for holistically assessing data quality on EDC trials. We characterize the average source-to-database error rate (14.3 errors per 10,000 fields) for the first year of use of the new evaluation method. This error rate was significantly lower than the average of published error rates for source-to-database audits, and was similar to CRF-to-database error rates reported in the published literature. We attribute this largely to an absence of medical record abstraction on the trials we examined, and to an outpatient setting characterized by less acute patient conditions. Conclusions Historically, medical record abstraction is the most significant source of error by an order of magnitude, and should be measured and managed during the course of clinical trials. Source-to-database error rates are highly dependent on the amount of structured data collection in the clinical setting and on the complexity of the medical record, dependencies that should be considered when developing data quality benchmarks. PMID:18725958

  19. Multiplate Radiation Shields: Investigating Radiational Heating Errors

    NASA Astrophysics Data System (ADS)

    Richardson, Scott James

    1995-01-01

    Multiplate radiation shield errors are examined using the following techniques: (1) analytic heat transfer analysis, (2) optical ray tracing, (3) numerical fluid flow modeling, (4) laboratory testing, (5) wind tunnel testing, and (6) field testing. Guidelines for reducing radiational heating errors are given that are based on knowledge of the temperature sensor to be used, with the shield being chosen to match the sensor design. Small, reflective sensors that are exposed directly to the air stream (not inside a filter as is the case for many temperature and relative humidity probes) should be housed in a shield that provides ample mechanical and rain protection while impeding the air flow as little as possible; protection from radiation sources is of secondary importance. If a sensor does not meet the above criteria (i.e., is large or absorbing), then a standard Gill shield performs reasonably well. A new class of shields, called part-time aspirated multiplate radiation shields, are introduced. This type of shield consists of a multiplate design usually operated in a passive manner but equipped with a fan-forced aspiration capability to be used when necessary (e.g., low wind speed). The fans used here are 12 V DC that can be operated with a small dedicated solar panel. This feature allows the fan to operate when global solar radiation is high, which is when the largest radiational heating errors usually occur. A prototype shield was constructed and field tested and an example is given in which radiational heating errors were reduced from 2 °C to 1.2 °C. The fan was run continuously to investigate night-time low wind speed errors and the prototype shield reduced errors from 1.6 °C to 0.3 °C. Part-time aspirated shields are an inexpensive alternative to fully aspirated shields and represent a good compromise between cost, power consumption, reliability (because they should be no worse than a standard multiplate shield if the fan fails), and accuracy. In addition, it is possible to modify existing passive shields to incorporate part-time aspiration, thus making them even more cost-effective. Finally, a new shield is described that incorporates a large diameter top plate that is designed to shade the lower portion of the shield. This shield increases flow through it by 60%, compared to the Gill design and it is likely to reduce radiational heating errors, although it has not been tested.

  20. Contribution of PAHs from coal-tar pavement sealcoat and other sources to 40 U.S. lakes

    USGS Publications Warehouse

    Van Metre, Peter C.; Mahler, Barbara J.

    2010-01-01

    Contamination of urban lakes and streams by polycyclic aromatic hydrocarbons (PAHs) has increased in the United States during the past 40 years. We evaluated sources of PAHs in post-1990 sediments in cores from 40 lakes in urban areas across the United States using a contaminant mass-balance receptor model and including as a potential source coal-tar-based (CT) sealcoat, a recently recognized source of urban PAH. Other PAH sources considered included several coal- and vehicle-related sources, wood combustion, and fuel-oil combustion. The four best modeling scenarios all indicate CT sealcoat is the largest PAH source when averaged across all 40 lakes, contributing about one-half of PAH in sediment, followed by vehicle-related sources and coal combustion. PAH concentrations in the lakes were highly correlated with PAH loading from CT sealcoat (Spearman's rho=0.98), and the mean proportional PAH profile for the 40 lakes was highly correlated with the PAH profile for dust from CT-sealed pavement (r=0.95). PAH concentrations and mass and fractional loading from CT sealcoat were significantly greater in the central and eastern United States than in the western United States, reflecting regional differences in use of different sealcoat product types. The model was used to calculate temporal trends in PAH source contributions during the last 40 to 100 years to eight of the 40 lakes. In seven of the lakes, CT sealcoat has been the largest source of PAHs since the 1960s, and in six of those lakes PAH trends are upward. Traffic is the largest source to the eighth lake, located in southern California where use of CT sealcoat is rare.
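
    The mass-balance receptor calculation at the heart of the study amounts to expressing a sediment PAH profile as a non-negative combination of candidate source profiles; the Python sketch below does this with non-negative least squares on made-up numbers (the profiles are not the ones used for the 40 lakes).

        import numpy as np
        from scipy.optimize import nnls

        # Illustrative fractional PAH profiles (rows = compounds, columns = sources)
        source_profiles = np.array([
            [0.30, 0.15, 0.10],
            [0.25, 0.20, 0.15],
            [0.20, 0.30, 0.25],
            [0.15, 0.20, 0.30],
            [0.10, 0.15, 0.20],
        ])
        sediment_profile = np.array([0.26, 0.23, 0.22, 0.17, 0.12])

        # Non-negative least squares gives the source contribution fractions
        contributions, residual = nnls(source_profiles, sediment_profile)
        contributions /= contributions.sum()
        print(dict(zip(["CT sealcoat", "vehicle", "coal"], contributions.round(2))))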

  1. Comparing Parameter Estimation Techniques for an Electrical Power Transformer Oil Temperature Prediction Model

    NASA Technical Reports Server (NTRS)

    Morris, A. Terry

    1999-01-01

    This paper examines various sources of error in MIT's improved top oil temperature rise over ambient temperature model and estimation process. The sources of error are the current parameter estimation technique, quantization noise, and post-processing of the transformer data. Results from this paper will show that an output error parameter estimation technique should be selected to replace the current least squares estimation technique. The output error technique obtained accurate predictions of transformer behavior, revealed the best error covariance, obtained consistent parameter estimates, and provided for valid and sensible parameters. This paper will also show that the output error technique should be used to minimize errors attributed to post-processing (decimation) of the transformer data. Models used in this paper are validated using data from a large transformer in service.
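
    A hedged sketch of the output-error idea discussed above, fitting a generic first-order top-oil temperature-rise model so that residuals are formed against the simulated output trajectory rather than against an equation-error regression. The model form, parameter names, and data are illustrative assumptions, not MIT's actual formulation.

        import numpy as np
        from scipy.optimize import least_squares

        def simulate_top_oil_rise(params, load, dt):
            """Simulate top-oil temperature rise for a generic first-order thermal model:
            tau * d(theta)/dt = -theta + k * load**2  (illustrative only)."""
            tau, k = params
            theta = np.zeros_like(load)
            for i in range(1, len(load)):
                theta[i] = theta[i - 1] + dt / tau * (-theta[i - 1] + k * load[i] ** 2)
            return theta

        def output_error_residuals(params, load, measured, dt):
            # Output-error approach: compare the simulated output to measurements,
            # so measurement noise is not fed back into the model states.
            return simulate_top_oil_rise(params, load, dt) - measured

        # Synthetic example data (placeholders).
        dt, n = 60.0, 500
        load = 0.8 + 0.2 * np.sin(np.linspace(0, 6, n))
        true_theta = simulate_top_oil_rise((3600.0, 40.0), load, dt)
        measured = true_theta + np.random.normal(0, 0.5, n)

        fit = least_squares(output_error_residuals, x0=(1800.0, 20.0),
                            args=(load, measured, dt), bounds=([1.0, 1.0], [1e5, 1e3]))
        print("Estimated (tau, k):", fit.x)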

  2. How important is mode-coupling in global surface wave tomography?

    NASA Astrophysics Data System (ADS)

    Mikesell, Dylan; Nolet, Guust; Voronin, Sergey; Ritsema, Jeroen; Van Heijst, Hendrik-Jan

    2016-04-01

    To investigate the influence of mode coupling for fundamental-mode Rayleigh waves with periods between 64 and 174 s, we analysed 3,505,902 phase measurements obtained along minor arc trajectories as well as 2,163,474 phases along major arcs. This is a selection of five frequency bands from the data set of Van Heijst and Woodhouse, extended with more recent earthquakes, that served to define upper mantle S velocity in model S40RTS. Since accurate estimation of the misfits (as represented by χ2) is essential, we used the method of Voronin et al. (GJI 199:276, 2014) to obtain objective estimates of the standard errors in this data set. We adapted Voronin's method slightly to prevent systematic errors along clusters of raypaths from being absorbed into source corrections. This was done by simultaneously analysing multiple clusters of raypaths originating from the same group of earthquakes but traveling in different directions. For the minor arc data, phase errors at the one-sigma level range from 0.26 rad at a period of 174 s to 0.89 rad at 64 s. For the major arcs, these errors are roughly twice as high (0.40 and 2.09 rad, respectively). In the subsequent inversion we removed any outliers that could not be fitted at the 3-sigma level in an almost undamped inversion. Using these error estimates and the theory of finite-frequency tomography to include the effects of scattering, we solved for models with χ2 = N (the number of data) both including and excluding the effect of mode coupling between Love and Rayleigh waves. We shall present some dramatic differences between the two models, notably near ocean-continent boundaries (e.g. California) where mode conversions are likely to be largest. But a sharpening of other features, such as cratons and high-velocity blobs in the oceanic domain, is also observed when mode coupling is taken into account. An investigation of the influence of coupling on azimuthal anisotropy is still under way at the time of writing of this abstract, but the results will be included in the presentation.
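
    A minimal sketch of the two bookkeeping steps mentioned above: discarding residuals larger than 3 sigma and checking that chi-squared per datum is close to one. The arrays are placeholders, and the cluster-based error estimation of Voronin et al. is not reproduced here.

        import numpy as np

        def reject_outliers_and_check_chi2(observed, predicted, sigma, n_sigma=3.0):
            """Drop measurements whose residuals exceed n_sigma standard errors,
            then report chi-squared per retained datum (target is ~1)."""
            residuals = observed - predicted
            keep = np.abs(residuals) <= n_sigma * sigma
            chi2 = np.sum((residuals[keep] / sigma[keep]) ** 2)
            return keep, chi2 / keep.sum()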

  3. The accuracy of estimates of the overturning circulation from basin-wide mooring arrays

    NASA Astrophysics Data System (ADS)

    Sinha, B.; Smeed, D. A.; McCarthy, G.; Moat, B. I.; Josey, S. A.; Hirschi, J. J.-M.; Frajka-Williams, E.; Blaker, A. T.; Rayner, D.; Madec, G.

    2018-01-01

    Previous modeling and observational studies have established that it is possible to accurately monitor the Atlantic Meridional Overturning Circulation (AMOC) at 26.5°N using a coast-to-coast array of instrumented moorings supplemented by direct transport measurements in key boundary regions (the RAPID/MOCHA/WBTS Array). The main sources of observational and structural errors have been identified in a variety of individual studies. Here a unified framework for identifying and quantifying structural errors associated with the RAPID array-based AMOC estimates is established using a high-resolution (eddy resolving at low to mid latitudes, eddy permitting elsewhere) ocean general circulation model, which simulates the ocean state between 1978 and 2010. We define a virtual RAPID array in the model in close analogy to the real RAPID array and compare the AMOC estimate from the virtual array with the true model AMOC. The model analysis suggests that the RAPID method underestimates the mean AMOC by ∼1.5 Sv (1 Sv = 10⁶ m³ s⁻¹) at ∼900 m depth; however, it captures the variability with high accuracy. We examine three major contributions to the streamfunction bias: (i) due to the assumption of a single fixed reference level for calculation of geostrophic transports, (ii) due to regions not sampled by the array and (iii) due to ageostrophic transport. A key element in (i) and (iii) is use of the model sea surface height to establish the true (or absolute) geostrophic transport. In the upper 2000 m, we find that the reference level bias is strongest and most variable in time, whereas the bias due to unsampled regions is largest below 3000 m. The ageostrophic transport is significant in the upper 1000 m but shows very little variability. The results establish, for the first time, the uncertainty of the AMOC estimate due to the combined structural errors in the measurement design and suggest ways in which the error could be reduced. Our work has applications to basin-wide circulation measurement arrays at other latitudes and in other basins, as well as to quantifying systematic errors in ocean model estimates of the AMOC at 26.5°N.

  4. OBSERVATIONS OF BINARY STARS WITH THE DIFFERENTIAL SPECKLE SURVEY INSTRUMENT. III. MEASURES BELOW THE DIFFRACTION LIMIT OF THE WIYN TELESCOPE

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Horch, Elliott P.; Van Altena, William F.; Howell, Steve B.

    2011-06-15

    In this paper, we study the ability of CCD- and electron-multiplying-CCD-based speckle imaging to obtain reliable astrometry and photometry of binary stars below the diffraction limit of the WIYN 3.5 m Telescope. We present a total of 120 measures of binary stars, 75 of which are below the diffraction limit. The measures are divided into two groups that have different measurement accuracy and precision. The first group is composed of standard speckle observations, that is, a sequence of speckle images taken in a single filter, while the second group consists of paired observations where the two observations are taken on the same observing run and in different filters. The more recent paired observations were taken simultaneously with the Differential Speckle Survey Instrument, which is a two-channel speckle imaging system. In comparing our results to the ephemeris positions of binaries with known orbits, we find that paired observations provide the opportunity to identify cases of systematic error in separation below the diffraction limit, and after removing these from consideration, we obtain a linear measurement uncertainty of 3-4 mas. However, if observations are unpaired or if two observations taken in the same filter are paired, it becomes harder to identify cases of systematic error, presumably because the largest source of this error is residual atmospheric dispersion, which is color dependent. When observations are unpaired, we find that it is unwise to report separations below approximately 20 mas, as these are most susceptible to this effect. Using the final results obtained, we are able to update two older orbits in the literature and present preliminary orbits for three systems that were discovered by Hipparcos.

  5. Assessment of errors and biases in retrievals of XCO2, XCH4, XCO, and XN2O from a 0.5 cm⁻¹ resolution solar-viewing spectrometer

    DOE PAGES

    Hedelius, Jacob K.; Viatte, Camille; Wunch, Debra; ...

    2016-08-03

    Bruker™ EM27/SUN instruments are commercial mobile solar-viewing near-IR spectrometers. They show promise for expanding the global density of atmospheric column measurements of greenhouse gases and are being marketed for such applications. They have been shown to measure the same variations of atmospheric gases within a day as the high-resolution spectrometers of the Total Carbon Column Observing Network (TCCON). However, there is little known about the long-term precision and uncertainty budgets of EM27/SUN measurements. In this study, which includes a comparison of 186 measurement days spanning 11 months, we note that atmospheric variations of Xgas within a single day are well captured by these low-resolution instruments, but over several months, the measurements drift noticeably. We present comparisons between EM27/SUN instruments and the TCCON using GGG as the retrieval algorithm. In addition, we perform several tests to evaluate the robustness of the performance and determine the largest sources of errors from these spectrometers. We include comparisons of XCO2, XCH4, XCO, and XN2O. Specifically we note EM27/SUN biases for January 2015 of 0.03, 0.75, –0.12, and 2.43 % for XCO2, XCH4, XCO, and XN2O respectively, with 1σ running precisions of 0.08 and 0.06 % for XCO2 and XCH4 from measurements in Pasadena. We also identify significant error caused by nonlinear sensitivity when using an extended spectral range detector used to measure CO and N2O.

  6. Quantifying the imprint of mesoscale and synoptic-scale atmospheric transport on total column carbon dioxide measurements

    NASA Astrophysics Data System (ADS)

    Torres, A. D.; Keppel-Aleks, G.; Doney, S. C.; Feng, S.; Lauvaux, T.; Fendrock, M. A.; Rheuben, J.

    2017-12-01

    Remote sensing instruments provide an unprecedented density of observations of the atmospheric CO2 column-average mole fraction (denoted as XCO2), which can be used to constrain regional-scale carbon fluxes. Inferring fluxes from XCO2 observations is challenging, as measurements and inversion methods are sensitive not only to the imprint of local and large-scale fluxes, but also to mesoscale and synoptic-scale atmospheric transport. Quantifying the fine-scale variability in XCO2 from mesoscale and synoptic-scale atmospheric transport will likely improve overall error estimates from flux inversions by improving estimates of the representation errors that occur when XCO2 observations are compared to modeled XCO2 in relatively coarse transport models. Here, we utilize various statistical methods to quantify the imprint of atmospheric transport on XCO2 observations. We compare spatial variations along Orbiting Carbon Observatory (OCO-2) satellite tracks to temporal variations observed by the Total Carbon Column Observing Network (TCCON). We observe a coherent seasonal cycle of both within-day temporal and fine-scale spatial variability (of order 10 km) of XCO2 from these two datasets, suggestive of the imprint of mesoscale systems. To account for other potential sources of error in the XCO2 retrieval, we compare observed temporal and spatial variations of XCO2 to high-resolution output from the Weather Research and Forecasting (WRF) model run at 9 km resolution. In both simulations and observations, Northern Hemisphere mid-latitude XCO2 showed peak variability during the growing season, when atmospheric gradients are largest. These results are qualitatively consistent with our expectations of seasonal variations in the imprint of synoptic and mesoscale atmospheric transport on XCO2 observations, suggesting that these statistical methods could be sensitive to the imprint of atmospheric transport on XCO2 observations.

  7. Uncertainties in the Antarctic Ice Sheet Contribution to Sea Level Rise: Exploration of Model Response to Errors in Climate Forcing, Boundary Conditions, and Internal Parameters

    NASA Astrophysics Data System (ADS)

    Schlegel, N.; Seroussi, H. L.; Boening, C.; Larour, E. Y.; Limonadi, D.; Schodlok, M.; Watkins, M. M.

    2017-12-01

    The Jet Propulsion Laboratory-University of California at Irvine Ice Sheet System Model (ISSM) is a thermo-mechanical 2D/3D parallelized finite element software used to physically model the continental-scale flow of ice at high resolutions. Embedded into ISSM are uncertainty quantification (UQ) tools, based on the Design Analysis Kit for Optimization and Terascale Applications (DAKOTA) software. ISSM-DAKOTA offers various UQ methods for the investigation of how errors in model input impact uncertainty in simulation results. We utilize these tools to regionally sample model input and key parameters, based on specified bounds of uncertainty, and run a suite of continental-scale, 100-year ISSM forward simulations of the Antarctic Ice Sheet. The resulting diagnostics (e.g., spread in local mass flux and regional mass balance) inform our conclusions about which parameters and/or forcings have the greatest impact on century-scale model simulations of ice sheet evolution. The results allow us to prioritize the key datasets and measurements that are critical for the minimization of ice sheet model uncertainty. Overall, we find that Antarctica's total sea level contribution is strongly affected by grounding line retreat, which is driven by the magnitude of ice shelf basal melt rates and by errors in bedrock topography. In addition, results suggest that after 100 years of simulation, Thwaites Glacier is the most significant source of model uncertainty, and its drainage basin has the largest potential for future sea level contribution. This work is performed at and supported by the California Institute of Technology's Jet Propulsion Laboratory. Supercomputing time is also supported through a contract with the National Aeronautics and Space Administration's Cryosphere program.

  8. Alternative Regression Equations for Estimation of Annual Peak-Streamflow Frequency for Undeveloped Watersheds in Texas using PRESS Minimization

    USGS Publications Warehouse

    Asquith, William H.; Thompson, David B.

    2008-01-01

    The U.S. Geological Survey, in cooperation with the Texas Department of Transportation and in partnership with Texas Tech University, investigated a refinement of the regional regression method and developed alternative equations for estimation of peak-streamflow frequency for undeveloped watersheds in Texas. A common model for estimation of peak-streamflow frequency is based on the regional regression method. The current (2008) regional regression equations for 11 regions of Texas are based on log10 transformations of all regression variables (drainage area, main-channel slope, and watershed shape). Exclusive use of the log10 transformation does not fully linearize the relations between the variables. As a result, some systematic bias remains in the current equations. The bias results in overestimation of peak streamflow for both the smallest and largest watersheds, and it increases with increasing recurrence interval. The primary source of the bias is the discernible curvilinear relation in log10 space between peak streamflow and drainage area. The bias is demonstrated by selected residual plots with superimposed LOWESS trend lines. To address the bias, a statistical framework based on minimization of the PRESS statistic through power transformation of drainage area is described and implemented, and the resulting regression equations are reported. Compared to the log10-exclusive equations, the equations derived from PRESS minimization have smaller PRESS statistics and residual standard errors. Selected residual plots for the PRESS-minimized equations are presented to demonstrate that systematic bias in regional regression equations for peak-streamflow frequency estimation in Texas can be reduced. Because the overall error is similar to the error associated with previous equations and because the bias is reduced, the PRESS-minimized equations reported here provide alternative equations for peak-streamflow frequency estimation.
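
    The PRESS (prediction error sum of squares) statistic minimized in the framework above can be computed without refitting the regression for every left-out observation, using the standard hat-matrix shortcut. A brief sketch for a generic linear regression follows; it is not the Texas equations themselves.

        import numpy as np

        def press_statistic(X, y):
            """PRESS for ordinary least squares: the sum of squared leave-one-out
            prediction errors, obtained from ordinary residuals and hat-matrix
            leverages as e_i / (1 - h_ii)."""
            X = np.column_stack([np.ones(len(y)), X])      # add intercept column
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)
            residuals = y - X @ beta
            leverages = np.diag(X @ np.linalg.inv(X.T @ X) @ X.T)
            return np.sum((residuals / (1.0 - leverages)) ** 2)

    Candidate power transformations of drainage area can then be compared by the PRESS value each one yields, with the minimum-PRESS transformation selected.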

  9. A new systematic calibration method of ring laser gyroscope inertial navigation system

    NASA Astrophysics Data System (ADS)

    Wei, Guo; Gao, Chunfeng; Wang, Qi; Wang, Qun; Xiong, Zhenyu; Long, Xingwu

    2016-10-01

    Inertial navigation systems have become the core component of both military and civil navigation systems. Before an INS is put into application, it must be calibrated in the laboratory in order to compensate for repeatability errors caused by manufacturing. Discrete calibration methods cannot fulfill the requirements of high-accuracy calibration of mechanically dithered ring laser gyroscope navigation systems with shock absorbers. This paper analyzes the theory of error excitation and separation in detail and presents a new systematic calibration method for ring laser gyroscope inertial navigation systems. Error models and equations of the calibrated inertial measurement unit are given. Proper rotation arrangement orders are then described in order to establish the linear relationships between the change of velocity errors and the calibrated parameter errors. Experiments were set up to compare the systematic errors calculated from the filtering calibration result with those obtained from the discrete calibration result. The largest position error and velocity error of the filtering calibration result are only 0.18 miles and 0.26 m/s, compared with 2 miles and 1.46 m/s for the discrete calibration result. These results validate the new systematic calibration method and prove its importance for the optimal design and accuracy improvement of calibration of mechanically dithered ring laser gyroscope inertial navigation systems.

  10. Accuracy Study of a Robotic System for MRI-guided Prostate Needle Placement

    PubMed Central

    Seifabadi, Reza; Cho, Nathan BJ.; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M.; Fichtinger, Gabor; Iordachita, Iulian

    2013-01-01

    Background Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified, and minimized to the possible extent. Methods and Materials The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called before-insertion error) and the error associated with needle-tissue interaction (called due-to-insertion error). The before-insertion error was measured directly in a soft phantom, and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. Results The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the super soft phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was approximated to be 2.13 mm, thus making the larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. Conclusions The experimental methodology presented in this paper may help researchers to identify, quantify, and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analyzed here, the overall error of the studied system remained within the acceptable range. PMID:22678990
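
    The due-to-insertion error quoted above follows from assuming the two error components add in quadrature; a one-line check using the values reported in the abstract:

        import math

        overall_error_mm = 2.5           # overall system error in the phantom
        before_insertion_error_mm = 1.3  # robotic-system error alone
        due_to_insertion_mm = math.sqrt(overall_error_mm**2 - before_insertion_error_mm**2)
        print(round(due_to_insertion_mm, 2))  # ~2.14 mm, consistent with the reported 2.13 mm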

  11. Accuracy study of a robotic system for MRI-guided prostate needle placement.

    PubMed

    Seifabadi, Reza; Cho, Nathan B J; Song, Sang-Eun; Tokuda, Junichi; Hata, Nobuhiko; Tempany, Clare M; Fichtinger, Gabor; Iordachita, Iulian

    2013-09-01

    Accurate needle placement is the first concern in percutaneous MRI-guided prostate interventions. In this phantom study, different sources contributing to the overall needle placement error of an MRI-guided robot for prostate biopsy have been identified, quantified and minimized to the possible extent. The overall needle placement error of the system was evaluated in a prostate phantom. This error was broken into two parts: the error associated with the robotic system (called 'before-insertion error') and the error associated with needle-tissue interaction (called 'due-to-insertion error'). Before-insertion error was measured directly in a soft phantom and the different sources contributing to this part were identified and quantified. A calibration methodology was developed to minimize the 4-DOF manipulator's error. The due-to-insertion error was indirectly approximated by comparing the overall error and the before-insertion error. The effect of sterilization on the manipulator's accuracy and repeatability was also studied. The average overall system error in the phantom study was 2.5 mm (STD = 1.1 mm). The average robotic system error in the Super Soft plastic phantom was 1.3 mm (STD = 0.7 mm). Assuming orthogonal error components, the needle-tissue interaction error was found to be approximately 2.13 mm, thus making a larger contribution to the overall error. The average susceptibility artifact shift was 0.2 mm. The manipulator's targeting accuracy was 0.71 mm (STD = 0.21 mm) after robot calibration. The robot's repeatability was 0.13 mm. Sterilization had no noticeable influence on the robot's accuracy and repeatability. The experimental methodology presented in this paper may help researchers to identify, quantify and minimize different sources contributing to the overall needle placement error of an MRI-guided robotic system for prostate needle placement. In the robotic system analysed here, the overall error of the studied system remained within the acceptable range. Copyright © 2012 John Wiley & Sons, Ltd.

  12. Nonparametric Signal Extraction and Measurement Error in the Analysis of Electroencephalographic Activity During Sleep

    PubMed Central

    Crainiceanu, Ciprian M.; Caffo, Brian S.; Di, Chong-Zhi; Punjabi, Naresh M.

    2009-01-01

    We introduce methods for signal and associated variability estimation based on hierarchical nonparametric smoothing with application to the Sleep Heart Health Study (SHHS). SHHS is the largest electroencephalographic (EEG) collection of sleep-related data, which contains, at each visit, two quasi-continuous EEG signals for each subject. The signal features extracted from EEG data are then used in second level analyses to investigate the relation between health, behavioral, or biometric outcomes and sleep. Using subject specific signals estimated with known variability in a second level regression becomes a nonstandard measurement error problem. We propose and implement methods that take into account cross-sectional and longitudinal measurement error. The research presented here forms the basis for EEG signal processing for the SHHS. PMID:20057925

  13. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study

    PubMed Central

    Hosseinyalamdary, Siavash

    2018-01-01

    Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as the Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations have remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy. PMID:29695119
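
    A minimal sketch of where an error-modelling step could sit between the standard Kalman predict and update steps. The matrices are generic and the paper's learned IMU error model is replaced by a placeholder correction function; this is not the authors' implementation.

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R, correct_imu_error=None):
            """One predict/update cycle. `correct_imu_error`, if given, adjusts the
            predicted state before the measurement update - the slot where a
            learned IMU error model could act (placeholder here)."""
            # Predict
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # Modelling step (placeholder for the learned IMU error correction)
            if correct_imu_error is not None:
                x_pred = correct_imu_error(x_pred)
            # Update with the GNSS measurement z
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x)) - K @ H) @ P_pred
            return x_new, P_new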

  14. Deep Kalman Filter: Simultaneous Multi-Sensor Integration and Modelling; A GNSS/IMU Case Study.

    PubMed

    Hosseinyalamdary, Siavash

    2018-04-24

    Bayes filters, such as the Kalman and particle filters, have been used in sensor fusion to integrate two sources of information and obtain the best estimate of unknowns. The efficient integration of multiple sensors requires deep knowledge of their error sources. Some sensors, such as the Inertial Measurement Unit (IMU), have complicated error sources. Therefore, IMU error modelling and the efficient integration of IMU and Global Navigation Satellite System (GNSS) observations have remained a challenge. In this paper, we developed a deep Kalman filter to model and remove IMU errors and, consequently, improve the accuracy of IMU positioning. To achieve this, we added a modelling step to the prediction and update steps of the Kalman filter, so that the IMU error model is learned during integration. The results showed our deep Kalman filter outperformed the conventional Kalman filter and reached a higher level of accuracy.

  15. Total absorption cross sections of several gases of aeronomic interest at 584 A.

    NASA Technical Reports Server (NTRS)

    Starr, W. L.; Loewenstein, M.

    1972-01-01

    Total photoabsorption cross sections have been measured at 584.3 A for N2, O2, Ar, CO2, CO, NO, N2O, NH3, CH4, H2, and H2S. A monochromator was used to isolate the He I 584 line produced in a helium resonance lamp, and thin aluminum filters were used as absorption cell windows, thereby eliminating possible errors associated with the use of undispersed radiation or windowless cells. Sources of error are examined, and limits of uncertainty are given. Previous relevant cross-sectional measurements and possible error sources are reviewed. Wall adsorption as a source of error in cross-sectional measurements has not previously been considered and is discussed briefly.

  16. Source localization (LORETA) of the error-related-negativity (ERN/Ne) and positivity (Pe).

    PubMed

    Herrmann, Martin J; Römmler, Josefine; Ehlis, Ann-Christine; Heidrich, Anke; Fallgatter, Andreas J

    2004-07-01

    We investigated error processing in 39 subjects performing the Eriksen flanker task. In all 39 subjects a pronounced negative deflection (ERN/Ne) and a later positive component (Pe) were observed after incorrect as compared to correct responses. The neural sources of both components were analyzed using LORETA source localization. For the negative component (ERN/Ne) we found significantly higher brain electrical activity in medial prefrontal areas for incorrect responses, whereas the positive component (Pe) was localized nearby but more rostrally within the anterior cingulate cortex (ACC). Thus, different neural generators were found for the ERN/Ne and the Pe, which further supports the notion that both error-related components represent different aspects of error processing.

  17. Dispensing error rate after implementation of an automated pharmacy carousel system.

    PubMed

    Oswald, Scott; Caldwell, Richard

    2007-07-01

    A study was conducted to determine filling and dispensing error rates before and after the implementation of an automated pharmacy carousel system (APCS). The study was conducted in a 613-bed acute and tertiary care university hospital. Before the implementation of the APCS, filling and dispensing error rates were recorded during October through November 2004 and January 2005. Postimplementation data were collected during May through June 2006. Errors were recorded in three areas of pharmacy operations: first-dose or missing medication fill, automated dispensing cabinet fill, and interdepartmental request fill. A filling error was defined as an error caught by a pharmacist during the verification step. A dispensing error was defined as an error caught by a pharmacist observer after verification by the pharmacist. Before implementation of the APCS, 422 first-dose or missing medication orders were observed between October 2004 and January 2005. Independent data collected in December 2005, approximately six weeks after the introduction of the APCS, found that filling and dispensing error rates had increased. The filling rate for automated dispensing cabinets was associated with the largest decrease in errors. Filling and dispensing error rates had decreased by December 2005. In terms of interdepartmental request fill, no dispensing errors were noted in 123 clinic orders dispensed before the implementation of the APCS. One dispensing error out of 85 clinic orders was identified after implementation of the APCS. The implementation of an APCS at a university hospital decreased medication filling errors related to automated cabinets only and did not affect other filling and dispensing errors.

  18. Public funding for contraceptive, sterilization and abortion services, 1994.

    PubMed

    Sollom, T; Gold, R B; Saul, R

    1996-01-01

    In 1994, federal and state funding for contraceptive services and supplies reached $715 million. Funding totaled $148 million for contraceptive sterilization and $90 million for abortion services. According to a survey of state health, Medicaid and social service agencies, reported spending on contraceptive services and supplies increased by 11% between 1992 and 1994. In the same period, spending under Title X rose by 37%, making it the third largest public funding source for contraceptive services and supplies. The largest source of public funds for family planning services continues to be the joint federal-state Medicaid program. Medicaid family planning expenditures increased by only 4% between 1992 and 1994, a sizable decrease in growth from previous years. State funds continue to be the second largest source, providing almost one-quarter of reported public expenditures in 1994. The maternal and child health and social services block grants remain relatively minor sources of support nationally, although in a handful of states they provide the majority of public-sector funds. State governments were virtually the sole source of public support for the 203,200 abortions provided in 1994 to low-income women. Despite the loosening of federal abortion funding criteria in FY 1994 permitting payment in cases of rape and incest, federally funded abortions numbered only 282.

  19. Dynamics of the Wulong landslide revealed by broadband seismic records

    NASA Astrophysics Data System (ADS)

    Li, Zhengyuan; Huang, Xinghui; Xu, Qiang; Yu, Dan; Fan, Junyi; Qiao, Xuejun

    2017-02-01

    The catastrophic Wulong landslide occurred at 14:51 (Beijing time, UTC+8) on 5 June 2009, in Wulong Prefecture, Southwest China. This rockslide occurred in a complex topographic environment. Seismic signals generated by this event were recorded by the seismic network deployed in the surrounding area, and long-period signals were extracted from 8 broadband seismic stations within 250 km to obtain source time functions by inversion. The location of this event was simultaneously acquired using a stepwise refined grid search approach, with an error of 2.2 km. The estimated source time functions reveal that, according to the movement parameters, this landslide can be divided into three stages with different movement directions, velocities, and increasing inertial forces. The sliding mass moved northward, northeastward and northward in the three stages, with average velocities of 6.5, 20.3, and 13.8 m/s, respectively. The maximum movement velocity of the mass reached 35 m/s before the end of the second stage. The basal friction coefficients were relatively small in the first stage and gradually increasing; large in the second stage, accompanied by the largest variability; and oscillating and gradually decreasing to a stable value in the third stage. Analysis shows that the movement characteristics of these three stages are consistent with the topography of the sliding zone, corresponding to the northward initiation, eastward sliding after being stopped by the west wall, and northward debris flow after collision with the east slope of the Tiejianggou valley. The maximum movement velocity of the sliding mass results from the largest height difference of the west slope of the Tiejianggou valley. The basal friction coefficients of the three stages reflect the thin weak layer in the source zone, the dramatically varying topography of the west slope of the Tiejianggou valley, and the characteristics of the debris flow along the Tiejianggou valley. Based on the above results, it is recognized that the inverted source time functions are consistent with the topography of the sliding zone. Special geological and topographic conditions can have a focusing effect on landslides and are key factors in inducing the major disasters which may follow from them. This landslide was of an unusual nature, and it will be worthwhile to pursue research into its dynamic characteristics more deeply.

  20. Sources of Error in Substance Use Prevalence Surveys

    PubMed Central

    Johnson, Timothy P.

    2014-01-01

    Population-based estimates of substance use patterns have been regularly reported now for several decades. Concerns with the quality of the survey methodologies employed to produce those estimates date back almost as far. Those concerns have led to a considerable body of research specifically focused on understanding the nature and consequences of survey-based errors in substance use epidemiology. This paper reviews and summarizes that empirical research by organizing it within a total survey error model framework that considers multiple types of representation and measurement errors. Gaps in our knowledge of error sources in substance use surveys and areas needing future research are also identified. PMID:27437511

  1. First order error corrections in common introductory physics experiments

    NASA Astrophysics Data System (ADS)

    Beckey, Jacob; Baker, Andrew; Aravind, Vasudeva; Clarion Team

    As a part of introductory physics courses, students perform different standard lab experiments. Almost all of these experiments are prone to errors owing to factors like friction, misalignment of equipment, air drag, etc. Usually these types of errors are ignored by students, and not much thought is paid to their sources. However, paying attention to the factors that give rise to errors helps students make better physics models and understand the physical phenomena behind experiments in more detail. In this work, we explore common causes of errors in introductory physics experiments and suggest changes that will mitigate the errors, or suggest models that take the sources of these errors into consideration. This work helps students build better, more refined physical models and understand physics concepts in greater detail. We thank the Clarion University undergraduate student grant for financial support of this project.

  2. Life cycle assessment of cellulosic and advanced biofuel crops

    USDA-ARS?s Scientific Manuscript database

    Estimating the carbon intensity of biofuel production is important in order to meet greenhouse gas (GHG) targets set by government policy. Nitrous oxide emissions are the largest source and soil carbon the largest sink of GHGs for determining the carbon intensity of biofuels during their production ...

  3. Mapping poverty using mobile phone and satellite data

    PubMed Central

    Pezzulo, Carla; Bjelland, Johannes; Iqbal, Asif M.; Hadiuzzaman, Khandakar N.; Lu, Xin; Wetter, Erik; Tatem, Andrew J.

    2017-01-01

    Poverty is one of the most important determinants of adverse health outcomes globally, a major cause of societal instability and one of the largest causes of lost human potential. Traditional approaches to measuring and targeting poverty rely heavily on census data, which in most low- and middle-income countries (LMICs) are unavailable or out-of-date. Alternate measures are needed to complement and update estimates between censuses. This study demonstrates how public and private data sources that are commonly available for LMICs can be used to provide novel insight into the spatial distribution of poverty. We evaluate the relative value of modelling three traditional poverty measures using aggregate data from mobile operators and widely available geospatial data. Taken together, models combining these data sources provide the best predictive power (highest r2 = 0.78) and lowest error, but generally models employing mobile data only yield comparable results, offering the potential to measure poverty more frequently and at finer granularity. Stratifying models into urban and rural areas highlights the advantage of using mobile data in urban areas and different data in different contexts. The findings indicate the possibility to estimate and continually monitor poverty rates at high spatial resolution in countries with limited capacity to support traditional methods of data collection. PMID:28148765

  4. Mapping poverty using mobile phone and satellite data.

    PubMed

    Steele, Jessica E; Sundsøy, Pål Roe; Pezzulo, Carla; Alegana, Victor A; Bird, Tomas J; Blumenstock, Joshua; Bjelland, Johannes; Engø-Monsen, Kenth; de Montjoye, Yves-Alexandre; Iqbal, Asif M; Hadiuzzaman, Khandakar N; Lu, Xin; Wetter, Erik; Tatem, Andrew J; Bengtsson, Linus

    2017-02-01

    Poverty is one of the most important determinants of adverse health outcomes globally, a major cause of societal instability and one of the largest causes of lost human potential. Traditional approaches to measuring and targeting poverty rely heavily on census data, which in most low- and middle-income countries (LMICs) are unavailable or out-of-date. Alternate measures are needed to complement and update estimates between censuses. This study demonstrates how public and private data sources that are commonly available for LMICs can be used to provide novel insight into the spatial distribution of poverty. We evaluate the relative value of modelling three traditional poverty measures using aggregate data from mobile operators and widely available geospatial data. Taken together, models combining these data sources provide the best predictive power (highest r2 = 0.78) and lowest error, but generally models employing mobile data only yield comparable results, offering the potential to measure poverty more frequently and at finer granularity. Stratifying models into urban and rural areas highlights the advantage of using mobile data in urban areas and different data in different contexts. The findings indicate the possibility to estimate and continually monitor poverty rates at high spatial resolution in countries with limited capacity to support traditional methods of data collection. © 2017 The Authors.

  5. A comparative study between evaluation methods for quality control procedures for determining the accuracy of PET/CT registration

    NASA Astrophysics Data System (ADS)

    Cha, Min Kyoung; Ko, Hyun Soo; Jung, Woo Young; Ryu, Jae Kwang; Choe, Bo-Young

    2015-08-01

    The accuracy of registration between positron emission tomography (PET) and computed tomography (CT) images is one of the important factors for reliable diagnosis in PET/CT examinations. Although quality control (QC) for checking the alignment of PET and CT images should be performed periodically, the procedures have not been fully established. The aim of this study is to determine optimal QC procedures that can be performed at the user level to ensure the accuracy of PET/CT registration. Two phantoms were used to carry out this study: the American College of Radiology (ACR)-approved PET phantom and the National Electrical Manufacturers Association (NEMA) International Electrotechnical Commission (IEC) body phantom, containing fillable spheres. All PET/CT images were acquired on a Biograph TruePoint 40 PET/CT scanner using routine protocols. To measure registration error, the spatial coordinates of the estimated centers of the target slice (spheres) were calculated independently for the PET and the CT images in two ways. We compared the images from the ACR-approved PET phantom to those from the NEMA IEC body phantom. We also measured the total time required from phantom preparation to image analysis. The first analysis method showed a total difference of 0.636 ± 0.11 mm for the largest hot sphere and 0.198 ± 0.09 mm for the largest cold sphere in the case of the ACR-approved PET phantom. In the NEMA IEC body phantom, the total difference was 3.720 ± 0.97 mm for the largest hot sphere and 4.800 ± 0.85 mm for the largest cold sphere. The second analysis method showed that the differences in the x location at the line profile of the lesion on PET and CT were (1.33, 1.33) mm for a bone lesion, (-1.26, -1.33) mm for an air lesion and (-1.67, -1.60) mm for a hot sphere lesion for the ACR-approved PET phantom. For the NEMA IEC body phantom, the differences in the x location at the line profile of the lesion on PET and CT were (-1.33, 4.00) mm for the air lesion and (1.33, -1.29) mm for a hot sphere lesion. These registration errors are reasonable compared to the errors reported in previous studies. Meanwhile, the total time required from phantom preparation to image analysis was 67.72 ± 4.50 min for the ACR-approved PET phantom and 96.78 ± 8.50 min for the NEMA IEC body phantom. When the registration errors and the lead times are considered, the method using the ACR-approved PET phantom was more practical and useful than the method using the NEMA IEC body phantom.
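
    A short sketch of the registration-error metric implied above: the Euclidean offset between a sphere centre estimated independently on the PET and the CT image. The coordinates are hypothetical placeholders, not measured values from the study.

        import numpy as np

        def registration_error_mm(center_pet_mm, center_ct_mm):
            """3D distance between matching sphere centres estimated independently
            on the PET and CT images."""
            return float(np.linalg.norm(np.asarray(center_pet_mm) - np.asarray(center_ct_mm)))

        # Example with hypothetical coordinates in scanner space (mm):
        print(registration_error_mm((10.2, -4.1, 55.0), (10.6, -4.3, 54.8)))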

  6. A methodology for translating positional error into measures of attribute error, and combining the two error sources

    Treesearch

    Yohay Carmel; Curtis Flather; Denis Dean

    2006-01-01

    This paper summarizes our efforts to investigate the nature, behavior, and implications of positional error and attribute error in spatiotemporal datasets. Estimating the combined influence of these errors on map analysis has been hindered by the fact that these two error types are traditionally expressed in different units (distance units, and categorical units,...

  7. Fluorescence errors in integrating sphere measurements of remote phosphor type LED light sources

    NASA Astrophysics Data System (ADS)

    Keppens, A.; Zong, Y.; Podobedov, V. B.; Nadal, M. E.; Hanselaer, P.; Ohno, Y.

    2011-05-01

    The relative spectral radiant flux error caused by phosphor fluorescence during integrating sphere measurements is investigated both theoretically and experimentally. Integrating sphere and goniophotometer measurements are compared and used for model validation, while a case study provides additional clarification. Criteria for reducing fluorescence errors to a degree of negligibility as well as a fluorescence error correction method based on simple matrix algebra are presented. Only remote phosphor type LED light sources are studied because of their large phosphor surfaces and high application potential in general lighting.

  8. Evaluation of overall setup accuracy and adequate setup margins in pelvic image-guided radiotherapy: Comparison of the male and female patients

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Laaksomaa, Marko, E-mail: marko.laaksomaa@pshp.fi; Kapanen, Mika; Department of Medical Physics, Tampere University Hospital

    We evaluated adequate setup margins for the radiotherapy (RT) of pelvic tumors based on overall position errors of bony landmarks. We also estimated the difference in setup accuracy between the male and female patients. Finally, we compared the patient rotation for 2 immobilization devices. The study cohort included consecutive 64 male and 64 female patients. Altogether, 1794 orthogonal setup images were analyzed. Observer-related deviation in image matching and the effect of patient rotation were explicitly determined. Overall systematic and random errors were calculated in 3 orthogonal directions. Anisotropic setup margins were evaluated based on residual errors after weekly image guidance. The van Herk formula was used to calculate the margins. Overall, 100 patients were immobilized with a house-made device. The patient rotation was compared against 28 patients immobilized with CIVCO's Kneefix and Feetfix. We found that the usually applied isotropic setup margin of 8 mm covered all the uncertainties related to patient setup for most RT treatments of the pelvis. However, margins of even 10.3 mm were needed for the female patients with very large pelvic target volumes centered either in the symphysis or in the sacrum containing both of these structures. This was because the effect of rotation (p ≤ 0.02) and the observer variation in image matching (p ≤ 0.04) were significantly larger for the female patients than for the male patients. Even with daily image guidance, the required margins remained larger for the women. Patient rotations were largest about the lateral axes. The difference between the required margins was only 1 mm for the 2 immobilization devices. The largest component of overall systematic position error came from patient rotation. This emphasizes the need for rotation correction. Overall, larger position errors and setup margins were observed for the female patients with pelvic cancer than for the male patients.
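
    The van Herk formula referenced above combines the systematic (Σ) and random (σ) components of the position error into a margin; a short sketch with illustrative input values (not the paper's data):

        def van_herk_margin_mm(sigma_systematic_mm, sigma_random_mm):
            """CTV-to-PTV margin recipe of van Herk et al.: M = 2.5 * Sigma + 0.7 * sigma,
            intended to keep the CTV within 95% of the prescribed dose for 90% of patients."""
            return 2.5 * sigma_systematic_mm + 0.7 * sigma_random_mm

        # Example: Sigma = 3.0 mm, sigma = 2.5 mm -> margin of about 9.3 mm
        print(van_herk_margin_mm(3.0, 2.5))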

  9. Influence of Precision of Emission Characteristic Parameters on Model Prediction Error of VOCs/Formaldehyde from Dry Building Material

    PubMed Central

    Wei, Wenjuan; Xiong, Jianyin; Zhang, Yinping

    2013-01-01

    Mass transfer models are useful in predicting the emissions of volatile organic compounds (VOCs) and formaldehyde from building materials in indoor environments. They are also useful for human exposure evaluation and in sustainable building design. The measurement errors in the emission characteristic parameters in these mass transfer models, i.e., the initial emittable concentration (C0), the diffusion coefficient (D), and the partition coefficient (K), can result in errors in predicting indoor VOC and formaldehyde concentrations. These errors have not yet been quantitatively well analyzed in the literature. This paper addresses this by using modelling to assess these errors for some typical building conditions. The error in C0, as measured in environmental chambers and applied to a reference living room in Beijing, has the largest influence on the model prediction error in indoor VOC and formaldehyde concentration, while the error in K has the least effect. A correlation between the errors in D, K, and C0 and the error in the indoor VOC and formaldehyde concentration prediction is then derived for engineering applications. In addition, the influence of temperature on the model prediction of emissions is investigated. It shows the impact of temperature fluctuations on the prediction errors in indoor VOC and formaldehyde concentrations to be less than 7% at 23±0.5°C and less than 30% at 23±2°C. PMID:24312497

  10. Realtime mitigation of GPS SA errors using Loran-C

    NASA Technical Reports Server (NTRS)

    Braasch, Soo Y.

    1994-01-01

    The hybrid use of Loran-C with the Global Positioning System (GPS) was shown to be capable of providing a sole means of enroute air radionavigation. By allowing pilots to fly direct to their destinations, use of this system is resulting in significant time savings and therefore fuel savings as well. However, a major error source limiting the accuracy of GPS is the intentional degradation of the GPS signal known as Selective Availability (SA). SA-induced position errors are highly correlated and far exceed all other error sources (horizontal position error: 100 meters, 95 percent). Realtime mitigation of SA errors from the position solution is highly desirable. How this can be achieved is discussed. The stability of Loran-C signals is exploited to reduce SA errors. The theory behind this technique is discussed and results using bench and flight data are given.

  11. Investigating error structure of shuttle radar topography mission elevation data product

    NASA Astrophysics Data System (ADS)

    Becek, Kazimierz

    2008-08-01

    An attempt was made to experimentally assess the instrumental component of error of the C-band Shuttle Radar Topography Mission (SRTM) elevation data product. This was achieved by comparing elevation data for 302 runways from airports all over the world with the SRTM data product. It was found that the rms of the instrumental error is about +/-1.55 m. Modeling of the remaining SRTM error sources, including terrain relief and pixel size, shows that downsampling from 30 m to 90 m (1 to 3 arc-sec pixels) worsened SRTM vertical accuracy threefold. It is suspected that the proximity of large metallic objects is a source of large SRTM errors. The achieved error estimates allow a pixel-based accuracy assessment of the SRTM elevation data product to be constructed. Vegetation-induced errors were not considered in this work.
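
    A brief sketch of the comparison described above, estimating the bias and rms of the differences between SRTM elevations and reference runway elevations. The arrays are placeholders, not the study's data.

        import numpy as np

        def srtm_error_stats(srtm_elev_m, runway_elev_m):
            """Bias (mean) and rms of SRTM-minus-reference elevation differences."""
            diff = np.asarray(srtm_elev_m) - np.asarray(runway_elev_m)
            return diff.mean(), np.sqrt(np.mean(diff ** 2))

        # Example with hypothetical elevations (m):
        bias, rms = srtm_error_stats([12.1, 48.7, 310.4], [11.0, 49.9, 309.0])
        print(bias, rms)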

  12. An interpretation of radiosonde errors in the atmospheric boundary layer

    Treesearch

    Bernadette H. Connell; David R. Miller

    1995-01-01

    The authors review sources of error in radiosonde measurements in the atmospheric boundary layer and analyze errors of two radiosonde models manufactured by Atmospheric Instrumentation Research, Inc. The authors focus on temperature and humidity lag errors and wind errors. Errors in measurement of azimuth and elevation angles and pressure over short time intervals and...

  13. China’s Economic Conditions

    DTIC Science & Technology

    2009-12-11

    Measuring the Size of China's Economy; Table A-4. China's Top Five African Export Markets: 2004-2008; Table A-5. ... partner, its third largest export market, and its largest source of imports. Many U.S. companies have extensive operations in China in order to sell

  14. Propagation of angular errors in two-axis rotation systems

    NASA Astrophysics Data System (ADS)

    Torrington, Geoffrey K.

    2003-10-01

    Two-Axis Rotation Systems, or "goniometers," are used in diverse applications including telescope pointing, automotive headlamp testing, and display testing. There are three basic configurations in which a goniometer can be built depending on the orientation and order of the stages. Each configuration has a governing set of equations which convert motion between the system "native" coordinates to other base systems, such as direction cosines, optical field angles, or spherical-polar coordinates. In their simplest form, these equations neglect errors present in real systems. In this paper, a statistical treatment of error source propagation is developed which uses only tolerance data, such as can be obtained from the system mechanical drawings prior to fabrication. It is shown that certain error sources are fully correctable, partially correctable, or uncorrectable, depending upon the goniometer configuration and zeroing technique. The system error budget can be described by a root-sum-of-squares technique with weighting factors describing the sensitivity of each error source. This paper tabulates weighting factors at 67% (k=1) and 95% (k=2) confidence for various levels of maximum travel for each goniometer configuration. As a practical example, this paper works through an error budget used for the procurement of a system at Sandia National Laboratories.
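
    A minimal sketch of the root-sum-of-squares error budget described above, with a sensitivity weight applied to each mechanical tolerance. The tolerances and weights are placeholders, not the tabulated values from the paper.

        import math

        def rss_error_budget(tolerances, weights):
            """Combine independent angular error sources: each tolerance is scaled by
            its sensitivity weight and the results are root-sum-squared."""
            return math.sqrt(sum((w * t) ** 2 for t, w in zip(tolerances, weights)))

        # Example: three error sources (arcsec) with illustrative sensitivity weights.
        print(rss_error_budget(tolerances=[10.0, 5.0, 8.0], weights=[1.0, 0.7, 0.5]))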

  15. Accounting for optical errors in microtensiometry.

    PubMed

    Hinton, Zachary R; Alvarez, Nicolas J

    2018-09-15

    Drop shape analysis (DSA) techniques measure interfacial tension subject to error in image analysis and the optical system. While considerable efforts have been made to minimize image analysis errors, very little work has treated optical errors. There are two main sources of error when considering the optical system: the angle of misalignment and the choice of focal plane. Due to the convoluted nature of these sources, small angles of misalignment can lead to large errors in measured curvature. We demonstrate using microtensiometry the contributions of these sources to measured errors in radius, and, more importantly, deconvolute the effects of misalignment and focal plane. Our findings are expected to have broad implications on all optical techniques measuring interfacial curvature. A geometric model is developed to analytically determine the contributions of misalignment angle and choice of focal plane on measurement error for spherical cap interfaces. This work utilizes a microtensiometer to validate the geometric model and to quantify the effect of both sources of error. For the case of a microtensiometer, an empirical calibration is demonstrated that corrects for optical errors and drastically simplifies implementation. The combination of geometric modeling and experimental results reveals a convoluted relationship between the true and measured interfacial radius as a function of the misalignment angle and choice of focal plane. The validated geometric model produces a full operating window that is strongly dependent on the capillary radius and spherical cap height. In all cases, the contribution of optical errors is minimized when the height of the spherical cap is equivalent to the capillary radius, i.e. a hemispherical interface. The understanding of these errors allows for correct measurement of interfacial curvature and interfacial tension regardless of experimental setup. For the case of microtensiometry, this greatly decreases the time for experimental setup and increases experimental accuracy. In a broad sense, this work outlines the importance of optical errors in all DSA techniques. More specifically, these results have important implications for all microscale and microfluidic measurements of interface curvature. Copyright © 2018 Elsevier Inc. All rights reserved.

  16. SU-F-T-398: Improving Radiotherapy Treatment Planning Using Dual Energy Computed Tomography Based Tissue Characterization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomic, N; Bekerat, H; Seuntjens, J

    Purpose: Both kVp settings and the geometric distribution of various materials lead to significant changes in the HU values, showing the largest discrepancy for high-Z materials and for the lowest CT scanning kVp setting. On the other hand, the dose distributions around low-energy brachytherapy sources are highly dependent on the architecture and composition of tissue heterogeneities in and around the implant. Both measurements and Monte Carlo calculations show that improper tissue characterization may lead to calculated dose errors of 90% for low energy and around 10% for higher energy photons. We investigated the ability of dual-energy CT (DECT) to characterize tissue-equivalent materials more accurately. Methods: We used the RMI-467 heterogeneity phantom scanned in DECT mode with 3 different set-ups: first, we placed high electron density (ED) plugs within the outer ring of the phantom; then we arranged high ED plugs within the inner ring; and finally ED plugs were randomly distributed. All three setups were scanned with the same DECT technique using a single-source DECT scanner with fast kVp switching (Discovery CT750HD; GE Healthcare). Images were transferred to a GE Advantage workstation for DECT analysis. Spectral Hounsfield unit curves (SHUACs) were then generated from 50 to 140 keV, in 10-keV increments, for each plug. Results: The dynamic range of Hounsfield units shrinks with increased photon energy as the attenuation coefficients decrease. Our results show that the spread of HUs for the three different geometrical setups is the smallest at 80 keV. Furthermore, among all the energies and all materials presented, the largest difference appears for high-Z tissue-equivalent plugs. Conclusion: Our results suggest that dose calculations at both megavoltage and low photon energies could benefit in the vicinity of bony structures if the 80 keV reconstructed monochromatic CT image is used with the DECT protocol utilized in this work.

  17. Effect of inventory method on niche models: random versus systematic error

    Treesearch

    Heather E. Lintz; Andrew N. Gray; Bruce McCune

    2013-01-01

    Data from large-scale biological inventories are essential for understanding and managing Earth's ecosystems. The Forest Inventory and Analysis Program (FIA) of the U.S. Forest Service is the largest biological inventory in North America; however, the FIA inventory recently changed from an amalgam of different approaches to a nationally-standardized approach in...

  18. Main sources of errors in diagnosis of chronic radiation sickness (in Russian)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Soldatova, V.A.

    1973-11-01

    With the aim of finding out the main sources of errors in the diagnosis of chronic radiation sickness, the author analyzed a total of 500 cases of this sickness in roentgenologists and radiologists sent to the clinic to be examined according to occupational indications. It was shown that the main source of errors when interpreting the observed deviations as occupational was underestimation of the etiological significance of functional and organic diseases of the nervous system, endocrine-vascular dystonia, and also such diseases as hypochromic anemia and chronic infection. The majority of diagnostic errors are explained by insufficient knowledge of the main regularity of forming the picture of chronic radiation sickness and by the absence of the necessary differential diagnosis with general somatic diseases. (auth)

  19. Estimating the sources and transport of nutrients in the Waikato River Basin, New Zealand

    USGS Publications Warehouse

    Alexander, Richard B.; Elliott, Alexander H.; Shankar, Ude; McBride, Graham B.

    2002-01-01

    We calibrated SPARROW (Spatially Referenced Regression on Watershed Attributes) surface water‐quality models using measurements of total nitrogen and total phosphorus from 37 sites in the 13,900‐km2 Waikato River Basin, the largest watershed on the North Island of New Zealand. This first application of SPARROW outside of the United States included watersheds representative of a wide range of natural and cultural conditions and water‐resources data that were well suited for calibrating and validating the models. We applied the spatially distributed model to a drainage network of nearly 5000 stream reaches and 75 lakes and reservoirs to empirically estimate the rates of nutrient delivery (and their levels of uncertainty) from point and diffuse sources to streams, lakes, and watershed outlets. The resulting models displayed relatively small errors; predictions of stream yield (kg ha−1 yr−1) were typically within 30% or less of the observed values at the monitoring sites. There was strong evidence of the accuracy of the model estimates of nutrient sources and the natural rates of nutrient attenuation in surface waters. Estimated loss rates for streams, lakes, and reservoirs agreed closely with experimental measurements and empirical models from New Zealand, North America, and Europe as well as with previous U.S. SPARROW models. The results indicate that the SPARROW modeling technique provides a reliable method for relating experimental data and observations from small catchments to the transport of nutrients in the surface waters of large river basins.

  20. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Vecchio, Alberto; Wickham, Elizabeth D.L.

    The Laser Interferometer Space Antenna (LISA) is expected to provide the largest observational sample of binary systems of faint subsolar mass compact objects, in particular, white dwarfs, whose radiation is monochromatic over most of the LISA observational window. Current astrophysical estimates suggest that the instrument will be able to resolve ~10^4 such systems, with a large fraction of them at frequencies ≳ 3 mHz, where the wavelength of gravitational waves becomes comparable to or shorter than the LISA armlength. This affects the structure of the so-called LISA transfer function, which cannot be treated as constant in this frequency range: it introduces characteristic phase and amplitude modulations that depend on the source location in the sky and the emission frequency. Here we investigate the effect of the LISA transfer function on detection and parameter estimation for monochromatic sources. For signal detection we show that filters constructed by approximating the transfer function as a constant (long-wavelength approximation) introduce a negligible loss of signal-to-noise ratio--the fitting factor always exceeds 0.97--for f ≤ 10 mHz, therefore in a frequency range where one would actually expect the approximation to fail. For parameter estimation, we conclude that in the range 3 mHz ≲ f ≲ 30 mHz the errors associated with parameter measurements differ from ≈5% up to a factor ~10 (depending on the actual source parameters and emission frequency) with respect to those computed using the long-wavelength approximation.

  1. Decomposition of Sources of Errors in Seasonal Streamflow Forecasting over the U.S. Sunbelt

    NASA Technical Reports Server (NTRS)

    Mazrooei, Amirhossein; Sinah, Tusshar; Sankarasubramanian, A.; Kumar, Sujay V.; Peters-Lidard, Christa D.

    2015-01-01

    Seasonal streamflow forecasts, contingent on climate information, can be utilized to ensure water supply for multiple uses including municipal demands, hydroelectric power generation, and for planning agricultural operations. However, uncertainties in the streamflow forecasts pose significant challenges in their utilization in real-time operations. In this study, we systematically decompose various sources of errors in developing seasonal streamflow forecasts from two Land Surface Models (LSMs) (Noah3.2 and CLM2), which are forced with downscaled and disaggregated climate forecasts. In particular, the study quantifies the relative contributions of the sources of errors from LSMs, climate forecasts, and downscaling/disaggregation techniques in developing seasonal streamflow forecast. For this purpose, three-month-ahead seasonal precipitation forecasts from the ECHAM4.5 general circulation model (GCM) were statistically downscaled from 2.8° to 1/8° spatial resolution using principal component regression (PCR) and then temporally disaggregated from monthly to daily time step using the kernel-nearest neighbor (K-NN) approach. For other climatic forcings, excluding precipitation, we considered the North American Land Data Assimilation System version 2 (NLDAS-2) hourly climatology over the years 1979 to 2010. Then the selected LSMs were forced with precipitation forecasts and NLDAS-2 hourly climatology to develop retrospective seasonal streamflow forecasts over a period of 20 years (1991-2010). Finally, the performance of LSMs in forecasting streamflow under different schemes was analyzed to quantify the relative contribution of various sources of errors in developing seasonal streamflow forecast. Our results indicate that the most dominant source of errors during winter and fall seasons is the errors due to ECHAM4.5 precipitation forecasts, while the temporal disaggregation scheme contributes the maximum errors during summer season.
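
    A hypothetical sketch of the K-NN temporal disaggregation step described above (assuming a common form of the method, not the study's implementation): a forecast monthly total is disaggregated to daily values by resampling the daily pattern of one of its k nearest historical months. The kernel weights, data, and function names are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(0)

        def knn_disaggregate(monthly_forecast, hist_monthly, hist_daily, k=5):
            """Disaggregate one monthly precipitation total to daily values by
            resampling the daily pattern of a k-nearest historical month
            (a common form of K-NN disaggregation; details here are illustrative)."""
            dist = np.abs(hist_monthly - monthly_forecast)
            nearest = np.argsort(dist)[:k]
            # Kernel weights decreasing with rank (1/1, 1/2, ..., 1/k), normalized.
            w = 1.0 / np.arange(1, k + 1)
            w /= w.sum()
            pick = rng.choice(nearest, p=w)
            pattern = hist_daily[pick] / hist_daily[pick].sum()   # daily fractions
            return monthly_forecast * pattern

        # Hypothetical 30-year history of monthly totals (mm) and 30-day daily fields.
        hist_daily = rng.gamma(shape=0.5, scale=6.0, size=(30, 30))
        hist_monthly = hist_daily.sum(axis=1)
        daily = knn_disaggregate(95.0, hist_monthly, hist_daily)
        print(daily.round(1), daily.sum())   # daily series preserving the monthly total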

  2. Black carbon emissions from Russian diesel sources. Case study of Murmansk

    DOE PAGES

    Evans, M.; Kholod, N.; Malyshev, V.; ...

    2015-07-27

    Black carbon (BC) is a potent pollutant because of its effects on climate change, ecosystems and human health. Black carbon has a particularly pronounced impact as a climate forcer in the Arctic because of its effect on snow albedo and cloud formation. We have estimated BC emissions from diesel sources in the Murmansk Region and Murmansk City, the largest city in the world above the Arctic Circle. In this study we developed a detailed inventory of diesel sources including on-road vehicles, off-road transport (mining, locomotives, construction and agriculture), ships and diesel generators. For on-road transport, we conducted several surveys to understand the vehicle fleet and driving patterns, and, for all sources, we also relied on publicly available local data sets and analysis. We calculated that BC emissions in the Murmansk Region were 0.40 Gg in 2012. The mining industry is the largest source of BC emissions in the region, emitting 69 % of all BC emissions because of its large diesel consumption and absence of emissions controls. On-road vehicles are the second largest source, emitting about 13 % of emissions. Old heavy duty trucks are the major source of emissions. Emission controls on new vehicles limit total emissions from on-road transportation. Vehicle traffic and fleet surveys show that many of the older cars on the registry are lightly or never used. We also estimated that total BC emissions from diesel sources in Russia were 50.8 Gg in 2010, and on-road transport contributed 49 % of diesel BC emissions. Agricultural machinery is also a significant source Russia-wide, in part because of the lack of controls on off-road vehicles.
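
    A minimal bottom-up inventory sketch of the general form used in such studies, emissions = fuel burned x emission factor, summed over source categories. All fuel amounts and emission factors below are hypothetical placeholders, not values from the study.

        # Minimal bottom-up black-carbon inventory: E = fuel burned x BC emission factor.
        # All numbers below are hypothetical placeholders, not data from the study.
        sources = {
            # source:            (diesel burned, kt/yr, BC emission factor, g BC per kg fuel)
            "mining off-road":    (250.0, 1.1),
            "on-road vehicles":   (180.0, 0.3),
            "locomotives":        (40.0, 0.9),
            "ships":              (30.0, 0.5),
            "diesel generators":  (15.0, 0.8),
        }

        emissions_gg = {
            name: fuel_kt * 1e6 * ef_g_per_kg / 1e9   # kt fuel -> kg; g -> Gg
            for name, (fuel_kt, ef_g_per_kg) in sources.items()
        }
        total = sum(emissions_gg.values())
        for name, e in sorted(emissions_gg.items(), key=lambda kv: -kv[1]):
            print(f"{name:18s} {e:6.3f} Gg  ({100 * e / total:4.1f} % of total)")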

  3. Black carbon emissions from Russian diesel sources. Case study of Murmansk

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Evans, M.; Kholod, N.; Malyshev, V.

    Black carbon (BC) is a potent pollutant because of its effects on climate change, ecosystems and human health. Black carbon has a particularly pronounced impact as a climate forcer in the Arctic because of its effect on snow albedo and cloud formation. We have estimated BC emissions from diesel sources in the Murmansk Region and Murmansk City, the largest city in the world above the Arctic Circle. In this study we developed a detailed inventory of diesel sources including on-road vehicles, off-road transport (mining, locomotives, construction and agriculture), ships and diesel generators. For on-road transport, we conducted several surveys to understand the vehicle fleet and driving patterns, and, for all sources, we also relied on publicly available local data sets and analysis. We calculated that BC emissions in the Murmansk Region were 0.40 Gg in 2012. The mining industry is the largest source of BC emissions in the region, emitting 69 % of all BC emissions because of its large diesel consumption and absence of emissions controls. On-road vehicles are the second largest source, emitting about 13 % of emissions. Old heavy duty trucks are the major source of emissions. Emission controls on new vehicles limit total emissions from on-road transportation. Vehicle traffic and fleet surveys show that many of the older cars on the registry are lightly or never used. We also estimated that total BC emissions from diesel sources in Russia were 50.8 Gg in 2010, and on-road transport contributed 49 % of diesel BC emissions. Agricultural machinery is also a significant source Russia-wide, in part because of the lack of controls on off-road vehicles.

  4. A Very Simple Method to Calculate the (Positive) Largest Lyapunov Exponent Using Interval Extensions

    NASA Astrophysics Data System (ADS)

    Mendes, Eduardo M. A. M.; Nepomuceno, Erivelton G.

    2016-12-01

    In this letter, a very simple method to calculate the positive Largest Lyapunov Exponent (LLE) based on the concept of interval extensions and using the original equations of motion is presented. The exponent is estimated from the slope of the line derived from the lower bound error when considering two interval extensions of the original system. It is shown that the algorithm is robust, fast and easy to implement and can be considered an alternative to other algorithms available in the literature. The method has been successfully tested in five well-known systems: the Logistic, Hénon, Lorenz, and Rössler equations and the Mackey-Glass system.
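
    A minimal sketch of the idea as described, applied to the logistic map: iterate two algebraically equivalent but differently written forms (interval extensions) of the map from the same initial condition, take their growing difference as the lower-bound error, and estimate the LLE from the slope of its logarithm over the exponential-growth region. Parameters below are chosen only for illustration.

        import numpy as np

        # Two interval extensions (algebraically equivalent forms) of the logistic map.
        f1 = lambda x, r: r * x * (1.0 - x)
        f2 = lambda x, r: r * x - r * x * x

        r, x1, x2, n = 4.0, 0.1, 0.1, 80
        delta = np.empty(n)
        for i in range(n):
            x1, x2 = f1(x1, r), f2(x2, r)
            delta[i] = abs(x1 - x2)           # lower-bound error between the two extensions

        # Fit the slope of log(error) over the exponential-growth region, i.e. before
        # the difference saturates at the attractor scale.
        end = int(np.argmax(delta >= 0.1)) if np.any(delta >= 0.1) else n
        idx = np.flatnonzero(delta[:end] > 0)
        slope, _ = np.polyfit(idx, np.log(delta[idx]), 1)
        print(f"estimated LLE ~ {slope:.3f}   (ln 2 ~ 0.693 for the logistic map at r = 4)")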

  5. [Improving blood safety: errors management in transfusion medicine].

    PubMed

    Bujandrić, Nevenka; Grujić, Jasmina; Krga-Milanović, Mirjana

    2014-01-01

    The concept of blood safety includes the entire transfusion chain, starting with the collection of blood from the blood donor and ending with blood transfusion to the patient. The concept involves a quality management system for the systematic monitoring of adverse reactions and incidents regarding the blood donor or patient. Monitoring of near-miss errors shows the critical points in the working process and increases transfusion safety. The aim of the study was to present the analysis results of adverse and unexpected events in transfusion practice with a potential risk to the health of blood donors and patients. A one-year retrospective study was based on the collection, analysis and interpretation of written reports on medical errors in the Blood Transfusion Institute of Vojvodina. Errors were distributed according to the type, frequency and part of the working process where they occurred. Possible causes and corrective actions were described for each error. The study showed that there were no errors with potential health consequences for the blood donor/patient. Errors with potentially damaging consequences for patients were detected throughout the entire transfusion chain. Most of the errors were identified in the preanalytical phase. The human factor was responsible for the largest number of errors. The error reporting system has an important role in error management and the reduction of transfusion-related risk of adverse events and incidents. The ongoing analysis reveals the strengths and weaknesses of the entire process and indicates the necessary changes. Errors in transfusion medicine can be avoided in a large percentage of cases, and prevention is cost-effective, systematic and applicable.

  6. Hybrid Correlation Algorithms. A Bridge Between Feature Matching and Image Correlation,

    DTIC Science & Technology

    1979-11-01

    spatially into groups of pixels. The intensity level preprocessing is designed to compensate for any biases or gain changes in the system; whereas... number of error sources that affect the performance of the system. It would be desirable to lump these errors into generic categories in discussing... system performance rather than treating each error source separately. Such a generic categorization should possess the following properties: 1. The

  7. Estimated withdrawals and use of freshwater in Vermont, 1990

    USGS Publications Warehouse

    Horn, M.A.; Medalie, Laura

    1996-01-01

    Estimated freshwater withdrawals during 1990 in Vermont totaled about 632 million gallons per day. The largest withdrawals were for thermoelectric-power generation (82 percent), industrial use (7 percent), and public supply (6 percent). Most withdrawals, 587 million gallons per day, were made from surface-water sources as compared to 44.9 million gallons per day from ground-water sources. The largest withdrawals were in the Upper Connecticut-Mascoma River Basin (525 million gallons per day). About 17,700 million gallons per day were used instream for hydroelectric-power generation, the largest of which were in the Upper Connecticut-Mascoma and Otter River Basins. Other information describing water-use patterns is shown in tables, bar graphs, pie charts, maps, and accompanying text. The data are aggregated by river basin (hydrologic cataloging unit), and all amounts are reported in million gallons per day.

  8. Personal digital assistant-based drug information sources: potential to improve medication safety.

    PubMed

    Galt, Kimberly A; Rule, Ann M; Houghton, Bruce; Young, Daniel O; Remington, Gina

    2005-04-01

    This study compared the potential for personal digital assistant (PDA)-based drug information sources to minimize potential medication errors dependent on accurate and complete drug information at the point of care. A quality and safety framework for drug information resources was developed to evaluate 11 PDA-based drug information sources. Three drug information sources met the criteria of the framework: Epocrates Rx Pro, Lexi-Drugs, and mobileMICROMEDEX. Medication error types related to drug information at the point of care were then determined. Forty-seven questions were developed to test the potential of the sources to prevent these error types. Pharmacists and physician experts from Creighton University created these questions based on the most common types of questions asked by primary care providers. Three physicians evaluated the drug information sources, rating the source for each question: 1=no information available, 2=some information available, or 3=adequate amount of information available. The mean ratings for the drug information sources were: 2.0 (Epocrates Rx Pro), 2.5 (Lexi-Drugs), and 2.03 (mobileMICROMEDEX). Lexi-Drugs was rated significantly better (t test vs. mobileMICROMEDEX, P=0.05; vs. Epocrates Rx Pro, P=0.01). Lexi-Drugs was found to be the most specific and complete PDA resource available to optimize medication safety by reducing potential errors associated with drug information. No resource was sufficient to address the patient safety information needs for all cases.

  9. Identification of driver errors : overview and recommendations

    DOT National Transportation Integrated Search

    2002-08-01

    Driver error is cited as a contributing factor in most automobile crashes, and although estimates vary by source, driver error is cited as the principal cause of 45 to 75 percent of crashes. However, the specific errors that lead to crashes, and...

  10. Comparing Methods to Assess Intraobserver Measurement Error of 3D Craniofacial Landmarks Using Geometric Morphometrics Through a Digitizer Arm.

    PubMed

    Menéndez, Lumila Paula

    2017-05-01

    Intraobserver error (INTRA-OE) is the difference between repeated measurements of the same variable made by the same observer. The objective of this work was to evaluate INTRA-OE from 3D landmarks registered with a Microscribe, in different datasets: (A) the 3D coordinates, (B) linear measurements calculated from A, and (C) the first six principal component axes. INTRA-OE was analyzed by digitizing 42 landmarks from 23 skulls in three sessions, each two weeks apart. Systematic error was tested through repeated measures ANOVA (ANOVA-RM), while random error was tested through the intraclass correlation coefficient. Results showed that the largest differences between the three observations were found in the first dataset. Some anatomical points like nasion, ectoconchion, temporosphenoparietal, asterion, and temporomandibular presented the highest INTRA-OE. In the second dataset, local distances had higher INTRA-OE than global distances, while the third dataset showed the lowest INTRA-OE. © 2016 American Academy of Forensic Sciences.
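
    An illustrative sketch (simulated data, not the study's measurements) of a one-way random-effects intraclass correlation, ICC(1,1), for one landmark coordinate digitized in three sessions. The abstract does not specify the ICC variant used, so this particular form is an assumption.

        import numpy as np

        def icc_1_1(x):
            """One-way random-effects ICC(1,1): x has shape (subjects, repeated sessions)."""
            n, k = x.shape
            grand = x.mean()
            ms_between = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
            ms_within = ((x - x.mean(axis=1, keepdims=True)) ** 2).sum() / (n * (k - 1))
            return (ms_between - ms_within) / (ms_between + (k - 1) * ms_within)

        rng = np.random.default_rng(1)
        true_coord = rng.normal(50.0, 5.0, size=23)                    # 23 skulls, one coordinate (mm)
        sessions = true_coord[:, None] + rng.normal(0, 0.4, (23, 3))   # 3 digitizing sessions, 0.4 mm noise
        print(f"ICC(1,1) = {icc_1_1(sessions):.3f}")   # values near 1 indicate low random intraobserver error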

  11. Assessment of Computational Fluid Dynamics (CFD) Models for Shock Boundary-Layer Interaction

    NASA Technical Reports Server (NTRS)

    DeBonis, James R.; Oberkampf, William L.; Wolf, Richard T.; Orkwis, Paul D.; Turner, Mark G.; Babinsky, Holger

    2011-01-01

    A workshop on the computational fluid dynamics (CFD) prediction of shock boundary-layer interactions (SBLIs) was held at the 48th AIAA Aerospace Sciences Meeting. As part of the workshop, numerous CFD analysts submitted solutions to four experimentally measured SBLIs. This paper describes the assessment of the CFD predictions. The assessment includes an uncertainty analysis of the experimental data, the definition of an error metric, and the application of that metric to the CFD solutions. The CFD solutions provided very similar levels of error, and in general it was difficult to discern clear trends in the data. For the Reynolds-averaged Navier-Stokes (RANS) methods, the choice of turbulence model appeared to be the largest factor in solution accuracy. Large-eddy simulation methods produced error levels similar to RANS methods but provided superior predictions of normal stresses.

  12. Error-Transparent Quantum Gates for Small Logical Qubit Architectures

    NASA Astrophysics Data System (ADS)

    Kapit, Eliot

    2018-02-01

    One of the largest obstacles to building a quantum computer is gate error, where the physical evolution of the state of a qubit or group of qubits during a gate operation does not match the intended unitary transformation. Gate error stems from a combination of control errors and random single qubit errors from interaction with the environment. While great strides have been made in mitigating control errors, intrinsic qubit error remains a serious problem that limits gate fidelity in modern qubit architectures. Simultaneously, recent developments of small error-corrected logical qubit devices promise significant increases in logical state lifetime, but translating those improvements into increases in gate fidelity is a complex challenge. In this Letter, we construct protocols for gates on and between small logical qubit devices which inherit the parent device's tolerance to single qubit errors which occur at any time before or during the gate. We consider two such devices, a passive implementation of the three-qubit bit flip code, and the author's own [E. Kapit, Phys. Rev. Lett. 116, 150501 (2016), 10.1103/PhysRevLett.116.150501] very small logical qubit (VSLQ) design, and propose error-tolerant gate sets for both. The effective logical gate error rate in these models displays superlinear error reduction with linear increases in single qubit lifetime, proving that passive error correction is capable of increasing gate fidelity. Using a standard phenomenological noise model for superconducting qubits, we demonstrate a realistic, universal one- and two-qubit gate set for the VSLQ, with error rates an order of magnitude lower than those for same-duration operations on single qubits or pairs of qubits. These developments further suggest that incorporating small logical qubits into a measurement based code could substantially improve code performance.

  13. SIRTF Focal Plane Survey: A Pre-flight Error Analysis

    NASA Technical Reports Server (NTRS)

    Bayard, David S.; Brugarolas, Paul B.; Boussalis, Dhemetrios; Kang, Bryan H.

    2003-01-01

    This report contains a pre-flight error analysis of the calibration accuracies expected from implementing the currently planned SIRTF focal plane survey strategy. The main purpose of this study is to verify that the planned strategy will meet focal plane survey calibration requirements (as put forth in the SIRTF IOC-SV Mission Plan [4]), and to quantify the actual accuracies expected. The error analysis was performed by running the Instrument Pointing Frame (IPF) Kalman filter on a complete set of simulated IOC-SV survey data, and studying the resulting propagated covariances. The main conclusion of this study is that all focal plane calibration requirements can be met with the currently planned survey strategy. The associated margins range from 3 to 95 percent, and tend to be smallest for frames having a 0.14" requirement, and largest for frames having a more generous 0.28" (or larger) requirement. The smallest margin of 3 percent is associated with the IRAC 3.6 and 5.8 micron array centers (frames 068 and 069), and the largest margin of 95 percent is associated with the MIPS 160 micron array center (frame 087). For pointing purposes, the most critical calibrations are for the IRS Peakup sweet spots and short wavelength slit centers (frames 019, 023, 052, 028, 034). Results show that these frames are meeting their 0.14" requirements with an expected accuracy of approximately 0.1", which corresponds to a 28 percent margin.

  14. Biased interpretation and memory in children with varying levels of spider fear.

    PubMed

    Klein, Anke M; Titulaer, Geraldine; Simons, Carlijn; Allart, Esther; de Gier, Erwin; Bögels, Susan M; Becker, Eni S; Rinck, Mike

    2014-01-01

    This study investigated multiple cognitive biases in children simultaneously, to investigate whether spider-fearful children display an interpretation bias, a recall bias, and source monitoring errors, and whether these biases are specific for spider-related materials. Furthermore, the independent ability of these biases to predict spider fear was investigated. A total of 121 children filled out the Spider Anxiety and Disgust Screening for Children (SADS-C), and they performed an interpretation task, a memory task, and a Behavioural Assessment Test (BAT). As expected, a specific interpretation bias was found: Spider-fearful children showed more negative interpretations of ambiguous spider-related scenarios, but not of other scenarios. We also found specific source monitoring errors: Spider-fearful children made more fear-related source monitoring errors for the spider-related scenarios, but not for the other scenarios. Only limited support was found for a recall bias. Finally, interpretation bias, recall bias, and source monitoring errors predicted unique variance components of spider fear.

  15. Characterization Approaches to Place Invariant Sites on SI-Traceable Scales

    NASA Technical Reports Server (NTRS)

    Thome, Kurtis

    2012-01-01

    The effort to understand the Earth's climate system requires a complete integration of remote sensing imager data across time and multiple countries. Such an integration necessarily requires ensuring inter-consistency between multiple sensors to create the data sets needed to understand the climate system. Past efforts at inter-consistency have forced agreement between two sensors using sources that are viewed by both sensors at nearly the same time, and thus tend to be near polar regions over snow and ice. The current work describes a method that would provide an absolute radiometric calibration of a sensor rather than an inter-consistency of a sensor relative to another. The approach also relies on defensible error budgets that eventually provide a cross comparison of sensors without systematic errors. The basis of the technique is a model-based, SI-traceable prediction of at-sensor radiance over selected sites. The predicted radiance would be valid for arbitrary view and illumination angles and for any date of interest that is dominated by clear-sky conditions. The effort effectively works to characterize the sites as sources with known top-of-atmosphere radiance, allowing accurate intercomparison of sensor data without the need for coincident views. Data from the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Enhanced Thematic Mapper Plus (ETM+), and Moderate Resolution Imaging Spectroradiometer (MODIS) are used to demonstrate the difficulties of cross calibration as applied to current sensors. Special attention is given to the differences caused in the cross-comparison of sensors in radiance space as opposed to reflectance space. The radiance comparisons lead to significant differences created by the specific solar model used for each sensor. The paper also proposes methods to mitigate the largest error sources in future systems. The results from these historical intercomparisons provide the basis for a set of recommendations to ensure future SI-traceable cross calibration using future missions such as CLARREO and TRUTHS. The paper describes a proposed approach that relies on model-based, SI-traceable predictions of at-sensor radiance over selected sites. The predicted radiance would be valid for arbitrary view and illumination angles and for any date of interest that is dominated by clear-sky conditions. The basis of the method is highly accurate measurements of at-sensor radiance of sufficient quality to understand the spectral and BRDF characteristics of the site and sufficient historical data to develop an understanding of temporal effects from changing surface and atmospheric conditions.

  16. Uncertainty Analysis Principles and Methods

    DTIC Science & Technology

    2007-09-01

    error source. The Data Processor converts binary coded numbers to values, performs D/A curve fitting and applies any correction factors that may be... describes the stages or modules involved in the measurement process. We now need to identify all relevant error sources and develop the mathematical...

  17. Sweat Sodium Concentration: Inter-Unit Variability of a Low Cost, Portable, and Battery Operated Sodium Analyzer.

    PubMed

    Goulet, Eric D B; Baker, Lindsay B

    2017-12-01

    The B-722 Laqua Twin is a low cost, portable, and battery operated sodium analyzer, which can be used for the assessment of sweat sodium concentration. The Laqua Twin is reliable and provides a degree of accuracy similar to more expensive analyzers; however, its interunit measurement error remains unknown. The purpose of this study was to compare the sodium concentration values of 70 sweat samples measured using three different Laqua Twin units. Mean absolute errors, random errors and constant errors among the different Laqua Twins ranged, respectively, from 1.7 to 3.5 mmol/L, 2.5 to 3.7 mmol/L, and -0.6 to 3.9 mmol/L. Proportional errors among Laqua Twins were all < 2%. Based on a within-subject biological variability in sweat sodium concentration of ± 12%, the maximal allowable imprecision among instruments was considered to be ≤ 6%. In that respect, the within (2.9%), between (4.5%), and total (5.4%) measurement error coefficient of variations were all < 6%. For a given sweat sodium concentration value, the largest observed differences among instruments in the mean, lower-bound, and upper-bound errors of measurement were, respectively, 4.7 mmol/L, 2.3 mmol/L, and 7.0 mmol/L. In conclusion, our findings show that the interunit measurement error of the B-722 Laqua Twin is low and methodologically acceptable.
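
    An illustrative calculation (simulated data, not the study's 70 samples) of simple inter-unit agreement statistics: a pooled between-unit coefficient of variation and pairwise mean absolute differences. The study's exact variance decomposition may differ; this is only a sketch of the kind of comparison involved.

        import numpy as np
        from itertools import combinations

        rng = np.random.default_rng(2)

        # Simulated sweat [Na+] (mmol/L) for 70 samples measured on 3 analyzer units
        # (small random noise plus a constant offset per unit).
        true_na = rng.uniform(20, 90, size=70)
        units = true_na[:, None] + rng.normal(0, 1.5, size=(70, 3)) + np.array([0.0, 1.0, -0.5])

        # Per-sample CV across units, pooled as a root-mean-square -> inter-unit CV (%).
        per_sample_cv = units.std(axis=1, ddof=1) / units.mean(axis=1)
        print(f"inter-unit CV: {100 * np.sqrt(np.mean(per_sample_cv ** 2)):.1f} %")

        # Pairwise mean absolute differences between units (mmol/L).
        for i, j in combinations(range(3), 2):
            mad = np.abs(units[:, i] - units[:, j]).mean()
            print(f"unit {i + 1} vs unit {j + 1}: mean absolute difference = {mad:.1f} mmol/L")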

  18. Region 2 Port-area Investigation of Emissions Reduction (R2PIER)

    EPA Science Inventory

    Background Region 2 is home to the Port of New York and New Jersey (Port), the largest marine port on the East Coast and third largest in the nation. The Port is a concentrated source of diesel pollution, as more than 3 million containers move each year on diesel-powered ships, ...

  19. Beyond alpha: an empirical examination of the effects of different sources of measurement error on reliability estimates for measures of individual differences constructs.

    PubMed

    Schmidt, Frank L; Le, Huy; Ilies, Remus

    2003-06-01

    On the basis of an empirical study of measures of constructs from the cognitive domain, the personality domain, and the domain of affective traits, the authors of this study examine the implications of transient measurement error for the measurement of frequently studied individual differences variables. The authors clarify relevant reliability concepts as they relate to transient error and present a procedure for estimating the coefficient of equivalence and stability (L. J. Cronbach, 1947), the only classical reliability coefficient that assesses all 3 major sources of measurement error (random response, transient, and specific factor errors). The authors conclude that transient error exists in all 3 trait domains and is especially large in the domain of affective traits. Their findings indicate that the nearly universal use of the coefficient of equivalence (Cronbach's alpha; L. J. Cronbach, 1951), which fails to assess transient error, leads to overestimates of reliability and undercorrections for biases due to measurement error.

  20. Error in the Honeybee Waggle Dance Improves Foraging Flexibility

    PubMed Central

    Okada, Ryuichi; Ikeno, Hidetoshi; Kimura, Toshifumi; Ohashi, Mizue; Aonuma, Hitoshi; Ito, Etsuro

    2014-01-01

    The honeybee waggle dance communicates the location of profitable food sources, usually with a certain degree of error in the directional information ranging from 10–15° at the lower margin. We simulated one-day colonial foraging to address the biological significance of information error in the waggle dance. When the error was 30° or larger, the waggle dance was not beneficial. If the error was 15°, the waggle dance was beneficial when the food sources were scarce. When the error was 10° or smaller, the waggle dance was beneficial under all the conditions tested. Our simulation also showed that precise information (0–5° error) yielded great success in finding feeders, but also caused failures at finding new feeders, i.e., a high-risk high-return strategy. The observation that actual bees perform the waggle dance with an error of 10–15° might reflect, at least in part, the maintenance of a successful yet risky foraging trade-off. PMID:24569525
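
    A toy Monte Carlo in the spirit of the simulation described, showing only the recruitment-success side of the trade-off: recruits fly toward the advertised feeder with Gaussian directional error and succeed if they pass within a detection radius. The geometry and all parameter values are invented for illustration.

        import numpy as np

        rng = np.random.default_rng(3)

        def hit_rate(angle_error_deg, n_bees=10000, distance=200.0, detect_radius=20.0):
            """Fraction of recruits that end up within detect_radius of the advertised
            feeder when their flight direction has Gaussian error (toy geometry)."""
            err = np.radians(rng.normal(0.0, angle_error_deg, n_bees))
            # Lateral miss distance after flying `distance` with an angular error.
            miss = distance * np.abs(np.sin(err))
            return np.mean(miss <= detect_radius)

        for sigma in (0, 5, 10, 15, 30):
            print(f"directional error sigma = {sigma:2d} deg -> feeder hit rate = {hit_rate(sigma):.2f}")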

  1. Application of Exactly Linearized Error Transport Equations to AIAA CFD Prediction Workshops

    NASA Technical Reports Server (NTRS)

    Derlaga, Joseph M.; Park, Michael A.; Rallabhandi, Sriram

    2017-01-01

    The computational fluid dynamics (CFD) prediction workshops sponsored by the AIAA have created invaluable opportunities in which to discuss the predictive capabilities of CFD in areas in which it has struggled, e.g., cruise drag, high-lift, and sonic boom prediction. While there are many factors that contribute to disagreement between simulated and experimental results, such as modeling or discretization error, quantifying the errors contained in a simulation is important for those who make decisions based on the computational results. The linearized error transport equations (ETE) combined with a truncation error estimate is a method to quantify one source of errors. The ETE are implemented with a complex-step method to provide an exact linearization with minimal source code modifications to CFD and multidisciplinary analysis methods. The equivalency of adjoint and linearized ETE functional error correction is demonstrated. Uniformly refined grids from a series of AIAA prediction workshops demonstrate the utility of ETE for multidisciplinary analysis with a connection between estimated discretization error and (resolved or under-resolved) flow features.
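
    A minimal sketch of the complex-step trick behind the "exact linearization with minimal source code modifications": perturbing an input by a tiny imaginary step gives a derivative free of subtractive cancellation, f'(x) ≈ Im f(x + ih)/h. The residual function below is a made-up scalar stand-in, not the workshop CFD code.

        import numpy as np

        def residual(u):
            """Toy nonlinear 'flow residual' standing in for a CFD residual operator."""
            return np.sin(u) + 0.5 * u ** 2

        def complex_step_derivative(f, u, h=1e-30):
            """Machine-precision dR/du via the complex-step method: Im f(u + ih) / h."""
            return np.imag(f(u + 1j * h)) / h

        u0 = 0.7
        exact = np.cos(u0) + u0                              # analytic derivative of the toy residual
        cs = complex_step_derivative(residual, u0)
        fd = (residual(u0 + 1e-8) - residual(u0)) / 1e-8     # forward difference, for contrast
        print(f"complex step : {cs:.16f}")
        print(f"finite diff  : {fd:.16f}")
        print(f"analytic     : {exact:.16f}")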

  2. Groundwater Pollution Source Identification using Linked ANN-Optimization Model

    NASA Astrophysics Data System (ADS)

    Ayaz, Md; Srivastava, Rajesh; Jain, Ashu

    2014-05-01

    Groundwater is the principal source of drinking water in several parts of the world. Contamination of groundwater has become a serious health and environmental problem today. Human activities including industrial and agricultural activities are generally responsible for this contamination. Identification of the groundwater pollution source is a major step in groundwater pollution remediation. Complete knowledge of the pollution source in terms of its source characteristics is essential to adopt an effective remediation strategy. A groundwater pollution source is said to be identified completely when the source characteristics - location, strength and release period - are known. Identification of an unknown groundwater pollution source is an ill-posed inverse problem. It becomes more difficult for real field conditions, when the lag time between the first reading at the observation well and the time at which the source becomes active is not known. We developed a linked ANN-Optimization model for complete identification of an unknown groundwater pollution source. The model comprises two parts: an optimization model and an ANN model. The decision variables of the linked ANN-Optimization model comprise the source location and release period of the pollution source. An objective function is formulated using the spatial and temporal data of observed and simulated concentrations, and then minimized to identify the pollution source parameters. In the formulation of the objective function, we require the lag time, which is not known. An ANN model with one hidden layer is trained using the Levenberg-Marquardt algorithm to find the lag time. Different combinations of source locations and release periods are used as inputs and lag time is obtained as the output. Performance of the proposed model is evaluated for two- and three-dimensional cases with error-free and erroneous data. Erroneous data were generated by adding uniformly distributed random error (error level 0-10%) to the analytically computed concentration values. The main advantage of the proposed model is that it requires only the upper half of the breakthrough curve and is capable of predicting source parameters when the lag time is not known. Linking the ANN model with the proposed optimization model reduces the dimensionality of the decision variables of the optimization model by one and hence the complexity of the optimization model is reduced. The results show that our proposed linked ANN-Optimization model is able to predict the source parameters for the error-free data accurately. The proposed model was run several times to obtain the mean, standard deviation and interval estimate of the predicted parameters for observations with random measurement errors. It was observed that mean values as predicted by the model were quite close to the exact values. An increasing trend was observed in the standard deviation of the predicted values with increasing level of measurement error. The model appears to be robust and may be efficiently utilized to solve the inverse pollution source identification problem.
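
    A hypothetical sketch of the least-squares source-identification objective described above, using a 1-D instantaneous point-source plume as a stand-in transport model and fixing the source strength for brevity. Everything here (model, parameters, names) is an illustrative assumption, not the linked ANN-Optimization implementation.

        import numpy as np
        from scipy.optimize import minimize

        v, D = 0.5, 0.05          # groundwater velocity (m/d) and dispersion coeff (m^2/d), assumed known

        def conc(x, t, x_src, t_release, mass=1.0):
            """1-D instantaneous point-source plume (toy transport model)."""
            tau = np.maximum(t - t_release, 1e-9)
            return mass / np.sqrt(4 * np.pi * D * tau) * np.exp(-((x - x_src) - v * tau) ** 2 / (4 * D * tau))

        # Synthetic 'observations' at one well, generated from a hidden true source.
        x_well, times = 10.0, np.arange(5.0, 40.0, 1.0)
        obs = conc(x_well, times, x_src=2.0, t_release=3.0) + np.random.default_rng(4).normal(0, 1e-3, times.size)

        def objective(p):
            x_src, t_release = p
            sim = conc(x_well, times, x_src, t_release)
            return np.sum((obs - sim) ** 2)        # spatial/temporal least-squares misfit

        best = minimize(objective, x0=[0.0, 0.0], method="Nelder-Mead")
        # True values used to generate the synthetic data were (2.0, 3.0).
        print("estimated source location and release time:", best.x.round(2))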

  3. RESEARCH AREA -- MOBILE SOURCE EMISSIONS (EMISSIONS CHARACTERIZATION AND PREVENTION BRANCH, APPCD, NRMRL)

    EPA Science Inventory

    The objective of this program is to characterize mobile source emissions, which are one of the largest sources of tropospheric ozone precursor emissions (CO, NOx, and volatile organic compounds) in the U.S. The research objective of the Emissions Characterization and Prevention Br...

  4. The influence of phonological context on the sound errors of a speaker with Wernicke's aphasia.

    PubMed

    Goldmann, R E; Schwartz, M F; Wilshire, C E

    2001-09-01

    A corpus of phonological errors produced in narrative speech by a Wernicke's aphasic speaker (R.W.B.) was tested for context effects using two new methods for establishing chance baselines. A reliable anticipatory effect was found using the second method, which estimated chance from the distance between phoneme repeats in the speech sample containing the errors. Relative to this baseline, error-source distances were shorter than expected for anticipations, but not perseverations. R.W.B.'s anticipation/perseveration ratio was intermediate between that of a nonaphasic error corpus and that of a more severe aphasic speaker (both reported in Schwartz et al., 1994), supporting the view that the anticipatory bias correlates with severity. Finally, R.W.B.'s anticipations favored word-initial segments, although errors and sources did not consistently share word or syllable position. Copyright 2001 Academic Press.

  5. The Sources of Error in Spanish Writing.

    ERIC Educational Resources Information Center

    Justicia, Fernando; Defior, Sylvia; Pelegrina, Santiago; Martos, Francisco J.

    1999-01-01

    Determines the pattern of errors in Spanish spelling. Analyzes and proposes a classification system for the errors made by children in the initial stages of the acquisition of spelling skills. Finds that the diverse forms of only 20 Spanish words produce 36% of the spelling errors in Spanish, and that substitution is the most frequent type of error. (RS)

  6. Geometric error analysis for shuttle imaging spectrometer experiment

    NASA Technical Reports Server (NTRS)

    Wang, S. J.; Ih, C. H.

    1984-01-01

    The demand for more powerful tools for remote sensing and management of Earth resources has steadily increased over the last decade. With the recent advancement of area array detectors, high-resolution multichannel imaging spectrometers can be realistically constructed. The error analysis study for the Shuttle Imaging Spectrometer Experiment system is documented for the purpose of providing information for design, tradeoff, and performance prediction. Error sources including the Shuttle attitude determination and control system, instrument pointing and misalignment, disturbances, ephemeris, Earth rotation, etc., were investigated. Geometric error mapping functions were developed, characterized, and illustrated extensively with tables and charts. Selected ground patterns and the corresponding image distortions were generated for direct visual inspection of how the various error sources affect the appearance of the ground object images.

  7. Accuracy of cited “facts” in medical research articles: A review of study methodology and recalculation of quotation error rate

    PubMed Central

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or “facts,” are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval). PMID:28910404

  8. Accuracy of cited "facts" in medical research articles: A review of study methodology and recalculation of quotation error rate.

    PubMed

    Mogull, Scott A

    2017-01-01

    Previous reviews estimated that approximately 20 to 25% of assertions cited from original research articles, or "facts," are inaccurately quoted in the medical literature. These reviews noted that the original studies were dissimilar and only began to compare the methods of the original studies. The aim of this review is to examine the methods of the original studies and provide a more specific rate of incorrectly cited assertions, or quotation errors, in original research articles published in medical journals. Additionally, the estimate of quotation errors calculated here is based on the ratio of quotation errors to quotations examined (a percent) rather than the more prevalent and weighted metric of quotation errors to the references selected. Overall, this resulted in a lower estimate of the quotation error rate in original medical research articles. A total of 15 studies met the criteria for inclusion in the primary quantitative analysis. Quotation errors were divided into two categories: content ("factual") or source (improper indirect citation) errors. Content errors were further subdivided into major and minor errors depending on the degree that the assertion differed from the original source. The rate of quotation errors recalculated here is 14.5% (10.5% to 18.6% at a 95% confidence interval). These content errors are predominantly, 64.8% (56.1% to 73.5% at a 95% confidence interval), major errors or cited assertions in which the referenced source either fails to substantiate, is unrelated to, or contradicts the assertion. Minor errors, which are an oversimplification, overgeneralization, or trivial inaccuracies, are 35.2% (26.5% to 43.9% at a 95% confidence interval). Additionally, improper secondary (or indirect) citations, which are distinguished from calculations of quotation accuracy, occur at a rate of 10.4% (3.4% to 17.5% at a 95% confidence interval).
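
    An illustrative calculation (hypothetical counts, not the review's data) contrasting the two denominators discussed above, errors per quotation examined versus errors per reference selected, with a Wilson 95% interval for the former. The counts are chosen only so the first ratio lands near the reported 14.5%; the review's actual counts differ.

        import math

        # Hypothetical counts: errors found among quotations checked, and the
        # number of references from which those quotations were drawn.
        quotation_errors, quotations_checked, references_selected = 29, 200, 150

        def wilson_ci(k, n, z=1.96):
            """95% Wilson score interval for a proportion k/n."""
            p = k / n
            denom = 1 + z * z / n
            center = (p + z * z / (2 * n)) / denom
            half = z * math.sqrt(p * (1 - p) / n + z * z / (4 * n * n)) / denom
            return center - half, center + half

        rate_per_quotation = quotation_errors / quotations_checked
        rate_per_reference = quotation_errors / references_selected
        lo95, hi95 = wilson_ci(quotation_errors, quotations_checked)
        print(f"errors per quotation examined : {rate_per_quotation:.1%}  (95% CI {lo95:.1%} to {hi95:.1%})")
        print(f"errors per reference selected : {rate_per_reference:.1%}  (the more heavily weighted metric)")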

  9. Remote sensing of channels and riparian zones with a narrow-beam aquatic-terrestrial LIDAR

    Treesearch

    Jim McKean; Dave Nagel; Daniele Tonina; Philip Bailey; Charles Wayne Wright; Carolyn Bohn; Amar Nayegandhi

    2009-01-01

    The high-resolution Experimental Advanced Airborne Research LIDAR (EAARL) is a new technology for cross-environment surveys of channels and floodplains. EAARL measurements of basic channel geometry, such as wetted cross-sectional area, are within a few percent of those from control field surveys. The largest channel mapping errors are along stream banks. The LIDAR data...

  10. The Pearson-Readhead Survey of Compact Extragalactic Radio Sources from Space. I. The Images

    NASA Astrophysics Data System (ADS)

    Lister, M. L.; Tingay, S. J.; Murphy, D. W.; Piner, B. G.; Jones, D. L.; Preston, R. A.

    2001-06-01

    We present images from a space-VLBI survey using the facilities of the VLBI Space Observatory Programme (VSOP), drawing our sample from the well-studied Pearson-Readhead survey of extragalactic radio sources. Our survey has taken advantage of long space-VLBI baselines and large arrays of ground antennas, such as the Very Long Baseline Array and European VLBI Network, to obtain high-resolution images of 27 active galactic nuclei and to measure the core brightness temperatures of these sources more accurately than is possible from the ground. A detailed analysis of the source properties is given in accompanying papers. We have also performed an extensive series of simulations to investigate the errors in VSOP images caused by the relatively large holes in the (u,v)-plane when sources are observed near the orbit normal direction. We find that while the nominal dynamic range (defined as the ratio of map peak to off-source error) often exceeds 1000:1, the true dynamic range (map peak to on-source error) is only about 30:1 for relatively complex core-jet sources. For sources dominated by a strong point source, this value rises to approximately 100:1. We find the true dynamic range to be a relatively weak function of the difference in position angle (P.A.) between the jet P.A. and u-v coverage major axis P.A. For regions with low signal-to-noise ratios, typically located down the jet away from the core, large errors can occur, causing spurious features in VSOP images that should be interpreted with caution.

  11. Positive sliding mode control for blood glucose regulation

    NASA Astrophysics Data System (ADS)

    Menani, Karima; Mohammadridha, Taghreed; Magdelaine, Nicolas; Abdelaziz, Mourad; Moog, Claude H.

    2017-11-01

    Biological systems involving positive variables, such as concentrations, are examples of so-called positive systems. This is the case of the glycemia-insulinemia system considered in this paper. To cope with these physical constraints, it is shown that a positive sliding mode control (SMC) can be designed for glycemia regulation. The largest positive invariant set (PIS) is obtained for the insulinemia subsystem in open and closed loop. The existence of a positive SMC for glycemia regulation is shown here for the first time. Necessary conditions to design the sliding surface and the discontinuity gain are derived to guarantee a positive SMC for the insulin dynamics. SMC is designed to be positive everywhere in the largest closed-loop PIS of the plasma insulin system. Two-stage SMC is employed; the last stage SMC2 block uses the glycemia error to design the desired insulin trajectory. Then the plasma insulin state is forced to track the reference via SMC1. The resulting desired insulin trajectory is the required virtual control input of the glycemia system to eliminate blood glucose (BG) error. The positive control is tested in silico on a type-1 diabetic patient model derived from real-life clinical data.

  12. Determining Methane Leak Locations and Rates with a Wireless Network Composed of Low-Cost, Printed Sensors

    NASA Astrophysics Data System (ADS)

    Smith, C. J.; Kim, B.; Zhang, Y.; Ng, T. N.; Beck, V.; Ganguli, A.; Saha, B.; Daniel, G.; Lee, J.; Whiting, G.; Meyyappan, M.; Schwartz, D. E.

    2015-12-01

    We will present our progress on the development of a wireless sensor network that will determine the source and rate of detected methane leaks. The targeted leak detection threshold is 2 g/min with a rate estimation error of 20% and localization error of 1 m within an outdoor area of 100 m2. The network itself is composed of low-cost, high-performance sensor nodes based on printed nanomaterials with expected sensitivity below 1 ppmv methane. High sensitivity to methane is achieved by modifying high surface-area-to-volume-ratio single-walled carbon nanotubes (SWNTs) with materials that adsorb methane molecules. Because the modified SWNTs are not perfectly selective to methane, the sensor nodes contain arrays of variously-modified SWNTs to build diversity of response towards gases with adsorption affinity. Methane selectivity is achieved through advanced pattern-matching algorithms of the array's ensemble response. The system is low power and designed to operate for a year on a single small battery. The SWNT sensing elements consume only microwatts. The largest power consumer is the wireless communication, which provides robust, real-time measurement data. Methane leak localization and rate estimation will be performed by machine-learning algorithms built with the aid of computational fluid dynamics simulations of gas plume formation. This sensor system can be broadly applied at gas wells, distribution systems, refineries, and other downstream facilities. It also can be utilized for industrial and residential safety applications, and adapted to other gases and gas combinations.

  13. Simple transfer calibration method for a Cimel Sun-Moon photometer: calculating lunar calibration coefficients from Sun calibration constants.

    PubMed

    Li, Zhengqiang; Li, Kaitao; Li, Donghui; Yang, Jiuchun; Xu, Hua; Goloub, Philippe; Victori, Stephane

    2016-09-20

    The Cimel new technologies allow both daytime and nighttime aerosol optical depth (AOD) measurements. Although the daytime AOD calibration protocols are well established, accurate and simple nighttime calibration is still a challenging task. Standard lunar-Langley and intercomparison calibration methods both require specific conditions in terms of atmospheric stability and site conditions. Additionally, the lunar irradiance model has some known limits on its uncertainty. This paper presents a simple calibration method that transfers the direct-Sun calibration constant, V0,Sun, to the lunar irradiance calibration coefficient, CMoon. Our approach is a pure calculation method, independent of site limits, e.g., Moon phase. The method is also not affected by the lunar irradiance model limitations, which are the largest error source of traditional calibration methods. In addition, this new transfer calibration approach is easy to use in the field since CMoon can be obtained directly once V0,Sun is known. Error analysis suggests that the average uncertainty of CMoon over the 440-1640 nm bands obtained with the transfer method is 2.4%-2.8%, depending on the V0,Sun approach (Langley or intercomparison), which is theoretically comparable with that of the lunar-Langley approach. In this paper, the Sun-Moon transfer and the Langley methods are compared based on site measurements in Beijing, and the day-night measurement continuity and performance are analyzed.

  14. Revisiting forest road retirement

    Treesearch

    Randy Kolka; Mathew Smidt

    2001-01-01

    Determining the sources of nonpoint source pollution in a watershed is difficult, although the largest source of sediment in forested systems is from skid trails, haul roads, and landings associated with forest harvesting (Ketcheson et al., 1999; Swift, 1988). The transport of sediment to streams and subsequent sedimentation leads to the loss of...

  15. Systematic Errors in an Air Track Experiment.

    ERIC Educational Resources Information Center

    Ramirez, Santos A.; Ham, Joe S.

    1990-01-01

    Errors found in a common physics experiment to measure acceleration resulting from gravity using a linear air track are investigated. Glider position at release and initial velocity are shown to be sources of systematic error. (CW)

  16. Mercury Isotopes in Earth and Environmental Sciences

    NASA Astrophysics Data System (ADS)

    Blum, Joel D.; Sherman, Laura S.; Johnson, Marcus W.

    2014-05-01

    Virtually all biotic, dark abiotic, and photochemical transformations of mercury (Hg) produce Hg isotope fractionation, which can be either mass dependent (MDF) or mass independent (MIF). The largest range in MDF is observed among geological materials and rainfall impacted by anthropogenic sources. The largest positive MIF of Hg isotopes (odd-mass excess) is caused by photochemical degradation of methylmercury in water. This signature is retained through the food web and measured in all freshwater and marine fish. The largest negative MIF of Hg isotopes (odd-mass deficit) is caused by photochemical reduction of inorganic Hg and has been observed in Arctic snow and plant foliage. Ratios of MDF to MIF and ratios of 199Hg MIF to 201Hg MIF are often diagnostic of biogeochemical reaction pathways. More than a decade of research demonstrates that Hg isotopes can be used to trace sources, biogeochemical cycling, and reactions involving Hg in the environment.

  17. Altitude errors arising from antenna/satellite attitude errors - Recognition and reduction

    NASA Technical Reports Server (NTRS)

    Godbey, T. W.; Lambert, R.; Milano, G.

    1972-01-01

    A review is presented of the three basic types of pulsed radar altimeter designs, as well as the source and form of altitude bias errors arising from antenna/satellite attitude errors in each design type. A quantitative comparison of the three systems was also made.

  18. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    NASA Technical Reports Server (NTRS)

    Olson, Corwin; Long, Anne; Carpenter, J. Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  19. Sensitivity of Magnetospheric Multi-Scale (MMS) Mission Navigation Accuracy to Major Error Sources

    NASA Technical Reports Server (NTRS)

    Olson, Corwin; Long, Anne; Carpenter, J. Russell

    2011-01-01

    The Magnetospheric Multiscale (MMS) mission consists of four satellites flying in formation in highly elliptical orbits about the Earth, with a primary objective of studying magnetic reconnection. The baseline navigation concept is independent estimation of each spacecraft state using GPS pseudorange measurements referenced to an Ultra Stable Oscillator (USO) with accelerometer measurements included during maneuvers. MMS state estimation is performed onboard each spacecraft using the Goddard Enhanced Onboard Navigation System (GEONS), which is embedded in the Navigator GPS receiver. This paper describes the sensitivity of MMS navigation performance to two major error sources: USO clock errors and thrust acceleration knowledge errors.

  20. General error analysis in the relationship between free thyroxine and thyrotropin and its clinical relevance.

    PubMed

    Goede, Simon L; Leow, Melvin Khee-Shing

    2013-01-01

    This treatise investigates error sources in measurements applicable to the hypothalamus-pituitary-thyroid (HPT) system of analysis for homeostatic set point computation. The hypothalamus-pituitary transfer characteristic (HP curve) describes the relationship between plasma free thyroxine [FT4] and thyrotropin [TSH]. We define the origin, types, causes, and effects of errors that are commonly encountered in thyroid function test (TFT) measurements and examine how we can interpret these to construct a reliable HP function for set point establishment. The error sources in the clinical measurement procedures are identified and analyzed in relation to the constructed HP model. The main sources of measurement and interpretation uncertainties are (1) diurnal variations in [TSH], (2) TFT measurement variations influenced by the timing of thyroid medications, (3) error sensitivity in ranges of [TSH] and [FT4] (laboratory assay dependent), (4) rounding/truncation of decimals in [FT4], which in turn amplifies curve-fitting errors in the [TSH] domain in the lower [FT4] range, and (5) memory effects (a rate-independent hysteresis effect). When the main uncertainties in TFTs are identified and analyzed, we can find the most acceptable model space with which to construct the best HP function and the related set point area.

  1. Unaccounted source of systematic errors in measurements of the Newtonian gravitational constant G

    NASA Astrophysics Data System (ADS)

    DeSalvo, Riccardo

    2015-06-01

    Many precision measurements of G have produced a spread of results incompatible with measurement errors. Clearly an unknown source of systematic errors is at work. It is proposed here that most of the discrepancies derive from subtle deviations from Hooke's law, caused by avalanches of entangled dislocations. The idea is supported by deviations from linearity reported by experimenters measuring G, similarly to what is observed, on a larger scale, in low-frequency spring oscillators. Some mitigating experimental apparatus modifications are suggested.

  2. C-band radar pulse Doppler error: Its discovery, modeling, and elimination

    NASA Technical Reports Server (NTRS)

    Krabill, W. B.; Dempsey, D. J.

    1978-01-01

    The discovery of a C Band radar pulse Doppler error is discussed and use of the GEOS 3 satellite's coherent transponder to isolate the error source is described. An analysis of the pulse Doppler tracking loop is presented and a mathematical model for the error was developed. Error correction techniques were developed and are described including implementation details.

  3. Adaptive Sparse Representation for Source Localization with Gain/Phase Errors

    PubMed Central

    Sun, Ke; Liu, Yimin; Meng, Huadong; Wang, Xiqin

    2011-01-01

    Sparse representation (SR) algorithms can be implemented for high-resolution direction of arrival (DOA) estimation. Additionally, SR can effectively separate the coherent signal sources because the spectrum estimation is based on the optimization technique, such as the L1 norm minimization, but not on subspace orthogonality. However, in the actual source localization scenario, an unknown gain/phase error between the array sensors is inevitable. Due to this nonideal factor, the predefined overcomplete basis mismatches the actual array manifold so that the estimation performance is degraded in SR. In this paper, an adaptive SR algorithm is proposed to improve the robustness with respect to the gain/phase error, where the overcomplete basis is dynamically adjusted using multiple snapshots and the sparse solution is adaptively acquired to match with the actual scenario. The simulation results demonstrate the estimation robustness to the gain/phase error using the proposed method. PMID:22163875
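
    To make the sparse-representation step concrete, the sketch below recovers a sparse spatial spectrum from a single uniform-linear-array snapshot by L1-regularized least squares, solved with a plain iterative soft-thresholding loop. It is a generic illustration rather than the authors' adaptive algorithm, and the array size, half-wavelength spacing, angle grid, source angles, and regularization weight are all assumed values.

```python
import numpy as np

def steering_matrix(n_sensors, grid_deg, spacing=0.5):
    """Overcomplete basis of steering vectors for a uniform linear array."""
    angles = np.deg2rad(np.asarray(grid_deg, dtype=float))
    k = 2 * np.pi * spacing * np.arange(n_sensors)[:, None]
    return np.exp(1j * k * np.sin(angles)[None, :])

def ista_doa(y, A, lam=0.1, n_iter=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft-thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant
    x = np.zeros(A.shape[1], dtype=complex)
    for _ in range(n_iter):
        r = x - step * (A.conj().T @ (A @ x - y))   # gradient step
        mag = np.maximum(np.abs(r) - step * lam, 0.0)
        x = mag * np.exp(1j * np.angle(r))          # complex soft threshold
    return x

# Example: two coherent sources at -10 and 25 degrees seen by an 8-element array
grid = np.arange(-90.0, 91.0, 1.0)
A = steering_matrix(8, grid)
x_true = np.zeros(grid.size, dtype=complex)
x_true[np.searchsorted(grid, -10.0)] = 1.0
x_true[np.searchsorted(grid, 25.0)] = 0.8
y = A @ x_true + 0.05 * (np.random.randn(8) + 1j * np.random.randn(8))

x_hat = ista_doa(y, A)
print(grid[np.abs(x_hat) > 0.3 * np.abs(x_hat).max()])   # clusters near -10 and 25
```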

  4. Subaperture test of wavefront error of large telescopes: error sources and stitching performance simulations

    NASA Astrophysics Data System (ADS)

    Chen, Shanyong; Li, Shengyi; Wang, Guilin

    2014-11-01

    The wavefront error of large telescopes must be measured to check system quality and to estimate the misalignment of the telescope optics, including the primary, the secondary, and so on. The measurement is usually realized with a focal-plane interferometer and an autocollimator flat (ACF) of the same aperture as the telescope. However, this is challenging for meter-class telescopes because of the high cost and technological difficulty of producing a large ACF. A subaperture test with a smaller ACF is therefore proposed, in combination with advanced stitching algorithms. Major error sources include the surface error of the ACF, misalignment of the ACF, and measurement noise. Different error sources have different impacts on the wavefront error. The surface error of the ACF behaves essentially as a systematic error, and its astigmatism accumulates and is enlarged if the azimuth of the subapertures remains fixed. It is difficult to calibrate the ACF accurately because it undergoes considerable deformation induced by gravity or mechanical clamping force. Therefore, a self-calibrated stitching algorithm is employed to separate the ACF surface error from the subaperture wavefront error, and we suggest that the ACF be rotated around the optical axis of the telescope during the subaperture test. The algorithm is also able to correct subaperture tip-tilt based on overlapping consistency (see the sketch below). Since all subaperture measurements are obtained in the same imaging plane, the lateral shift of the subapertures is always known and the true overlapping points can be recognized in this plane; lateral positioning error of the subapertures therefore has no impact on the stitched wavefront. In contrast, angular positioning error changes the azimuth of the ACF and hence changes the systematic error. We propose an angularly uneven layout of subapertures to minimize the stitching error, which runs counter to common practice. Finally, measurement noise can never be corrected, but it can be suppressed by averaging and environmental control. We simulate the performance of the stitching algorithm in handling the surface error and misalignment of the ACF and in suppressing noise, which provides guidelines for the optomechanical design of the stitching test system.
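
    A minimal sketch of the overlap-consistency idea used for tip-tilt correction: each subaperture map is allowed free piston/tip/tilt terms, which are found by a least-squares plane fit to the discrepancy in the overlap region. The synthetic wavefront and the simple two-subaperture geometry are assumptions for illustration, not the self-calibrated stitching algorithm described above.

```python
import numpy as np

# Two overlapping subaperture wavefront maps on a common (x, y) grid; the second
# map differs from the first only by an unknown piston/tip/tilt.
x, y = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
shared = 0.2 * (x**2 + y**2)                       # wavefront seen by both maps
sub1 = shared.copy()
sub2 = shared + 0.05 + 0.03 * x - 0.02 * y         # piston + tip + tilt offset

overlap = x > 0.0                                  # region covered by both maps
diff = (sub2 - sub1)[overlap]

# Least-squares fit of a plane a + b*x + c*y to the discrepancy in the overlap
G = np.column_stack([np.ones(diff.size), x[overlap], y[overlap]])
coef, *_ = np.linalg.lstsq(G, diff, rcond=None)
sub2_corrected = sub2 - (coef[0] + coef[1] * x + coef[2] * y)

print("fitted piston/tip/tilt:", np.round(coef, 4))
print("overlap rms after correction:", float(np.std((sub2_corrected - sub1)[overlap])))
```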

  5. Performance monitoring during associative learning and its relation to obsessive-compulsive characteristics.

    PubMed

    Doñamayor, Nuria; Dinani, Jakob; Römisch, Manuel; Ye, Zheng; Münte, Thomas F

    2014-10-01

    Neural responses to performance errors and external feedback have been suggested to be altered in obsessive-compulsive disorder. In the current study, an associative learning task was used in healthy participants assessed for obsessive-compulsive symptoms by the OCI-R questionnaire. The task included a condition with equivocal feedback that did not inform about the participants' performance. Following incorrect responses, an error-related negativity and an error positivity were observed. In the feedback phase, the largest feedback-related negativity was observed following equivocal feedback. Theta and beta oscillatory components were found following incorrect and correct responses, respectively, and an increase in theta power was associated with negative and equivocal feedback. Changes over time were also explored as an indicator for possible learning effects. Finally, event-related potentials and oscillatory components were found to be uncorrelated with OCI-R scores in the current non-clinical sample. Copyright © 2014 Elsevier B.V. All rights reserved.

  6. A study of GPS measurement errors due to noise and multipath interference for CGADS

    NASA Technical Reports Server (NTRS)

    Axelrad, Penina; MacDoran, Peter F.; Comp, Christopher J.

    1996-01-01

    This report describes a study performed by the Colorado Center for Astrodynamics Research (CCAR) on GPS measurement errors in the Codeless GPS Attitude Determination System (CGADS) due to noise and multipath interference. Preliminary simulation models of the CGADS receiver and orbital multipath are described. The standard FFT algorithm for processing the codeless data is described, and two alternative algorithms - an auto-regressive/least-squares (AR-LS) method and a combined adaptive notch filter/least-squares (ANF-ALS) method - are also presented. Effects of system noise, quantization, baseband frequency selection, and Doppler rates on the accuracy of phase estimates with each of the processing methods are shown. Typical electrical phase errors for the AR-LS method are 0.2 degrees, compared to 0.3 and 0.5 degrees for the FFT and ANF-ALS algorithms, respectively. Doppler rate was found to have the largest effect on the performance.
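
    As a toy illustration of the two kinds of phase estimators being compared (FFT-bin phase versus a least-squares projection), the sketch below estimates the phase of a noisy baseband tone; the sample rate, tone frequency (chosen to fall exactly on an FFT bin), and noise level are assumptions, not CGADS parameters.

```python
import numpy as np

fs, n = 4096.0, 4096                      # assumed sample rate and record length
f0, true_phase = 123.0, 0.7               # tone exactly on an FFT bin; phase in rad
t = np.arange(n) / fs
sig = np.cos(2 * np.pi * f0 * t + true_phase) + 0.2 * np.random.randn(n)

# FFT method: phase of the bin at the tone frequency
spec = np.fft.rfft(sig * np.hanning(n))
k = int(round(f0 * n / fs))
phase_fft = np.angle(spec[k])

# Least-squares method: project onto in-phase/quadrature components at f0
G = np.column_stack([np.cos(2 * np.pi * f0 * t), -np.sin(2 * np.pi * f0 * t)])
(a, b), *_ = np.linalg.lstsq(G, sig, rcond=None)
phase_ls = np.arctan2(b, a)

print(np.degrees([true_phase, phase_fft, phase_ls]))   # all close to 40.1 degrees
```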

  7. Regression Equations for Monthly and Annual Mean and Selected Percentile Streamflows for Ungaged Rivers in Maine

    USGS Publications Warehouse

    Dudley, Robert W.

    2015-12-03

    The largest average errors of prediction are associated with regression equations for the lowest streamflows derived for months during which the lowest streamflows of the year occur (such as the 5th and 1st monthly percentiles for August and September). The regression equations have been derived on the basis of streamflow and basin characteristics data for unregulated, rural drainage basins without substantial streamflow or drainage modifications (for example, diversions and (or) regulation by dams or reservoirs, tile drainage, irrigation, channelization, and impervious paved surfaces); therefore, using the equations for regulated or urbanized basins with substantial streamflow or drainage modifications will yield results of unknown error. Input basin characteristics derived using techniques or datasets other than those documented in this report, or using values outside the ranges used to develop these regression equations, also will yield results of unknown error.

  8. Improvement of Aerosol Optical Depth Retrieval over Hong Kong from a Geostationary Meteorological Satellite Using Critical Reflectance with Background Optical Depth Correction

    NASA Technical Reports Server (NTRS)

    Kim, Mijin; Kim, Jhoon; Wong, Man Sing; Yoon, Jongmin; Lee, Jaehwa; Wu, Dong L.; Chan, P.W.; Nichol, Janet E.; Chung, Chu-Yong; Ou, Mi-Lim

    2014-01-01

    Despite continuous efforts to retrieve aerosol optical depth (AOD) using a conventional 5-channel meteorological imager in geostationary orbit, the accuracy in urban areas has been poorer than in other areas, primarily due to complex urban surface properties and mixed aerosol types from different emission sources. The two largest error sources in aerosol retrieval have been aerosol type selection and surface reflectance. In selecting the aerosol type from a single visible channel, the season-dependent aerosol optical properties were adopted from long-term measurements of Aerosol Robotic Network (AERONET) sun photometers. With the aerosol optical properties obtained from the AERONET inversion data, look-up tables were calculated using a radiative transfer code, the Second Simulation of the Satellite Signal in the Solar Spectrum (6S). Surface reflectance was estimated using the clear-sky composite method, a widely used technique for geostationary retrievals. Over East Asia, the AOD retrieved from the Meteorological Imager showed good agreement, although the values were affected by cloud contamination errors. However, the conventional retrieval of the AOD over Hong Kong was largely underestimated due to the lack of information on the aerosol type and surface properties. To detect spatial and temporal variation of aerosol type over the area, the critical reflectance method, a technique to retrieve single scattering albedo (SSA), was applied. Additionally, the background aerosol effect was corrected to improve the accuracy of the surface reflectance over Hong Kong. The AOD retrieved from the modified algorithm was compared to the collocated data measured by AERONET in Hong Kong. The comparison showed that the new aerosol type selection using the critical reflectance and the corrected surface reflectance significantly improved the accuracy of AODs in the Hong Kong area, with a correlation coefficient increase from 0.65 to 0.76 and a regression line change from τMI [basic algorithm] = 0.41τAERONET + 0.16 to τMI [new algorithm] = 0.70τAERONET + 0.01.
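
    To illustrate the look-up-table step in such retrievals, the sketch below inverts a hypothetical pre-computed table of top-of-atmosphere reflectance versus AOD by monotone interpolation; the table values, geometry, and surface term are placeholders, not output of the 6S code used in the study.

```python
import numpy as np

# Hypothetical look-up table: TOA reflectance computed offline for one aerosol
# model, one sun/view geometry, and a fixed surface reflectance (placeholders).
aod_nodes = np.array([0.0, 0.1, 0.2, 0.4, 0.8, 1.6, 3.0])
toa_refl_nodes = np.array([0.060, 0.075, 0.088, 0.110, 0.145, 0.190, 0.230])

def retrieve_aod(measured_toa_refl):
    """Invert the LUT by interpolation; reflectance grows monotonically with AOD."""
    return float(np.interp(measured_toa_refl, toa_refl_nodes, aod_nodes))

print(retrieve_aod(0.100))   # about 0.31 for this placeholder table
```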

  9. A study of partial coherence for identifying interior noise sources and paths on general aviation aircraft

    NASA Technical Reports Server (NTRS)

    Howlett, J. T.

    1979-01-01

    The partial coherence analysis method for noise source/path determination is summarized and the application to a two input, single output system with coherence between the inputs is illustrated. The augmentation of the calculations on a digital computer interfaced with a two channel, real time analyzer is also discussed. The results indicate possible sources of error in the computations and suggest procedures for avoiding these errors.

  10. Simulating a transmon implementation of the surface code, Part I

    NASA Astrophysics Data System (ADS)

    Tarasinski, Brian; O'Brien, Thomas; Rol, Adriaan; Bultink, Niels; Dicarlo, Leo

    Current experimental efforts aim to realize Surface-17, a distance-3 surface-code logical qubit, using transmon qubits in a circuit QED architecture. Following experimental proposals for this device, and currently achieved fidelities on physical qubits, we define a detailed error model that takes experimentally relevant error sources into account, such as amplitude and phase damping, imperfect gate pulses, and coherent errors due to low-frequency flux noise. Using the GPU-accelerated software package 'quantumsim', we simulate the density matrix evolution of the logical qubit under this error model. Combining the simulation results with a minimum-weight matching decoder, we obtain predictions for the error rate of the resulting logical qubit when used as a quantum memory, and estimate the contribution of different error sources to the logical error budget. Research funded by the Foundation for Fundamental Research on Matter (FOM), the Netherlands Organization for Scientific Research (NWO/OCW), IARPA, an ERC Synergy Grant, the China Scholarship Council, and Intel Corporation.
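
    As a small illustration of the kind of physical error channel included in such a model, the sketch below applies single-qubit amplitude-damping and phase-damping Kraus operators to a density matrix with plain numpy; it is a generic textbook channel with assumed damping rates, not the 'quantumsim' error model of the abstract.

```python
import numpy as np

def amplitude_phase_damping(rho, gamma, lam):
    """Apply amplitude damping (gamma) then phase damping (lam) to a 1-qubit rho."""
    a0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - gamma)]])
    a1 = np.array([[0.0, np.sqrt(gamma)], [0.0, 0.0]])
    p0 = np.array([[1.0, 0.0], [0.0, np.sqrt(1.0 - lam)]])
    p1 = np.array([[0.0, 0.0], [0.0, np.sqrt(lam)]])
    rho = a0 @ rho @ a0.conj().T + a1 @ rho @ a1.conj().T
    return p0 @ rho @ p0.conj().T + p1 @ rho @ p1.conj().T

# A |+> state decohering over one gate interval with assumed damping rates
plus = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
rho_out = amplitude_phase_damping(plus, gamma=0.01, lam=0.02)
print(np.round(rho_out, 4))
print("remaining coherence |rho01|:", abs(rho_out[0, 1]))
```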

  11. Determination of microwave vegetation optical depth and water content in the source region of the Yellow River

    NASA Astrophysics Data System (ADS)

    Liu, R.; Wen, J.; Wang, X.

    2017-12-01

    In this study, we use dual polarization brightness temperature observational data at the K frequency band collected by the Micro Wave Radiation Imager (MWRI) on board the Fengyun-3B satellite (FY-3B) to improve the τ-ω model by considering the contribution of water bodies in the pixels to radiation in the wetland area of the Yellow River source region. We define a dual polarization slope parameter and express the surface emissivity in the τ-ω model as the sum of the soil and water body emissivity to retrieve the vegetation optical depth (VOD); however, in regions without water body coverage, we still use the τ-ω model to solve for the VOD. By using the field observation data on the vegetation water content (VWC) in the source region of the Yellow River during the summer of 2012, we establish the regression relationship between the VOD and VWC and retrieve the spatial distribution of the VWC. The results indicate that in the entire source region of the Yellow River in 2012, the VOD was in the range of 0.20-1.20 and the VWC was in the range of 0.20 to 1.40, thereby exhibiting a trend of low values in the west and high values in the east. The area with the largest regional variation is along the Yellow River. We compare the results from remote-sensing estimated and ground-measured vegetation water content, and the root-mean-square error is 0.12. The analysis results indicated that by considering the coverage of seasonal wetlands in the source region of the Yellow River, the microwave remote sensing data collected by the FY-3B MWRI can be used to retrieve the vegetation water content in the source region of the Yellow River.
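
    A minimal sketch of the zeroth-order τ-ω emission model and a numerical solve for the vegetation optical depth from a single brightness temperature; the soil emissivity, single-scattering albedo, temperature, incidence angle, observed value, and the linear VOD-VWC regression coefficient are assumed placeholders, and the water-body mixing term introduced in the study is omitted.

```python
import numpy as np
from scipy.optimize import brentq

def tau_omega_tb(tau, T=290.0, e_soil=0.92, omega=0.05, theta_deg=53.0):
    """Zeroth-order tau-omega brightness temperature (soil and canopy both at T)."""
    gamma = np.exp(-tau / np.cos(np.radians(theta_deg)))   # canopy transmissivity
    canopy = (1.0 - omega) * (1.0 - gamma) * (1.0 + (1.0 - e_soil) * gamma)
    return T * (e_soil * gamma + canopy)

def retrieve_vod(tb_obs, **kwargs):
    """Solve tau_omega_tb(tau) = tb_obs for tau on a bracketing interval."""
    return brentq(lambda tau: tau_omega_tb(tau, **kwargs) - tb_obs, 0.0, 3.0)

vod = retrieve_vod(272.0)          # assumed K-band brightness temperature (K)
vwc = 1.1 * vod                    # placeholder linear VOD-VWC regression
print(round(vod, 2), round(vwc, 2))
```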

  12. Clay-mineral suites, sources, and inferred dispersal routes: Southern California continental shelf

    USGS Publications Warehouse

    Hein, J.R.; Dowling, J.S.; Schuetze, A.; Lee, H.J.

    2003-01-01

    Clay mineralogy is useful in determining the distribution, sources, and dispersal routes of fine-grained sediments. In addition, clay minerals, especially smectite, may control the degree to which contaminants are adsorbed by the sediment. We analyzed 250 shelf sediment samples, 24 river-suspended-sediment samples, and 12 river-bed samples for clay-mineral contents in the Southern California Borderland from Point Conception to the Mexico border. In addition, six samples were analyzed from the Palos Verdes Headland in order to characterize the clay minerals contributed to the offshore from that point source. The <2 μm size fraction was isolated, Mg-saturated, and glycolated before analysis by X-ray diffraction. Semi-quantitative percentages of smectite, illite, and kaolinite plus chlorite were calculated using peak areas and standard weighting factors. Most fine-grained sediment is supplied to the shelf by rivers during major winter storms, especially during El Niño years. The largest sediment fluxes to the region are from the Santa Ynez and Santa Clara Rivers, which drain the Transverse Ranges. The mean clay-mineral suite for the entire shelf sediment data set (26% smectite, 50% illite, 24% kaolinite+chlorite) is closely comparable to that for the mean of all the rivers (31% smectite, 49% illite, 20% kaolinite+chlorite), indicating that the main source of shelf fine-grained sediments is the adjacent rivers. However, regional variations do exist and the shelf is divided into four provinces with characteristic clay-mineral suites. The means of the clay-mineral suites of the two southernmost provinces are within analytical error of the mineral suites of adjacent rivers. The next province to the north includes Santa Monica Bay and has a suite of clay minerals derived from mixing of fine-grained sediments from several sources, both from the north and south. The northernmost province clay-mineral suite matches moderately well that of the adjacent rivers, but does indicate some mixing from sources in adjacent provinces.

  13. Ammonia concentrations at a site in Southern Scotland from 2 yr of continuous measurements

    NASA Astrophysics Data System (ADS)

    Burkhardt, J.; Sutton, M. A.; Milford, C.; Storeton-West, R. L.; Fowler, D.

    Atmospheric ammonia (NH3) concentrations were measured using a continuous-flow annular denuder over a period of 2 yr at a rural site near Edinburgh, Scotland. Meteorological parameters as well as sulphur dioxide (SO2) concentrations were also recorded. The overall arithmetic mean NH3 concentration was 1.4 μg m-3. Although an annual cycle with largest NH3 concentrations in summer was apparent for seasonal geometric mean concentrations, arithmetic mean concentrations were largest in the spring and autumn, indicating the increased importance of occasional high concentration events in these seasons. The NH3 concentrations were influenced by local sources as well as by background concentrations, dependent on wind direction, whereas SO2 geometric standard deviations indicated more distant sources. The daily cycle of NH3 and SO2 concentrations was dependent on wind speed (u). At u < 1 m s-1, NH3 concentrations were smallest and SO2 concentrations were largest around noon, whereas at u > 1 m s-1 this cycle was less pronounced for both gases and NH3 concentrations were largest around 1800 hours. These opposite diurnal cycles may be explained by the interaction of boundary layer mixing with local sources for NH3 and remote sources for SO2. Comparing the ammonia data with critical levels and critical loads shows that the critical level is not exceeded at this site over any averaging time. In contrast, the N critical load would probably be exceeded for moorland vegetation near this site, showing that the contribution of atmospheric NH3 to nitrogen deposition in the long term is a more significant issue than exceedance of critical levels.

  14. Aerosol Retrievals over the Ocean using Channel 1 and 2 AVHRR Data: A Sensitivity Analysis and Preliminary Results

    NASA Technical Reports Server (NTRS)

    Mishchenko, Michael I.; Geogdzhayev, Igor V.; Cairns, Brian; Rossow, William B.; Lacis, Andrew A.

    1999-01-01

    This paper outlines the methodology of interpreting channel 1 and 2 AVHRR radiance data over the oceans and describes a detailed analysis of the sensitivity of monthly averages of retrieved aerosol parameters to the assumptions made in different retrieval algorithms. The analysis is based on using real AVHRR data and exploiting accurate numerical techniques for computing single and multiple scattering and spectral absorption of light in the vertically inhomogeneous atmosphere-ocean system. We show that two-channel algorithms can be expected to provide significantly more accurate and less biased retrievals of the aerosol optical thickness than one-channel algorithms and that imperfect cloud screening and calibration uncertainties are by far the largest sources of errors in the retrieved aerosol parameters. Both underestimating and overestimating aerosol absorption as well as the potentially strong variability of the real part of the aerosol refractive index may lead to regional and/or seasonal biases in optical thickness retrievals. The Angstrom exponent appears to be the most invariant aerosol size characteristic and should be retrieved along with optical thickness as the second aerosol parameter.

  15. Education in medical billing benefits both neurology trainees and academic departments.

    PubMed

    Waugh, Jeff L

    2014-11-11

    The objective of residency training is to produce physicians who can function independently within their chosen subspecialty and practice environment. Skills in the business of medicine, such as clinical billing, are widely applicable in academic and private practices but are not commonly addressed during formal medical education. Residency and fellowship training include limited exposure to medical billing, but our academic department's performance of these skills was inadequate: in 56% of trainee-generated outpatient notes, documentation was insufficient to sustain the chosen billing level. We developed a curriculum to improve the accuracy of documentation and coding and introduced practice changes to address our largest sources of error. In parallel, we developed tools that increased the speed and efficiency of documentation. Over 15 months, we progressively eliminated note devaluation, increased the mean level billed by trainees to nearly match that of attending physicians, and increased outpatient revenue by $34,313/trainee/year. Our experience suggests that inclusion of billing education topics into the formal medical curriculum benefits both academic medical centers and trainees. © 2014 American Academy of Neurology.

  16. Crowd-sourced pictures geo-localization method based on street view images and 3D reconstruction

    NASA Astrophysics Data System (ADS)

    Cheng, Liang; Yuan, Yi; Xia, Nan; Chen, Song; Chen, Yanming; Yang, Kang; Ma, Lei; Li, Manchun

    2018-07-01

    People are increasingly becoming accustomed to taking photos of everyday life in modern cities and uploading them on major photo-sharing social media sites. These sites contain numerous pictures, but some have incomplete or blurred location information. The geo-localization of crowd-sourced pictures enriches the information contained therein, and is applicable to activities such as urban construction, urban landscape analysis, and crime tracking. However, geo-localization faces huge technical challenges. This paper proposes a method for large-scale geo-localization of crowd-sourced pictures. Our approach uses structured, organized Street View images as a reference dataset and employs a three-step strategy of coarse geo-localization by image retrieval, selecting reliable matches by image registration, and fine geo-localization by 3D reconstruction to attach geographic tags to pictures from unidentified sources. In the study area, 3D reconstruction based on close-range photogrammetry is used to restore the 3D geographical information of the crowd-sourced pictures; compared with the previous method, the proposed method improves the median error from 256.7 m to 69.0 m and raises the percentage of geo-localized query pictures within a 50 m error from 17.2% to 43.2%. Another finding is that, with respect to the causes of reconstruction error, shorter distances from the cameras to the main objects in query pictures tend to produce lower errors, and the component of error parallel to the road makes a more significant contribution to the total error. The proposed method is not limited to small areas, and could be expanded to cities and larger areas owing to its flexible parameters.

  17. Source Memory Errors Associated with Reports of Posttraumatic Flashbacks: A Proof of Concept Study

    ERIC Educational Resources Information Center

    Brewin, Chris R.; Huntley, Zoe; Whalley, Matthew G.

    2012-01-01

    Flashbacks are involuntary, emotion-laden images experienced by individuals with posttraumatic stress disorder (PTSD). The qualities of flashbacks could under certain circumstances lead to source memory errors. Participants with PTSD wrote a trauma narrative and reported the experience of flashbacks. They were later presented with stimuli from…

  18. An Application of Multivariate Generalizability in Selection of Mathematically Gifted Students

    ERIC Educational Resources Information Center

    Kim, Sungyeun; Berebitsky, Dan

    2016-01-01

    This study investigates error sources and the effects of each error source to determine optimal weights of the composite score of teacher recommendation letters and self-introduction letters using multivariate generalizability theory. Data were collected from the science education institute for the gifted attached to the university located within…

  19. Estimating Uncertainty in Annual Forest Inventory Estimates

    Treesearch

    Ronald E. McRoberts; Veronica C. Lessard

    1999-01-01

    The precision of annual forest inventory estimates may be negatively affected by uncertainty from a variety of sources including: (1) sampling error; (2) procedures for updating plots not measured in the current year; and (3) measurement errors. The impact of these sources of uncertainty on final inventory estimates is investigated using Monte Carlo simulation...

  20. Source parameters controlling the generation and propagation of potential local tsunamis along the cascadia margin

    USGS Publications Warehouse

    Geist, E.; Yoshioka, S.

    1996-01-01

    The largest uncertainty in assessing hazards from local tsunamis along the Cascadia margin is estimating the possible earthquake source parameters. We investigate which source parameters exert the largest influence on tsunami generation and determine how each parameter affects the amplitude of the local tsunami. The following source parameters were analyzed: (1) type of faulting characteristic of the Cascadia subduction zone, (2) amount of slip during rupture, (3) slip orientation, (4) duration of rupture, (5) physical properties of the accretionary wedge, and (6) influence of secondary faulting. The effect of each of these source parameters on the quasi-static displacement of the ocean floor is determined by using elastic three-dimensional, finite-element models. The propagation of the resulting tsunami is modeled both near the coastline using the two-dimensional (x-t) Peregrine equations, which include the effects of dispersion, and near the source using the three-dimensional (x-y-t) linear long-wave equations. The source parameters that have the largest influence on local tsunami excitation are the shallowness of rupture and the amount of slip. In addition, the orientation of slip has a large effect on the directivity of the tsunami, especially for shallow dipping faults, which consequently has a direct influence on the length of coastline inundated by the tsunami. Duration of rupture, physical properties of the accretionary wedge, and secondary faulting all affect the excitation of tsunamis but to a lesser extent than the shallowness of rupture and the amount and orientation of slip. Assessment of the severity of the local tsunami hazard should take into account that relatively large tsunamis can be generated from anomalous 'tsunami earthquakes' that rupture within the accretionary wedge in comparison to interplate thrust earthquakes of similar magnitude. © 1996 Kluwer Academic Publishers.

  1. Error reporting in transfusion medicine at a tertiary care centre: a patient safety initiative.

    PubMed

    Elhence, Priti; Shenoy, Veena; Verma, Anupam; Sachan, Deepti

    2012-11-01

    Errors in the transfusion process can compromise patient safety. A study was undertaken at our center to identify the errors in the transfusion process and their causes in order to reduce their occurrence by corrective and preventive actions. All near miss, no harm events and adverse events reported in the 'transfusion process' during 1 year study period were recorded, classified and analyzed at a tertiary care teaching hospital in North India. In total, 285 transfusion related events were reported during the study period. Of these, there were four adverse (1.5%), 10 no harm (3.5%) and 271 (95%) near miss events. Incorrect blood component transfusion rate was 1 in 6031 component units. ABO incompatible transfusion rate was one in 15,077 component units issued or one in 26,200 PRBC units issued and acute hemolytic transfusion reaction due to ABO incompatible transfusion was 1 in 60,309 component units issued. Fifty-three percent of the antecedent near miss events were bedside events. Patient sample handling errors were the single largest category of errors (n=94, 33%) followed by errors in labeling and blood component handling and storage in user areas. The actual and near miss event data obtained through this initiative provided us with clear evidence about latent defects and critical points in the transfusion process so that corrective and preventive actions could be taken to reduce errors and improve transfusion safety.

  2. Developing Performance Estimates for High Precision Astrometry with TMT

    NASA Astrophysics Data System (ADS)

    Schoeck, Matthias; Do, Tuan; Ellerbroek, Brent; Herriot, Glen; Meyer, Leo; Suzuki, Ryuji; Wang, Lianqi; Yelda, Sylvana

    2013-12-01

    Adaptive optics on Extremely Large Telescopes will open up many new science cases or expand existing science into regimes unattainable with the current generation of telescopes. One example of this is high-precision astrometry, which has requirements in the range from 10 to 50 micro-arc-seconds for some instruments and science cases. Achieving these requirements imposes stringent constraints on the design of the entire observatory, but also on the calibration procedures, observing sequences and the data analysis techniques. This paper summarizes our efforts to develop a top down astrometry error budget for TMT. It is predominantly developed for the first-light AO system, NFIRAOS, and the IRIS instrument, but many terms are applicable to other configurations as well. Astrometry error sources are divided into 5 categories: Reference source and catalog errors, atmospheric refraction correction errors, other residual atmospheric effects, opto-mechanical errors and focal plane measurement errors. Results are developed in parametric form whenever possible. However, almost every error term in the error budget depends on the details of the astrometry observations, such as whether absolute or differential astrometry is the goal, whether one observes a sparse or crowded field, what the time scales of interest are, etc. Thus, it is not possible to develop a single error budget that applies to all science cases and separate budgets are developed and detailed for key astrometric observations. Our error budget is consistent with the requirements for differential astrometry of tens of micro-arc-seconds for certain science cases. While no show stoppers have been found, the work has resulted in several modifications to the NFIRAOS optical surface specifications and reference source design that will help improve the achievable astrometry precision even further.
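
    As an illustration of how independent terms in such a budget are typically rolled up, the sketch below combines hypothetical category-level contributions in quadrature; the category names follow the abstract, but every number is a placeholder rather than a TMT requirement.

```python
import math

# Hypothetical per-category astrometric error contributions (micro-arcseconds);
# the category names follow the abstract, the numbers are placeholders.
budget = {
    "reference source / catalog": 12.0,
    "atmospheric refraction correction": 8.0,
    "residual atmospheric effects": 15.0,
    "opto-mechanical": 10.0,
    "focal-plane measurement": 9.0,
}

# Independent error terms are combined in quadrature (root-sum-square)
total = math.sqrt(sum(v ** 2 for v in budget.values()))
print(f"total astrometric error: {total:.1f} uas")
```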

  3. The comparison of cervical repositioning errors according to smartphone addiction grades.

    PubMed

    Lee, Jeonhyeong; Seo, Kyochul

    2014-04-01

    [Purpose] The purpose of this study was to compare cervical repositioning errors according to smartphone addiction grades of adults in their 20s. [Subjects and Methods] A survey of smartphone addiction was conducted of 200 adults. Based on the survey results, 30 subjects were chosen to participate in this study, and they were divided into three groups of 10; a Normal Group, a Moderate Addiction Group, and a Severe Addiction Group. After attaching a C-ROM, we measured the cervical repositioning errors of flexion, extension, right lateral flexion and left lateral flexion. [Results] Significant differences in the cervical repositioning errors of flexion, extension, and right and left lateral flexion were found among the Normal Group, Moderate Addiction Group, and Severe Addiction Group. In particular, the Severe Addiction Group showed the largest errors. [Conclusion] The result indicates that as smartphone addiction becomes more severe, a person is more likely to show impaired proprioception, as well as impaired ability to recognize the right posture. Thus, musculoskeletal problems due to smartphone addiction should be resolved through social cognition and intervention, and physical therapeutic education and intervention to educate people about correct postures.

  4. LANDSAT-4/5 image data quality analysis

    NASA Technical Reports Server (NTRS)

    Malaret, E.; Bartolucci, L. A.; Lozano, D. F.; Anuta, P. E.; Mcgillem, C. D.

    1984-01-01

    A LANDSAT Thematic Mapper (TM) quality evaluation study was conducted to identify geometric and radiometric sensor errors in the post-launch environment. The study began with the launch of LANDSAT-4. Several error conditions were found, including band-to-band misregistration and detector-to-detector radiometric calibration errors. A similar analysis was made for the LANDSAT-5 Thematic Mapper and compared with the results for LANDSAT-4. The remaining band-to-band misregistration was found to be within tolerances, and detector-to-detector calibration errors were not severe. More coherent noise signals were observed in TM-5 than in TM-4, although the amplitude was generally less. The scan direction differences observed in TM-4 were still evident in TM-5. The largest effect was in Band 4, where nearly a one-digital-count difference was observed. Resolution estimation was carried out using roads in TM-5 for the primary focal plane bands rather than field edges as in TM-4. Estimates using roads gave better resolution. Thermal IR band calibration studies were conducted, and new nonlinear calibration procedures were defined for TM-5. The overall conclusion is that there are no first-order errors in TM-5 and any remaining problems are second or third order.

  5. Surface characterization protocol for precision aspheric optics

    NASA Astrophysics Data System (ADS)

    Sarepaka, RamaGopal V.; Sakthibalan, Siva; Doodala, Somaiah; Panwar, Rakesh S.; Kotaria, Rajendra

    2017-10-01

    In advanced optical instrumentation, aspherics provide an effective performance alternative. Aspheric design, fabrication, and surface metrology are complementary, iterative processes in precision aspheric development. As in fabrication, a holistic approach to aspheric surface characterization is adopted to evaluate the actual surface error and to deliver aspheric optics with the desired surface quality. Precision optical surfaces are characterized by profilometry or by interferometry. Aspheric profiles are characterized by contact profilometers through linear surface scans that analyze their form, figure, and finish errors. One must ensure that the surface characterization procedure does not add to the resident profile errors generated during aspheric surface fabrication. This presentation examines the errors introduced after surface generation and during profilometry of aspheric profiles; the effort aims to identify the sources of error and to optimize the metrology process. Sources of error during profilometry may include profilometer settings, work-piece placement on the profilometer stage, selection of zenith/nadir points of aspheric profiles, metrology protocols, clear-aperture diameter analysis, computational limitations of the profiler, software issues, etc. At OPTICA, a PGI 1200 FTS contact profilometer (Taylor-Hobson make) is used for this study. Precision optics of various profiles are studied, with due attention to possible sources of error during characterization and with a multi-directional scan approach for uniformity and repeatability of error estimation. This study provides insight into aspheric surface characterization and helps establish an optimal aspheric surface production methodology.

  6. Information-Gathering Patterns Associated with Higher Rates of Diagnostic Error

    ERIC Educational Resources Information Center

    Delzell, John E., Jr.; Chumley, Heidi; Webb, Russell; Chakrabarti, Swapan; Relan, Anju

    2009-01-01

    Diagnostic errors are an important source of medical errors. Problematic information-gathering is a common cause of diagnostic errors among physicians and medical students. The objectives of this study were to (1) determine if medical students' information-gathering patterns formed clusters of similar strategies, and if so (2) to calculate the…

  7. More on Systematic Error in a Boyle's Law Experiment

    ERIC Educational Resources Information Center

    McCall, Richard P.

    2012-01-01

    A recent article in "The Physics Teacher" describes a method for analyzing a systematic error in a Boyle's law laboratory activity. Systematic errors are important to consider in physics labs because they tend to bias the results of measurements. There are numerous laboratory examples and resources that discuss this common source of error.

  8. Adult age differences in unconscious transference: source confusion or identity blending?

    PubMed

    Perfect, Timothy J; Harris, Lucy J

    2003-06-01

    Eyewitnesses are known often to falsely identify a familiar but innocent bystander when asked to pick out a perpetrator from a lineup. Such unconscious transference errors have been attributed to either identity confusions at encoding or source retrieval errors. Three experiments contrasted younger and older adults in their susceptibility to such misidentifications. Participants saw photographs of perpetrators, then a series of mug shots of innocent bystanders. A week later, they saw lineups containing bystanders (and others containing perpetrators in Experiment 3) and were asked whether any of the perpetrators were present. When younger faces were used as stimuli (Experiments 1 and 3), older adults showed higher rates of transference errors. When older faces were used as stimuli (Experiments 2 and 3), no such age effects in rates of unconscious transference were apparent. In addition, older adults in Experiment 3 showed an own-age bias effect for correct identification of targets. Unconscious transference errors were found to be due to both source retrieval errors and identity confusions, but age-related increases were found only in the latter.

  9. Investigating the error sources of the online state of charge estimation methods for lithium-ion batteries in electric vehicles

    NASA Astrophysics Data System (ADS)

    Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu

    2018-02-01

    State of charge (SOC) estimation is generally acknowledged as one of the most important functions of the battery management system for lithium-ion batteries in new energy vehicles. Although considerable effort has gone into making the various online SOC estimation methods as accurate as possible within limited on-chip resources, little of the literature discusses the error sources of those methods. This paper first reviews the commonly studied SOC estimation methods using a conventional classification. A novel perspective focusing on error analysis of the SOC estimation methods is then proposed: the methods are analyzed from the viewpoints of the measured values, models, algorithms, and state parameters. Error flow charts are proposed to trace the error sources from signal measurement through the models and algorithms for the online SOC estimation methods widely used in new energy vehicles. Finally, taking working conditions into consideration, the choice of more reliable and applicable SOC estimation methods is discussed, and future development of the most promising online methods is suggested.
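
    As a concrete example of one error path in this classification (a measured value feeding an algorithm), the sketch below shows how a small constant current-sensor bias accumulates into SOC error under plain Coulomb counting; the cell capacity, bias, and current profile are assumed values.

```python
import numpy as np

capacity_ah = 50.0           # assumed cell capacity
dt_s = 1.0                   # sampling interval
bias_a = 0.05                # assumed constant current-sensor bias

t = np.arange(0.0, 3600.0, dt_s)
true_current = 10.0 * np.sin(2.0 * np.pi * t / 1800.0)    # assumed drive cycle (A)
measured_current = true_current + bias_a

def coulomb_count(current_a, soc0=0.8):
    """SOC by integrating current over time (positive current = discharge)."""
    return soc0 - np.cumsum(current_a) * dt_s / (capacity_ah * 3600.0)

soc_true = coulomb_count(true_current)
soc_est = coulomb_count(measured_current)
print("SOC error after 1 h: %.4f" % (soc_est[-1] - soc_true[-1]))   # about -0.001
```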

  10. Treatability Aspects of Urban Stormwater Stressors

    EPA Science Inventory

    Eleven years into the 21st century, pollution from diffuse sources (pollution from contaminants picked up and carried into surface waters by stormwater runoff) remains the nation's largest source of water quality problems. Scientists and engineers still seek solutions that will ...

  11. [The error, source of learning].

    PubMed

    Joyeux, Stéphanie; Bohic, Valérie

    2016-05-01

    The error itself is not recognised as a fault. It is the intentionality which differentiates between an error and a fault. An error is unintentional while a fault is a failure to respect known rules. The risk of error is omnipresent in health institutions. Public authorities have therefore set out a series of measures to reduce this risk. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  12. Types of Possible Survey Errors in Estimates Published in the Weekly Natural Gas Storage Report

    EIA Publications

    2016-01-01

    This document lists types of potential errors in EIA estimates published in the WNGSR. Survey errors are an unavoidable aspect of data collection. Error is inherent in all collected data, regardless of the source of the data and the care and competence of data collectors. The type and extent of error depends on the type and characteristics of the survey.

  13. A spectrally tunable solid-state source for radiometric, photometric, and colorimetric applications

    NASA Astrophysics Data System (ADS)

    Fryc, Irena; Brown, Steven W.; Eppeldauer, George P.; Ohno, Yoshihiro

    2004-10-01

    A spectrally tunable light source using a large number of LEDs and an integrating sphere has been designed and is being developed at NIST. The source is designed to be capable of producing any spectral distribution, mimicking various light sources in the visible region by feedback control of the individual LEDs. The output spectral irradiance or radiance of the source will be calibrated by a reference instrument, and the source will be used as a spectroradiometric as well as a photometric and colorimetric standard. Using the tunable source to mimic the spectra of display colors, for example, rather than a traditional incandescent standard lamp, for calibration of colorimeters can significantly reduce the spectral mismatch errors of the colorimeter when measuring displays. A series of simulations has been conducted to predict the performance of the designed tunable source when used for calibration of colorimeters. The results indicate that the errors can be reduced by an order of magnitude compared with those obtained when the colorimeters are calibrated against Illuminant A. Stray-light errors of a spectroradiometer can also be effectively reduced by using the tunable source to produce a blackbody spectrum at a higher temperature (e.g., 9000 K). The source can also approximate various CIE daylight illuminants and common lamp spectral distributions for other photometric and colorimetric applications.

  14. A mass transfer model of ethanol emission from thin layers of corn silage

    USDA-ARS?s Scientific Manuscript database

    Dairies may be important emission sources for volatile organic compounds (VOCs). Reactive organic gases (ROG) emissions from dairy farms are the second largest source responsible for ozone formation in the California’s San Joaquin Valley. Animal feed was found to be a major ROG emission source on da...

  15. Restoration of the ASCA Source Position Accuracy

    NASA Astrophysics Data System (ADS)

    Gotthelf, E. V.; Ueda, Y.; Fujimoto, R.; Kii, T.; Yamaoka, K.

    2000-11-01

    We present a calibration of the absolute pointing accuracy of the Advanced Satellite for Cosmology and Astrophysics (ASCA), which allows us to compensate for a large error (up to 1') in the derived source coordinates. We parameterize a temperature-dependent deviation of the attitude solution that is responsible for this error. By analyzing ASCA coordinates of 100 bright active galactic nuclei, we show that it is possible to reduce the uncertainty in the sky position for any given observation by a factor of 4. The revised 90% error circle radius is then 12", consistent with preflight specifications, effectively restoring the full ASCA pointing accuracy. Herein, we derive an algorithm which compensates for this attitude error and present an internet-based table that can be used to correct post facto the coordinates of all ASCA observations. While the above error circle is strictly applicable to data taken with the on-board Solid-state Imaging Spectrometers (SISs), similar coordinate corrections are derived for data obtained with the Gas Imaging Spectrometers (GISs), which, however, have additional instrumental uncertainties. The 90% error circle radius for the central 20' diameter of the GIS is 24". The large reduction in the error circle area for the two instruments offers the opportunity to greatly enhance the search for X-ray counterparts at other wavelengths. This has important implications for current and future ASCA source catalogs and surveys.

  16. Quality of Best Possible Medication History upon Admission to Hospital: Comparison of Nurses and Pharmacy Students and Consideration of National Quality Indicators.

    PubMed

    Sproul, Ashley; Goodine, Carole; Moore, David; McLeod, Amy; Gordon, Jacqueline; Digby, Jennifer; Stoica, George

    2018-01-01

    Medication reconciliation at transitions of care increases patient safety. Collection of an accurate best possible medication history (BPMH) on admission is a key step. National quality indicators are used as surrogate markers for BPMH quality, but no literature on their accuracy exists. Obtaining a high-quality BPMH is often labour- and resource-intensive. Pharmacy students are now being assigned to obtain BPMHs, as a cost-effective means to increase BPMH completion, despite limited information to support the quality of BPMHs obtained by students relative to other health care professionals. To determine whether the national quality indicator of using more than one source to complete a BPMH is a true marker of quality and to assess whether BPMHs obtained by pharmacy students were of quality equal to those obtained by nurses. This prospective trial compared BPMHs for the same group of patients collected by nurses and by trained pharmacy students in the emergency departments of 2 sites within a large health network over a 2-month period (July and August 2016). Discrepancies between the 2 versions were identified by a pharmacist, who determined which party (nurse, pharmacy student, or both) had made an error. A panel of experts reviewed the errors and ranked their severity. BPMHs were prepared for a total of 40 patients. Those prepared by nurses were more likely to contain an error than those prepared by pharmacy students (171 versus 43 errors, p = 0.006). There was a nonsignificant trend toward less severe errors in BPMHs completed by pharmacy students. There was no significant difference in the mean number of errors in relation to the specified quality indicator (mean of 2.7 errors for BPMHs prepared from 1 source versus 4.8 errors for BPMHs prepared from ≥ 2 sources, p = 0.08). The surrogate marker (number of BPMH sources) may not reflect BPMH quality. However, it appears that BPMHs prepared by pharmacy students had fewer errors and were of similar quality (in terms of clinically significant errors) relative to those prepared by nurses.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Passarge, M; Fix, M K; Manser, P

    Purpose: To create and test an accurate EPID-frame-based VMAT QA metric to detect gross dose errors in real-time and to provide information about the source of error. Methods: A Swiss cheese model was created for an EPID-based real-time QA process. The system compares a treatment-plan-based reference set of EPID images with images acquired over each 2° gantry angle interval. The metric utilizes a sequence of independent, consecutively executed error detection methods: a masking technique that verifies infield radiation delivery and ensures no out-of-field radiation; output normalization checks at two different stages; global image alignment to quantify rotation, scaling and translation; standard gamma evaluation (3%, 3 mm); and pixel intensity deviation checks including and excluding high dose gradient regions. Tolerances for each test were determined. For algorithm testing, twelve different types of errors were selected to modify the original plan. Corresponding predictions for each test case were generated, which included measurement-based noise. Each test case was run multiple times (with different noise per run) to assess the ability to detect introduced errors. Results: Averaged over five test runs, 99.1% of all plan variations that resulted in patient dose errors were detected within 2° and 100% within 4° (∼1% of patient dose delivery). Including cases that led to slightly modified but clinically equivalent plans, 91.5% were detected by the system within 2°. Based on the type of method that detected the error, determination of error sources was achieved. Conclusion: An EPID-based during-treatment error detection system for VMAT deliveries was successfully designed and tested. The system utilizes a sequence of methods to identify and prevent gross treatment delivery errors. The system was inspected for robustness with realistic noise variations, demonstrating that it has the potential to detect a large majority of errors in real-time and indicate the error source. J. V. Siebers receives funding support from Varian Medical Systems.
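
    For context on the gamma test mentioned above, here is a minimal 1-D gamma-index computation (3%, 3 mm, global dose normalization) on synthetic profiles; it is a textbook brute-force implementation with made-up profiles, not the EPID-frame metric of the abstract.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dose_tol=0.03, dist_tol_mm=3.0):
    """Brute-force global gamma index of an evaluated profile against a reference."""
    d_norm = dose_tol * d_ref.max()
    gamma = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dist_tol_mm) ** 2
        dose2 = ((d_eval - dr) / d_norm) ** 2
        gamma[i] = np.sqrt(np.min(dist2 + dose2))
    return gamma

x = np.linspace(-50.0, 50.0, 201)                    # mm
ref = np.exp(-(x / 20.0) ** 2)                       # synthetic reference profile
ev = 1.02 * np.exp(-((x - 1.0) / 20.0) ** 2)         # 2% scaled, 1 mm shifted copy
g = gamma_1d(x, ref, x, ev)
print("gamma pass rate (<= 1): %.1f%%" % (100.0 * np.mean(g <= 1.0)))
```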

  18. Quantifying uncertainty in carbon and nutrient pools of coarse woody debris

    NASA Astrophysics Data System (ADS)

    See, C. R.; Campbell, J. L.; Fraver, S.; Domke, G. M.; Harmon, M. E.; Knoepp, J. D.; Woodall, C. W.

    2016-12-01

    Woody detritus constitutes a major pool of both carbon and nutrients in forested ecosystems. Estimating coarse wood stocks relies on many assumptions, even when full surveys are conducted. Researchers rarely report error in coarse wood pool estimates, despite the importance to ecosystem budgets and modelling efforts. To date, no study has attempted a comprehensive assessment of error rates and uncertainty inherent in the estimation of this pool. Here, we use Monte Carlo analysis to propagate the error associated with the major sources of uncertainty present in the calculation of coarse wood carbon and nutrient (i.e., N, P, K, Ca, Mg, Na) pools. We also evaluate individual sources of error to identify the importance of each source of uncertainty in our estimates. We quantify sampling error by comparing the three most common field methods used to survey coarse wood (two transect methods and a whole-plot survey). We quantify the measurement error associated with length and diameter measurement, and technician error in species identification and decay class using plots surveyed by multiple technicians. We use previously published values of model error for the four most common methods of volume estimation: Smalian's, conical frustum, conic paraboloid, and average-of-ends. We also use previously published values for error in the collapse ratio (cross-sectional height/width) of decayed logs that serves as a surrogate for the volume remaining. We consider sampling error in chemical concentration and density for all decay classes, using distributions from both published and unpublished studies. Analytical uncertainty is calculated using standard reference plant material from the National Institute of Standards. Our results suggest that technician error in decay classification can have a large effect on uncertainty, since many of the error distributions included in the calculation (e.g. density, chemical concentration, volume-model selection, collapse ratio) are decay-class specific.
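
    A minimal sketch of the kind of Monte Carlo propagation described: the carbon pool of a single log is volume × wood density × carbon concentration, with each input drawn from an assumed error distribution and volume computed with Smalian's formula; the distributions and piece dimensions are placeholders, not the study's values.

```python
import numpy as np

rng = np.random.default_rng(42)
n_draws = 10_000

# One log: measured length and end diameters, with assumed measurement errors
length_m = rng.normal(6.00, 0.05, n_draws)
d_large_m = rng.normal(0.40, 0.01, n_draws)
d_small_m = rng.normal(0.28, 0.01, n_draws)

# Smalian's formula: volume = mean of the two end areas times length
end_area_mean = np.pi / 4.0 * (d_large_m**2 + d_small_m**2) / 2.0
volume_m3 = end_area_mean * length_m

# Assumed decay-class-specific density and carbon-concentration distributions
density_kg_m3 = rng.normal(380.0, 40.0, n_draws)
carbon_frac = rng.normal(0.48, 0.02, n_draws)

carbon_kg = volume_m3 * density_kg_m3 * carbon_frac
lo, hi = np.percentile(carbon_kg, [2.5, 97.5])
print(f"C pool: {carbon_kg.mean():.1f} kg (95% interval {lo:.1f}-{hi:.1f} kg)")
```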

  19. Reliability of fish size estimates obtained from multibeam imaging sonar

    USGS Publications Warehouse

    Hightower, Joseph E.; Magowan, Kevin J.; Brown, Lori M.; Fox, Dewayne A.

    2013-01-01

    Multibeam imaging sonars have considerable potential for use in fisheries surveys because the video-like images are easy to interpret, and they contain information about fish size, shape, and swimming behavior, as well as characteristics of occupied habitats. We examined images obtained using a dual-frequency identification sonar (DIDSON) multibeam sonar for Atlantic sturgeon Acipenser oxyrinchus oxyrinchus, striped bass Morone saxatilis, white perch M. americana, and channel catfish Ictalurus punctatus of known size (20–141 cm) to determine the reliability of length estimates. For ranges up to 11 m, percent measurement error (sonar estimate – total length)/total length × 100 varied by species but was not related to the fish's range or aspect angle (orientation relative to the sonar beam). Least-square mean percent error was significantly different from 0.0 for Atlantic sturgeon (x̄  =  −8.34, SE  =  2.39) and white perch (x̄  = 14.48, SE  =  3.99) but not striped bass (x̄  =  3.71, SE  =  2.58) or channel catfish (x̄  = 3.97, SE  =  5.16). Underestimating lengths of Atlantic sturgeon may be due to difficulty in detecting the snout or the longer dorsal lobe of the heterocercal tail. White perch was the smallest species tested, and it had the largest percent measurement errors (both positive and negative) and the lowest percentage of images classified as good or acceptable. Automated length estimates for the four species using Echoview software varied with position in the view-field. Estimates tended to be low at more extreme azimuthal angles (fish's angle off-axis within the view-field), but mean and maximum estimates were highly correlated with total length. Software estimates also were biased by fish images partially outside the view-field and when acoustic crosstalk occurred (when a fish perpendicular to the sonar and at relatively close range is detected in the side lobes of adjacent beams). These sources of bias are apparent when files are processed manually and can be filtered out when producing automated software estimates. Multibeam sonar estimates of fish size should be useful for research and management if these potential sources of bias and imprecision are addressed.

  20. Error Modeling of Multi-baseline Optical Truss. Part II; Application to SIM Metrology Truss Field Dependent Error

    NASA Technical Reports Server (NTRS)

    Zhang, Liwei Dennis; Milman, Mark; Korechoff, Robert

    2004-01-01

    The current design of the Space Interferometry Mission (SIM) employs a 19 laser-metrology-beam system (also called the L19 external metrology truss) to monitor changes of distances between the fiducials of the flight system's multiple baselines. The function of the external metrology truss is to aid in the determination of the time-variations of the interferometer baseline. The largest contributor to truss error occurs in SIM wide-angle observations, when the articulation of the siderostat mirrors (in order to gather starlight from different sky coordinates) brings to light systematic errors due to offsets at the level of instrument components (which include corner cube retro-reflectors, etc.). This error is labeled external metrology wide-angle field-dependent error. A physics-based model of the field-dependent error at the single-metrology-gauge level is developed and linearly propagated to errors in interferometer delay. In this manner, the delay error sensitivity to various error parameters or their combinations can be studied using eigenvalue/eigenvector analysis. Validation of the physics-based field-dependent model on the SIM testbed also lends support to the present approach. As a first example, a dihedral error model is developed for the corner cubes (CC) attached to the siderostat mirrors. The delay errors due to this effect can then be characterized using the eigenvectors of the composite CC dihedral error. The essence of the linear error model is contained in an error-mapping matrix. A corresponding Zernike component matrix approach is developed in parallel, first for convenience in describing the RMS of errors across the field-of-regard (FOR), and second for convenience in combining with additional models. Average and worst-case residual errors are computed when various orders of field-dependent terms are removed from the delay error. Results of the residual errors are important in arriving at external metrology system component requirements. Double CCs with ideally coincident vertices reside with the siderostat; the non-common vertex error (NCVE) is treated as a second example. Finally, the combination of models and various other errors are discussed.
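
    To illustrate the linear error model idea, the sketch below builds a toy error-mapping matrix from component offset parameters to delay errors at a set of field points and uses its singular value decomposition (the eigenvectors of the composite error) to find the parameter combination producing the worst-case delay error; the matrix entries are random placeholders, not SIM sensitivities.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy error-mapping matrix M: delay error at n_field field points (rows) per unit
# value of each of n_param component error parameters (columns).
n_field, n_param = 25, 6
M = rng.normal(0.0, 1.0, (n_field, n_param))

# The right singular vectors rank parameter combinations by how strongly they
# map into delay error across the field of regard.
U, s, Vt = np.linalg.svd(M, full_matrices=False)
worst_combination = Vt[0]
print("singular values:", np.round(s, 2))
print("worst-case parameter combination:", np.round(worst_combination, 2))

# RMS delay error across the field for a given vector of parameter offsets
p = 0.1 * worst_combination
print("rms delay error:", float(np.sqrt(np.mean((M @ p) ** 2))))
```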

  1. Registration of pencil beam proton radiography data with X-ray CT.

    PubMed

    Deffet, Sylvain; Macq, Benoît; Righetto, Roberto; Vander Stappen, François; Farace, Paolo

    2017-10-01

    Proton radiography seems to be a promising tool for assessing the quality of the stopping power computation in proton therapy. However, range error maps obtained on the basis of proton radiographs are very sensitive to small misalignments between the planning CT and the proton radiography acquisitions. In order to be able to mitigate misalignment in postprocessing, the authors implemented a fast method for registration between pencil-beam proton radiography data obtained with a multilayer ionization chamber (MLIC) and an X-ray CT acquired on a head phantom. The registration was performed by optimizing a cost function which compares the acquired data with simulated integral depth-dose curves. Two methodologies were considered, one based on dual orthogonal projections and the other on a single projection. For each methodology, the robustness of the registration algorithm with respect to three confounding factors (measurement noise, CT calibration errors, and spot spacing) was investigated by testing the accuracy of the method through simulations based on a CT scan of a head phantom. The present registration method showed robust convergence towards the optimal solution. For the level of measurement noise and the uncertainty in the stopping power computation expected in proton radiography using a MLIC, the accuracy appeared to be better than 0.3° for angles and 0.3 mm for translations when the appropriate cost function was used. The spot spacing analysis showed that a spacing larger than the 5 mm used by other authors for the investigation of a MLIC for proton radiography led to results with absolute accuracy better than 0.3° for angles and 1 mm for translations when orthogonal proton radiographs were fed into the algorithm. In the case of a single projection, 6 mm was the largest spot spacing presenting an acceptable registration accuracy. For registration of proton radiography data with X-ray CT, the use of a direct ray-tracing algorithm to compute sums of squared differences and corrections of range errors showed very good accuracy and robustness with respect to three confounding factors: measurement noise, calibration error, and spot spacing. It is therefore a suitable algorithm to use in the in vivo range verification framework, making it possible to separate, in postprocessing, the proton range uncertainty due to setup errors from the other sources of uncertainty. © 2017 American Association of Physicists in Medicine.
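
    The registration idea above, minimizing a comparison (e.g., a sum of squared differences) between acquired integral depth-dose data and curves simulated from the CT, can be illustrated with a toy example. The sketch below reduces the transform to a single lateral shift and replaces the ray-traced simulation with a synthetic profile; it is not the authors' implementation.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Illustrative sketch: register proton radiography data to a CT by minimizing
# the sum of squared differences (SSD) between measured integral depth-dose
# (IDD) values and IDDs simulated from the CT for a candidate transform.
# Here the transform is a single lateral shift and the "simulation" is a
# synthetic smooth profile standing in for ray tracing through the CT.
x = np.linspace(-40.0, 40.0, 161)                    # lateral spot positions [mm]
simulate_idd = lambda shift: np.exp(-((x - shift) / 15.0) ** 2)

true_shift = 2.4
measured = simulate_idd(true_shift) + np.random.default_rng(1).normal(0, 0.01, x.size)

def ssd_cost(shift):
    return np.sum((measured - simulate_idd(shift)) ** 2)

result = minimize_scalar(ssd_cost, bounds=(-10.0, 10.0), method="bounded")
print("recovered shift [mm]:", result.x)             # close to 2.4 despite the noise
```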

  2. Quantifying methane and nitrous oxide emissions from the UK using a dense monitoring network

    NASA Astrophysics Data System (ADS)

    Ganesan, A. L.; Manning, A. J.; Grant, A.; Young, D.; Oram, D. E.; Sturges, W. T.; Moncrieff, J. B.; O'Doherty, S.

    2015-01-01

    The UK is one of several countries around the world that has enacted legislation to reduce its greenhouse gas emissions. Monitoring of emissions has been done through a detailed sectoral-level bottom-up inventory (UK National Atmospheric Emissions Inventory, NAEI), from which national totals are submitted yearly to the United Nations Framework Convention on Climate Change. In parallel, the UK government has funded four atmospheric monitoring stations to infer emissions through top-down methods that assimilate atmospheric observations. In this study, we present top-down emissions of methane (CH4) and nitrous oxide (N2O) for the UK and Ireland over the period August 2012 to August 2014. We used a hierarchical Bayesian inverse framework to infer fluxes as well as a set of covariance parameters that describe uncertainties in the system. We inferred average UK emissions of 2.08 (1.72-2.47) Tg yr-1 CH4 and 0.105 (0.087-0.127) Tg yr-1 N2O and found our derived estimates to be generally lower than the inventory. We used sectoral distributions from the NAEI to determine whether these discrepancies can be attributed to specific source sectors. Because of the distinct distributions of the two dominant CH4 emissions sectors in the UK, agriculture and waste, we found that the inventory may be overestimated in agricultural CH4 emissions. We also found that N2O fertilizer emissions from the NAEI may be overestimated, and we derived a significant seasonal cycle in emissions. This seasonality is likely due to seasonality in fertilizer application and in environmental drivers such as temperature and rainfall, which are not reflected in the annual-resolution inventory. Through the hierarchical Bayesian inverse framework, we quantified uncertainty covariance parameters and emphasized their importance for high-resolution emissions estimation. We inferred average model errors of approximately 20 and 0.4 ppb and correlation timescales of 1.0 (0.72-1.43) and 2.6 (1.9-3.9) days for CH4 and N2O, respectively. These errors are a combination of transport model errors and errors due to unresolved emissions processes in the inventory. We found the largest CH4 errors at the Tacolneston station in eastern England, possibly due to sporadic emissions from landfills and offshore gas in the North Sea.

  3. Brain-based individual difference measures of reading skill in deaf and hearing adults.

    PubMed

    Mehravari, Alison S; Emmorey, Karen; Prat, Chantel S; Klarman, Lindsay; Osterhout, Lee

    2017-07-01

    Most deaf children and adults struggle to read, but some deaf individuals do become highly proficient readers. There is disagreement about the specific causes of reading difficulty in the deaf population, and consequently, disagreement about the effectiveness of different strategies for teaching reading to deaf children. Much of the disagreement surrounds the question of whether deaf children read in similar or different ways as hearing children. In this study, we begin to answer this question by using real-time measures of neural language processing to assess if deaf and hearing adults read proficiently in similar or different ways. Hearing and deaf adults read English sentences with semantic, grammatical, and simultaneous semantic/grammatical errors while event-related potentials (ERPs) were recorded. The magnitude of individuals' ERP responses was compared to their standardized reading comprehension test scores, and potentially confounding variables like years of education, speechreading skill, and language background of deaf participants were controlled for. The best deaf readers had the largest N400 responses to semantic errors in sentences, while the best hearing readers had the largest P600 responses to grammatical errors in sentences. These results indicate that equally proficient hearing and deaf adults process written language in different ways, suggesting there is little reason to assume that literacy education should necessarily be the same for hearing and deaf children. The results also show that the most successful deaf readers focus on semantic information while reading, which suggests aspects of education that may promote improved literacy in the deaf population. Copyright © 2017 Elsevier Ltd. All rights reserved.

  4. Uncertainty Analysis of Seebeck Coefficient and Electrical Resistivity Characterization

    NASA Technical Reports Server (NTRS)

    Mackey, Jon; Sehirlioglu, Alp; Dynys, Fred

    2014-01-01

    In order to provide a complete description of a material's thermoelectric power factor, an uncertainty interval is required in addition to the measured nominal value. The uncertainty may contain sources of measurement error including systematic bias error and precision error of a statistical nature. The work focuses specifically on the popular ZEM-3 (Ulvac Technologies) measurement system, but the methods apply to any measurement system. The analysis accounts for sources of systematic error including sample preparation tolerance, measurement probe placement, the thermocouple cold-finger effect, and measurement parameters, in addition to uncertainty of a statistical nature. Complete uncertainty analysis of a measurement system allows for more reliable comparison of measurement data between laboratories.
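
    The abstract does not give the combination formula, but a common way to turn Seebeck-coefficient and resistivity uncertainties into a power-factor uncertainty is first-order propagation for PF = S²/ρ with uncorrelated inputs. The sketch below assumes that approach; all numbers are hypothetical.

```python
import math

def power_factor_uncertainty(S, u_S, rho, u_rho):
    """Propagate combined (systematic + statistical) standard uncertainties of
    the Seebeck coefficient S [V/K] and electrical resistivity rho [ohm*m]
    into the power factor PF = S**2 / rho, assuming uncorrelated inputs."""
    pf = S ** 2 / rho
    rel = math.sqrt((2.0 * u_S / S) ** 2 + (u_rho / rho) ** 2)
    return pf, pf * rel

# Hypothetical numbers: S = 200 uV/K +/- 4 uV/K, rho = 1.0e-5 ohm*m +/- 3e-7 ohm*m.
pf, u_pf = power_factor_uncertainty(200e-6, 4e-6, 1.0e-5, 3e-7)
print(f"PF = {pf:.3e} +/- {u_pf:.3e} W/(m*K^2)")
```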

  5. Phase noise optimization in temporal phase-shifting digital holography with partial coherence light sources and its application in quantitative cell imaging.

    PubMed

    Remmersmann, Christian; Stürwald, Stephan; Kemper, Björn; Langehanenberg, Patrik; von Bally, Gert

    2009-03-10

    In temporal phase-shifting-based digital holographic microscopy, high-resolution phase contrast imaging requires optimized conditions for hologram recording and phase retrieval. To optimize the phase resolution, for the example of a variable three-step algorithm, a theoretical analysis on statistical errors, digitalization errors, uncorrelated errors, and errors due to a misaligned temporal phase shift is carried out. In a second step the theoretically predicted results are compared to the measured phase noise obtained from comparative experimental investigations with several coherent and partially coherent light sources. Finally, the applicability for noise reduction is demonstrated by quantitative phase contrast imaging of pancreas tumor cells.

  6. Insights from Synthetic Star-forming Regions. II. Verifying Dust Surface Density, Dust Temperature, and Gas Mass Measurements With Modified Blackbody Fitting

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Koepferl, Christine M.; Robitaille, Thomas P.; Dale, James E., E-mail: koepferl@usm.lmu.de

    We use a large data set of realistic synthetic observations (produced in Paper I of this series) to assess how observational techniques affect the measurement of physical properties of star-forming regions. In this part of the series (Paper II), we explore the reliability of the measured total gas mass, dust surface density and dust temperature maps derived from modified blackbody fitting of synthetic Herschel observations. We find from our pixel-by-pixel analysis of the measured dust surface density and dust temperature a worrisome error spread especially close to star formation sites and low-density regions, where for those “contaminated” pixels the surface densities can be under/overestimated by up to three orders of magnitude. In light of this, we recommend treating the pixel-based results from this technique with caution in regions with active star formation. In regions of high background typical in the inner Galactic plane, we are not able to recover reliable surface density maps of individual synthetic regions, since low-mass regions are lost in the far-infrared background. When measuring the total gas mass of regions in moderate background, we find that modified blackbody fitting works well (absolute error: +9%; −13%) up to 10 kpc distance (errors increase with distance). Commonly, the initial images are convolved to the largest common beam size, which smears contaminated pixels over large areas. The resulting information loss makes this commonly used technique less verifiable, since χ² values can no longer be used as a quality indicator for a fitted pixel. Our control measurements of the total gas mass (without the step of convolution to the largest common beam size) produce similar results (absolute error: +20%; −7%) while having much lower median errors, especially for the high-mass stellar feedback phase. In upcoming papers (Paper III; Paper IV) of this series we test the reliability of measured star formation rates with direct and indirect techniques.

  7. Wind Prediction Accuracy for Air Traffic Management Decision Support Tools

    NASA Technical Reports Server (NTRS)

    Cole, Rod; Green, Steve; Jardin, Matt; Schwartz, Barry; Benjamin, Stan

    2000-01-01

    The performance of Air Traffic Management (ATM) and flight deck decision support tools depends in large part on the accuracy of the supporting 4D trajectory predictions. This is particularly relevant to conflict prediction and active advisories for the resolution of conflicts and for conformance with traffic-flow management flow-rate constraints (e.g., arrival metering / required time of arrival). Flight test results have indicated that wind prediction errors may represent the largest source of trajectory prediction error. The tests also showed that relatively large errors (e.g., greater than 20 knots), existing in pockets of space and time critical to ATM DST performance (one or more sectors, greater than 20 minutes), are inadequately represented by the classic RMS aggregate prediction-accuracy studies of the past. To facilitate the identification and reduction of DST-critical wind-prediction errors, NASA has led a collaborative research and development activity with MIT Lincoln Laboratory and the Forecast Systems Laboratory of the National Oceanic and Atmospheric Administration (NOAA). This activity, begun in 1996, has focused on the development of key metrics for ATM DST performance, assessment of wind-prediction skill for state-of-the-art systems, and development/validation of system enhancements to improve skill. A 13-month study was conducted for the Denver Center airspace in 1997. Two complementary wind-prediction systems were analyzed and compared to the forecast performance of the then-standard 60 km Rapid Update Cycle - version 1 (RUC-1). One system, developed by NOAA, was the prototype 40-km RUC-2 that became operational at NCEP in 1999. RUC-2 introduced a faster cycle (1 hr vs. 3 hr) and improved mesoscale physics. The second system, Augmented Winds (AW), is a prototype en route wind application developed by MIT Lincoln Laboratory based on the Integrated Terminal Wind System (ITWS). AW is run at a local facility (Center) level and updates RUC predictions based on an optimal interpolation of the latest ACARS reports since the RUC run. This paper presents an overview of the study's results, including the identification and use of new wind-prediction accuracy metrics that are key to ATM DST performance.

  8. Logic-based assessment of the compatibility of UMLS ontology sources

    PubMed Central

    2011-01-01

    Background The UMLS Metathesaurus (UMLS-Meta) is currently the most comprehensive effort for integrating independently-developed medical thesauri and ontologies. UMLS-Meta is being used in many applications, including PubMed and ClinicalTrials.gov. The integration of new sources combines automatic techniques, expert assessment, and auditing protocols. The automatic techniques currently in use, however, are mostly based on lexical algorithms and often disregard the semantics of the sources being integrated. Results In this paper, we argue that UMLS-Meta’s current design and auditing methodologies could be significantly enhanced by taking into account the logic-based semantics of the ontology sources. We provide empirical evidence suggesting that UMLS-Meta in its 2009AA version contains a significant number of errors; these errors become immediately apparent if the rich semantics of the ontology sources is taken into account, manifesting themselves as unintended logical consequences that follow from the ontology sources together with the information in UMLS-Meta. We then propose general principles and specific logic-based techniques to effectively detect and repair such errors. Conclusions Our results suggest that the methodologies employed in the design of UMLS-Meta are not only very costly in terms of human effort, but also error-prone. The techniques presented here can be useful for both reducing human effort in the design and maintenance of UMLS-Meta and improving the quality of its contents. PMID:21388571

  9. Flight Test Results: CTAS Cruise/Descent Trajectory Prediction Accuracy for En route ATC Advisories

    NASA Technical Reports Server (NTRS)

    Green, S.; Grace, M.; Williams, D.

    1999-01-01

    The Center/TRACON Automation System (CTAS), under development at NASA Ames Research Center, is designed to assist controllers with the management and control of air traffic transitioning to/from congested airspace. This paper focuses on the transition from the en route environment to high-density terminal airspace under a time-based arrival-metering constraint. Two flight tests were conducted at the Denver Air Route Traffic Control Center (ARTCC) to study trajectory-prediction accuracy, the key to accurate Decision Support Tool advisories such as conflict detection/resolution and fuel-efficient metering conformance. In collaboration with NASA Langley Research Center, these tests were part of an overall effort to research systems and procedures for the integration of CTAS and flight management systems (FMS). The Langley Transport Systems Research Vehicle Boeing 737 airplane flew a combined total of 58 cruise-arrival trajectory runs while following CTAS clearance advisories. Actual trajectories of the airplane were compared to CTAS and FMS predictions to measure trajectory-prediction accuracy and identify the primary sources of error for both. The research airplane was used to evaluate several levels of cockpit automation ranging from conventional avionics to a performance-based vertical navigation (VNAV) FMS. Trajectory prediction accuracy was analyzed with respect to both ARTCC radar tracking and GPS-based aircraft measurements. This paper presents detailed results describing the trajectory accuracy and error sources. Although differences were found in both accuracy and error sources, CTAS accuracy was comparable to the FMS in terms of both meter-fix arrival-time performance (in support of metering) and 4D-trajectory prediction (key to conflict prediction). Overall arrival time errors (mean plus standard deviation) were measured to be approximately 24 seconds during the first flight test (23 runs) and 15 seconds during the second flight test (25 runs). The major source of error during these tests was found to be the predicted winds aloft used by CTAS. Position and velocity estimates of the airplane provided to CTAS by the ATC Host radar tracker were found to be a relatively insignificant error source for the trajectory conditions evaluated. Airplane performance modeling errors within CTAS were found not to significantly affect arrival time errors when the constrained descent procedures were used. The most significant effect related to the flight guidance was observed to be the cross-track and turn-overshoot errors associated with conventional VOR guidance. Lateral navigation (LNAV) guidance significantly reduced both the cross-track and turn-overshoot errors. Pilot procedures and VNAV guidance were found to significantly reduce the vertical profile errors associated with atmospheric and aircraft performance model errors.

  10. A Sleeping Giant Awakened: Reignition of AGN Activity, Reborn Star Formation, and a Multiphase Outflow in one of the Largest Radio Galaxies Known

    NASA Astrophysics Data System (ADS)

    Tremblay, Grant; O'Dea, Christopher; Labiano, Alvaro; Baum, Stefi; McDermid, Richard; Combes, Francoise; Garcia-Burillo, Santiago; Davis, Timothy

    2014-08-01

    3C 236 is the second largest known radio galaxy and one of the largest objects in the known Universe. Its central AGN has recently reignited after a 10 Myr dormancy period, giving rise to a very young and compact radio source and a 1000 km/sec outflow of warm ionized and atomic HI gas. We propose GMOS-N IFU observations to resolve this outflow, determine its driver, and estimate the relative coupling efficiencies between the warm ionized, atomic, and cold molecular gas phases. We will assemble a much-needed spatially resolved Balmer decrement (extinction map) across the dramatic double dust lanes of this source, enabling high spatial resolution star formation rate, efficiency, and gas excitation and velocity maps. These will address several mysteries related to the very high star formation efficiency and the unique nature of the multiphase outflow in this source. 3C 236 is such a remarkable galaxy that whatever the results of the proposed observations, they will have wide-ranging implications for the triggering of star formation and AGN activity, their possibly coupled co-evolution, and the feedback effects of the latter on the former.

  11. General model for the pointing error analysis of Risley-prism system based on ray direction deviation in light refraction

    NASA Astrophysics Data System (ADS)

    Zhang, Hao; Yuan, Yan; Su, Lijuan; Huang, Fengzhen; Bai, Qing

    2016-09-01

    The Risley-prism-based light beam steering apparatus delivers superior pointing accuracy and is used in imaging LIDAR and imaging microscopes. A general model for pointing error analysis of Risley prisms, based on ray direction deviation in light refraction, is proposed in this paper. The model captures incident beam deviation, assembly deflections, and prism rotational error. We first derive the transmission matrices of the model. Then, the independent and cumulative effects of different errors are analyzed through this model. An accuracy study of the model shows that the prediction deviation of the pointing error for each error source is less than 4.1×10⁻⁵° when the error amplitude is 0.1°. Detailed analyses of the errors indicate that different error sources affect the pointing accuracy to varying degrees, and the major error source is the incident beam deviation. Prism tilting has a relatively large effect on the pointing accuracy when the prism tilts in the principal section. The cumulative effect analyses of multiple errors show that the pointing error can be reduced by tuning the bearing tilting in the same direction. The cumulative effect of rotational error is relatively large when the difference of the two prism rotational angles equals 0 or π, and relatively small when the difference equals π/2. These results suggest that our analysis can help to uncover the error distribution and aid in measurement calibration of Risley-prism systems.

  12. SUS Source Level Error Analysis

    DTIC Science & Technology

    1978-01-20

    The report provides an analysis of the major terms which contribute to signal analysis error in a proposed experiment to calibrate source levels of SUS (Signal Underwater Sound). Keywords: Fast Fourier Transform (FFT); SUS signal model.

  13. Treatability Aspects of Urban Stormwater Stressors - journal

    EPA Science Inventory

    Eleven years into the 21st century, pollution from diffuse sources (pollution from contaminants picked up and carried into surface waters by stormwater runoff) remains the nation's largest source of water quality problems. Scientists and engineers still seek solutions that will a...

  14. Treatability Aspects of Urban Stormwater Stressors - paper

    EPA Science Inventory

    Eleven years into the 21st century, pollution from diffuse sources (pollution from contaminants picked up and carried into surface waters by stormwater runoff) remains the nation's largest source of water quality problems. Scientists and engineers still seek solutions that will a...

  15. Inference of emission rates from multiple sources using Bayesian probability theory.

    PubMed

    Yee, Eugene; Flesch, Thomas K

    2010-03-01

    The determination of atmospheric emission rates from multiple sources using inversion (regularized least-squares or best-fit technique) is known to be very susceptible to measurement and model errors in the problem, rendering the solution unusable. In this paper, a new perspective is offered for this problem: namely, it is argued that the problem should be addressed as one of inference rather than inversion. Towards this objective, Bayesian probability theory is used to estimate the emission rates from multiple sources. The posterior probability distribution for the emission rates is derived, accounting fully for the measurement errors in the concentration data and the model errors in the dispersion model used to interpret the data. The Bayesian inferential methodology for emission rate recovery is validated against real dispersion data, obtained from a field experiment involving various source-sensor geometries (scenarios) consisting of four synthetic area sources and eight concentration sensors. The recovery of discrete emission rates from three different scenarios obtained using Bayesian inference and singular value decomposition inversion are compared and contrasted.
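
    As a minimal illustration of the inference approach described above (not the authors' code), the sketch below assumes a linear source-receptor relationship with Gaussian noise and Gaussian priors, for which the posterior over emission rates is available in closed form. All values are hypothetical.

```python
import numpy as np

# Minimal sketch of Bayesian emission-rate inference under linear dispersion:
# concentrations c = A @ q + noise, where A (receptors x sources) contains
# dispersion-model coupling factors. With Gaussian priors and Gaussian noise,
# the posterior over source rates q is Gaussian and available in closed form.
rng = np.random.default_rng(2)
n_sensors, n_sources = 8, 4
A = rng.uniform(0.1, 1.0, size=(n_sensors, n_sources))   # dispersion coupling matrix
q_true = np.array([1.0, 0.5, 2.0, 0.2])                  # true emission rates [g/s]
c_obs = A @ q_true + rng.normal(0, 0.05, n_sensors)      # noisy concentration data

sigma_obs = 0.05                                          # measurement + model error std
prior_mean = np.zeros(n_sources)
prior_cov = np.eye(n_sources) * 10.0                      # weakly informative prior

# Gaussian posterior: cov = (A^T A / s^2 + P0^-1)^-1, mean = cov (A^T c / s^2 + P0^-1 m0).
post_cov = np.linalg.inv(A.T @ A / sigma_obs**2 + np.linalg.inv(prior_cov))
post_mean = post_cov @ (A.T @ c_obs / sigma_obs**2 + np.linalg.inv(prior_cov) @ prior_mean)
print("posterior mean rates:", post_mean)
print("posterior std:", np.sqrt(np.diag(post_cov)))
```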

  16. Permitted water use in Iowa, 1985

    USGS Publications Warehouse

    Runkle, D.L.; Newman, J.L.; Shields, E.M.

    1985-01-01

    This report summarizes where, how much and for what purpose water is allocated for use in Iowa with permits issued by the Department of Water, Air and Waste Management. In Iowa, from a total permitted water use of 855,175.45 million gallons per year, about 58 percent is from surface-water sources and about 42 percent is from ground-water sources. Streams are 80.5 percent of the total surface-water use and wells make up 80.1 percent of the total ground-water use, with 65.4 percent of ground water coming from surficial aquifers. Power generation is the use category that is permitted the largest amount of total water use, 46.6 percent, with surface water being the source of 96.7 percent and 77.9 percent of the surface water is from streams. The public water suppliers' category is the next largest use type with 15.7 percent of the total permitted water. Ground water constitutes 74.4 percent of the public water supplier category with 51.7 percent from surficial aquifers. Surface water makes up 25.6 percent of this category with 83.0 percent of the surface water withdrawn from streams. Mining comprises 13.4 percent of the total water use and is the third largest water-use category. Ground water is the source of 63.3 percent of permitted mining water use with 94.3 percent of this from quarries and sand and gravel pits. Surface water is the source of 36.7 percent of the permitted mining water use with 97.6 percent from streams. Irrigation is the fourth largest permitted use type using 12.0 percent of the total water use. Eighty-eight percent of irrigation is from ground-water sources where surficial aquifers account for 94.7 percent. Streams are 81.1 percent of irrigational surface-water use. Self-supplied industrial users are permitted 10.6 percent of the total permitted water use with 85.5 percent of this from ground-water sources and 14.5 percent from surface-water sources. Of the self-supplied industrial ground-water use, 47.9 percent comes from surficial aquifers and of the self-supplied industrial surface-water use 86.1 percent is from streams. Self-supplied commercial use is allocated 1.5 percent of the total permitted water. Surface-water is the source of 37.7 percent of this and 62.3 percent is from ground-water sources. Agricultural (non-irrigation) use is 0.3 percent of the total permitted water with 73.3 percent from groundwater sources and 26.7 percent from surface-water sources. The areas that are allocated the most water permits are east-central Iowa and west-central Iowa.

  17. Effects of minute misregistrations of prefabricated markers for image-guided dental implant surgery: an analytical evaluation.

    PubMed

    Rußig, Lorenz L; Schulze, Ralf K W

    2013-12-01

    The goal of the present study was to develop a theoretical analysis of errors in implant position which can occur owing to minute registration errors of a reference marker in a cone beam computed tomography volume when inserting an implant with a surgical stent. A virtual dental-arch model was created using anatomic data derived from the literature. Basic trigonometry was used to compute the effects of defined minute registration errors of only one voxel in size. The errors occurring at the implant's neck and apex, in both the horizontal and vertical directions, were computed for mean ±95% confidence intervals of jaw width and length and typical implant lengths (8, 10 and 12 mm). The largest errors occur in the vertical direction for larger voxel sizes and for greater arch dimensions. For a 10 mm implant in the frontal region, these can amount to a mean of 0.716 mm (range: 0.201-1.533 mm). Horizontal errors at the neck are negligible, with a mean overall deviation of 0.009 mm (range: 0.001-0.034 mm). Errors increase with distance to the registration marker and with voxel size, and are affected by implant length. Our study shows that minute and realistic errors occurring in the automated registration of a reference object have an impact on the implant's position and angulation. These errors occur in the fundamental initial step of the long planning chain; thus, they are critical and should be made known to users of these systems. © 2012 John Wiley & Sons A/S.
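
    The lever-arm intuition behind these results can be illustrated with a small-angle estimate: a one-voxel misregistration of the marker induces an angular error roughly equal to the voxel size divided by the lever arm, and the resulting displacement grows with distance from the marker. The sketch below is only that illustrative estimate, not the authors' trigonometric model, and its geometry is hypothetical.

```python
import math

def apex_displacement(voxel_mm, marker_to_pivot_mm, pivot_to_apex_mm):
    """Illustrative small-angle estimate (not the published model): a marker
    misregistered by one voxel at a lever arm of marker_to_pivot_mm induces an
    angular error of roughly voxel / lever arm, which displaces a point located
    pivot_to_apex_mm away by roughly angle * distance."""
    angle_rad = voxel_mm / marker_to_pivot_mm
    return angle_rad * pivot_to_apex_mm, math.degrees(angle_rad)

# Hypothetical geometry: 0.3 mm voxel, 30 mm lever arm, implant apex 60 mm away.
disp, angle_deg = apex_displacement(0.3, 30.0, 60.0)
print(f"angular error ~ {angle_deg:.2f} deg, apex displacement ~ {disp:.2f} mm")
```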

  18. Increased Error-Related Negativity (ERN) in Childhood Anxiety Disorders: ERP and Source Localization

    ERIC Educational Resources Information Center

    Ladouceur, Cecile D.; Dahl, Ronald E.; Birmaher, Boris; Axelson, David A.; Ryan, Neal D.

    2006-01-01

    Background: In this study we used event-related potentials (ERPs) and source localization analyses to track the time course of neural activity underlying response monitoring in children diagnosed with an anxiety disorder compared to age-matched low-risk normal controls. Methods: High-density ERPs were examined following errors on a flanker task…

  19. Development of Action Monitoring through Adolescence into Adulthood: ERP and Source Localization

    ERIC Educational Resources Information Center

    Ladouceur, Cecile D.; Dahl, Ronald E.; Carter, Cameron S.

    2007-01-01

    In this study we examined the development of three action monitoring event-related potentials (ERPs)--the error-related negativity (ERN/Ne), error positivity (P[subscript E]) and the N2--and estimated their neural sources. These ERPs were recorded during a flanker task in the following groups: early adolescents (mean age = 12 years), late…

  20. Random Error in Judgment: The Contribution of Encoding and Retrieval Processes

    ERIC Educational Resources Information Center

    Pleskac, Timothy J.; Dougherty, Michael R.; Rivadeneira, A. Walkyria; Wallsten, Thomas S.

    2009-01-01

    Theories of confidence judgments have embraced the role random error plays in influencing responses. An important next step is to identify the source(s) of these random effects. To do so, we used the stochastic judgment model (SJM) to distinguish the contribution of encoding and retrieval processes. In particular, we investigated whether dividing…

  1. Measuring the Lense-Thirring precession using a second Lageos satellite

    NASA Technical Reports Server (NTRS)

    Tapley, B. D.; Ciufolini, I.

    1989-01-01

    A complete numerical simulation and error analysis was performed for the proposed experiment with the objective of establishing an accurate assessment of the feasibility and the potential accuracy of the measurement of the Lense-Thirring precession. Consideration was given to identifying the error sources which limit the accuracy of the experiment and proposing procedures for eliminating or reducing the effect of these errors. Analytic investigations were conducted to study the effects of major error sources with the objective of providing error bounds on the experiment. The analysis of realistic simulated data is used to demonstrate that satellite laser ranging of two Lageos satellites, orbiting with supplemental inclinations, collected for a period of 3 years or more, can be used to verify the Lense-Thirring precession. A comprehensive covariance analysis for the solution was also developed.

  2. The Use of Neural Networks in Identifying Error Sources in Satellite-Derived Tropical SST Estimates

    PubMed Central

    Lee, Yung-Hsiang; Ho, Chung-Ru; Su, Feng-Chun; Kuo, Nan-Jung; Cheng, Yu-Hsin

    2011-01-01

    A neural network model of data mining is used to identify error sources in satellite-derived tropical sea surface temperature (SST) estimates from thermal infrared sensors onboard the Geostationary Operational Environmental Satellite (GOES). Using the Back Propagation Network (BPN) algorithm, it is found that air temperature, relative humidity, and wind speed variation are the major factors causing the errors in GOES SST products in the tropical Pacific. The accuracy of the SST estimates is also improved by the model. The root mean square error (RMSE) for the daily SST estimate is reduced from 0.58 K to 0.38 K and the mean absolute percentage error (MAPE) is 1.03%. For the hourly mean SST estimate, the RMSE is also reduced, from 0.66 K to 0.44 K, and the MAPE is 1.3%. PMID:22164030
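
    The RMSE and MAPE figures quoted above follow the usual definitions; the sketch below computes both for paired reference and satellite SSTs. The values in the example are hypothetical.

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean square error."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.sqrt(np.mean((y_pred - y_true) ** 2)))

def mape(y_true, y_pred):
    """Mean absolute percentage error, in percent."""
    y_true, y_pred = np.asarray(y_true, float), np.asarray(y_pred, float)
    return float(np.mean(np.abs((y_pred - y_true) / y_true)) * 100.0)

# Hypothetical buoy-vs-satellite SSTs in kelvin, just to exercise the two metrics.
sst_buoy = np.array([300.1, 301.4, 299.8, 302.0])
sst_goes = np.array([300.6, 300.9, 300.3, 301.5])
print(rmse(sst_buoy, sst_goes), mape(sst_buoy, sst_goes))
```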

  3. Article Errors in the English Writing of Saudi EFL Preparatory Year Students

    ERIC Educational Resources Information Center

    Alhaisoni, Eid; Gaudel, Daya Ram; Al-Zuoud, Khalid M.

    2017-01-01

    This study aims at providing a comprehensive account of the types of errors produced by Saudi EFL students enrolled in the preparatory year programme in their use of articles, based on the Surface Structure Taxonomies (SST) of errors. The study describes the types, frequency and sources of the definite and indefinite article errors in writing…

  4. An Analysis of Spanish and German Learners' Errors. Working Papers on Bilingualism, No. 7.

    ERIC Educational Resources Information Center

    LoCoco, Veronica Gonzalez-Mena

    This study analyzes Spanish and German errors committed by adult native speakers of English enrolled in elementary and intermediate levels. Four written samples were collected for each target language, over a period of five months. Errors were categorized according to their possible source. Types of errors were ordered according to their…

  5. Spindle Thermal Error Optimization Modeling of a Five-axis Machine Tool

    NASA Astrophysics Data System (ADS)

    Guo, Qianjian; Fan, Shuo; Xu, Rufeng; Cheng, Xiang; Zhao, Guoyong; Yang, Jianguo

    2017-05-01

    Aiming at the problem of low machining accuracy and uncontrollable thermal errors of NC machine tools, spindle thermal error measurement, modeling, and compensation of a two-turntable five-axis machine tool are researched. Measurement experiments on heat sources and thermal errors are carried out, and the GRA (grey relational analysis) method is introduced for the selection of temperature variables used for thermal error modeling. In order to analyze the influence of different heat sources on spindle thermal errors, an ANN (artificial neural network) model is presented, and the ABC (artificial bee colony) algorithm is introduced to train the link weights of the ANN; a new ABC-NN (artificial bee colony-based neural network) modeling method is proposed and used in the prediction of spindle thermal errors. In order to test the prediction performance of the ABC-NN model, an experimental system is developed, and the prediction results of LSR (least squares regression), ANN, and ABC-NN are compared with the measured spindle thermal errors. Experimental results show that the prediction accuracy of the ABC-NN model is higher than that of LSR and ANN, with residual errors smaller than 3 μm, so the new modeling method is feasible. The proposed research provides guidance for compensating thermal errors and improving the machining accuracy of NC machine tools.
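
    Grey relational analysis, used above to select temperature variables, ranks candidate temperature sequences by how closely they track the thermal-error sequence. The sketch below is a minimal, generic GRA implementation with hypothetical data; it is not the authors' code.

```python
import numpy as np

def grey_relational_grades(thermal_error, temperatures, rho=0.5):
    """Generic grey relational analysis (GRA): rank temperature sensor series by
    their grey relational grade against a reference thermal-error sequence.
    rho is the usual distinguishing coefficient."""
    def normalize(x):
        x = np.asarray(x, float)
        return (x - x.min()) / (x.max() - x.min() + 1e-12)

    ref = normalize(thermal_error)
    deltas = [np.abs(ref - normalize(s)) for s in temperatures]
    d_min = min(d.min() for d in deltas)
    d_max = max(d.max() for d in deltas)
    # Grey relational coefficient per point, averaged into one grade per series.
    return [float(np.mean((d_min + rho * d_max) / (d + rho * d_max))) for d in deltas]

# Hypothetical data: one error sequence and three candidate temperature sensors.
err = [0, 2, 5, 9, 14, 18]
sensors = [[20, 22, 25, 29, 34, 38],   # tracks the error closely -> high grade
           [20, 20, 21, 21, 22, 22],
           [20, 25, 22, 28, 24, 30]]
print(grey_relational_grades(err, sensors))
```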

  6. Pollutant source identification model for water pollution incidents in small straight rivers based on genetic algorithm

    NASA Astrophysics Data System (ADS)

    Zhang, Shou-ping; Xin, Xiao-kang

    2017-07-01

    Identification of pollutant sources in river pollution incidents is an important and difficult task in emergency response, and an intelligent optimization method can effectively compensate for the weaknesses of traditional methods. An intelligent model for pollutant source identification has been established using the basic genetic algorithm (BGA) as an optimization search tool and applying an analytic solution formula of the one-dimensional unsteady water quality equation to construct the objective function. Experimental tests show that the identification model is effective and efficient: the model can accurately determine the pollutant amounts and positions whether there is a single pollution source or multiple sources. In particular, when the population size of the BGA is set to 10, the computed results agree well with the analytic results for single-source amount and position identification, with relative errors of no more than 5%. For cases with multi-point sources and multiple variables, there are some errors in the computed results because many possible combinations of the pollution sources exist. However, with the help of previous experience to narrow the search scope, the relative errors of the identification results are less than 5%, which proves the established source identification model can be used to direct emergency responses.
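
    A minimal sketch of this approach is shown below: a basic real-coded genetic algorithm fits the analytic solution of a one-dimensional instantaneous-release advection-dispersion equation to synthetic observations in order to recover the released mass and the source position. The river parameters, observation layout, and GA operators are all hypothetical and simplified relative to a full BGA.

```python
import numpy as np

# Simplified sketch (not the authors' code): identify the mass M and position x0
# of an instantaneous point release in a straight river by fitting the 1-D
# analytic advection-dispersion solution to observed concentrations with a
# basic genetic algorithm. All parameter values are hypothetical.
rng = np.random.default_rng(3)
u, D, A = 0.5, 5.0, 20.0                     # velocity [m/s], dispersion [m2/s], area [m2]

def analytic_c(M, x0, x, t):
    return M / (A * np.sqrt(4 * np.pi * D * t)) * np.exp(-(x - x0 - u * t) ** 2 / (4 * D * t))

# Synthetic "observations" at three stations and three times.
x_obs = np.array([500.0, 800.0, 1200.0])
t_obs = np.array([600.0, 1200.0, 1800.0])
X, T = np.meshgrid(x_obs, t_obs)
c_obs = analytic_c(2000.0, 100.0, X, T)      # true source: M = 2000 kg at x0 = 100 m

def fitness(ind):                            # negative sum of squared errors
    M, x0 = ind
    return -np.sum((analytic_c(M, x0, X, T) - c_obs) ** 2)

# Basic real-coded GA: tournament selection, averaging crossover, Gaussian mutation.
bounds = np.array([[100.0, 5000.0], [0.0, 400.0]])
pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(40, 2))
for _ in range(200):
    scores = np.array([fitness(p) for p in pop])
    new_pop = [pop[scores.argmax()]]         # elitism: keep the best individual
    while len(new_pop) < len(pop):
        i, j = rng.integers(len(pop), size=2)
        a = pop[i] if scores[i] > scores[j] else pop[j]
        i, j = rng.integers(len(pop), size=2)
        b = pop[i] if scores[i] > scores[j] else pop[j]
        child = 0.5 * (a + b) + rng.normal(0, [50.0, 5.0])
        new_pop.append(np.clip(child, bounds[:, 0], bounds[:, 1]))
    pop = np.array(new_pop)

best = pop[np.argmax([fitness(p) for p in pop])]
print("estimated M [kg], x0 [m]:", best)     # should approach 2000 kg and 100 m
```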

  7. Evaluation of the impact of ionospheric disturbances on air navigation augmentation system using multi-point GPS receivers

    NASA Astrophysics Data System (ADS)

    Omatsu, N.; Otsuka, Y.; Shiokawa, K.; Saito, S.

    2013-12-01

    In recent years, GPS has been utilized in navigation systems for airplanes. Propagation delays in the ionosphere, caused by the total electron content (TEC) between a GPS satellite and a receiver, produce large positioning errors. In precision measurements using GPS, the ionospheric delay correction is generally conducted using both the GPS L1 and L2 frequencies. However, the L2 frequency is not internationally accepted as an air navigation band, so it is not directly available for positioning in air navigation. In air navigation, not only positioning accuracy but also safety is important, so augmentation systems are required to ensure safety. Augmentation systems such as the satellite-based augmentation system (SBAS) and the ground-based augmentation system (GBAS) are being developed, and some are already in operation. GBAS is available in a relatively narrow area around airports. In general, it corrects for the combined effects of multiple sources of positioning errors simultaneously, including satellite clock and orbital information errors, ionospheric delay errors, and tropospheric delay errors, using the differential corrections broadcast by the GBAS ground station. However, if a spatial ionospheric delay gradient exists in the area, correction errors remain even after the GBAS correction; this is a threat to GBAS. In this study, we use the GPS data provided by the Geographical Survey Institute in Japan. From the GPS data, TEC is obtained every 30 seconds. We select 4 observation points from 24.4 to 35.6 degrees north latitude in Japan and analyze the TEC data of these points from 2001 to 2011. We then statistically determine the dependences of the Rate of TEC change Index (ROTI) on latitude, season, and solar activity. ROTI is the root-mean-square of the time differences of TEC within 5-minute windows. The results show that ROTI reaches its largest values around midnight in spring and summer at solar maximum at the station at 26.4 degrees north latitude. We attribute this to plasma bubbles; the maximum value of ROTI is about 6 TECU/min. Since ROTI is thought to be an index representing the spatial ionospheric delay gradient, we can evaluate the effect of the spatial ionospheric delay gradient on GBAS. In addition, we will discuss the azimuth-angle dependence of ROTI. We have found that ROTI tends to be high when the GPS satellites are seen to the west. Initial analysis results in Indonesia show a similar feature. This feature could arise from the westward tilt of the plasma bubbles with altitude. More detailed results will be reported in this presentation.
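
    As a rough illustration of the ROTI computation described above, the sketch below differentiates a 30-second TEC series to obtain the rate of TEC change (ROT, in TECU/min) and takes its RMS over 5-minute windows. Conventions differ slightly between studies (RMS versus standard deviation of ROT), and the example series is synthetic.

```python
import numpy as np

def roti(tec, sample_interval_s=30.0, window_s=300.0):
    """Illustrative ROTI: ROT is the time difference of TEC converted to
    TECU/min; ROTI is the RMS of ROT over non-overlapping 5-minute windows."""
    tec = np.asarray(tec, float)
    rot = np.diff(tec) / (sample_interval_s / 60.0)        # TECU per minute
    n = int(window_s / sample_interval_s)                  # samples per window
    windows = [rot[i:i + n] for i in range(0, len(rot) - n + 1, n)]
    return np.array([np.sqrt(np.mean(w ** 2)) for w in windows])

# Hypothetical 20-minute TEC series with a fluctuation in the second half.
t = np.arange(0, 1200, 30)
tec = 40 + 0.01 * t + np.where(t > 600, 1.5 * np.sin(t / 20.0), 0.0)
print(roti(tec))   # one ROTI value per 5-minute window
```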

  8. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas.

    PubMed

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-08-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, e.g. that volume flow is underestimated by 15%, when the scan plane is off-axis with the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipsis to cross-sectional scans of the fistulas, the major axis was on average 10.2mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis, gave a significant (p=0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice. Copyright © 2016 Elsevier B.V. All rights reserved.
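
    One of the corrections discussed above, using an elliptical rather than circular vessel cross-section, amounts to replacing the circular area with π·a·b in the flow computation. The sketch below shows only that step (the off-axis beam correction is not reproduced); the velocity and axis values are hypothetical.

```python
import numpy as np

def volume_flow_ml_per_min(mean_velocity_m_s, major_axis_mm, minor_axis_mm):
    """Illustrative volume-flow estimate with an elliptical cross-section:
    Q = v_mean * pi * a * b, where a and b are the semi-axes."""
    a = major_axis_mm / 2.0 / 1000.0          # semi-major axis [m]
    b = minor_axis_mm / 2.0 / 1000.0          # semi-minor axis [m]
    q_m3_s = mean_velocity_m_s * np.pi * a * b
    return q_m3_s * 1e6 * 60.0                # convert m^3/s -> mL/min

# Hypothetical fistula: mean velocity 0.8 m/s, axes 10.2 mm x 9.4 mm.
print(volume_flow_ml_per_min(0.8, 10.2, 9.4))
```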

  9. Estimating Aboveground Biomass in Tropical Forests: Field Methods and Error Analysis for the Calibration of Remote Sensing Observations

    DOE PAGES

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly; ...

    2017-01-07

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.

  10. Application of advanced shearing techniques to the calibration of autocollimators with small angle generators and investigation of error sources.

    PubMed

    Yandayan, T; Geckeler, R D; Aksulu, M; Akgoz, S A; Ozgur, B

    2016-05-01

    The application of advanced error-separating shearing techniques to the precise calibration of autocollimators with Small Angle Generators (SAGs) was carried out for the first time. The experimental realization was achieved using the High Precision Small Angle Generator (HPSAG) of TUBITAK UME under classical dimensional metrology laboratory environmental conditions. The standard uncertainty value of 5 mas (24.2 nrad) reached by classical calibration method was improved to the level of 1.38 mas (6.7 nrad). Shearing techniques, which offer a unique opportunity to separate the errors of devices without recourse to any external standard, were first adapted by Physikalisch-Technische Bundesanstalt (PTB) to the calibration of autocollimators with angle encoders. It has been demonstrated experimentally in a clean room environment using the primary angle standard of PTB (WMT 220). The application of the technique to a different type of angle measurement system extends the range of the shearing technique further and reveals other advantages. For example, the angular scales of the SAGs are based on linear measurement systems (e.g., capacitive nanosensors for the HPSAG). Therefore, SAGs show different systematic errors when compared to angle encoders. In addition to the error-separation of HPSAG and the autocollimator, detailed investigations on error sources were carried out. Apart from determination of the systematic errors of the capacitive sensor used in the HPSAG, it was also demonstrated that the shearing method enables the unique opportunity to characterize other error sources such as errors due to temperature drift in long term measurements. This proves that the shearing technique is a very powerful method for investigating angle measuring systems, for their improvement, and for specifying precautions to be taken during the measurements.

  11. Adaptive error detection for HDR/PDR brachytherapy: Guidance for decision making during real-time in vivo point dosimetry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kertzscher, Gustavo, E-mail: guke@dtu.dk; Andersen, Claus E., E-mail: clan@dtu.dk; Tanderup, Kari, E-mail: karitand@rm.dk

    Purpose: This study presents an adaptive error detection algorithm (AEDA) for real-time in vivo point dosimetry during high dose rate (HDR) or pulsed dose rate (PDR) brachytherapy (BT), where the error identification, in contrast to existing approaches, does not depend on an a priori reconstruction of the dosimeter position. Instead, the treatment is judged based on dose rate comparisons between measurements and calculations at the most viable dosimeter position provided by the AEDA in a data-driven approach. As a result, the AEDA compensates for false error cases related to systematic effects of the dosimeter position reconstruction. Given its nearly exclusive dependence on stable dosimeter positioning, the AEDA allows for a substantially simplified and time-efficient real-time in vivo BT dosimetry implementation. Methods: In the event of a measured potential treatment error, the AEDA proposes the most viable dosimeter position out of alternatives to the original reconstruction by means of a data-driven matching procedure between dose rate distributions. If measured dose rates do not differ significantly from the most viable alternative, the initial error indication may be attributed to a mispositioned or misreconstructed dosimeter (false error). However, if the error declaration persists, no viable dosimeter position can be found to explain the error, and the discrepancy is more likely to originate from a misplaced or misreconstructed source applicator or from erroneously connected source guide tubes (true error). Results: The AEDA applied to two in vivo dosimetry implementations for pulsed dose rate BT demonstrated that the AEDA correctly described the effects responsible for initial error indications. The AEDA was able to correctly identify the major part of all permutations of simulated guide tube swap errors and simulated shifts of individual needles from the original reconstruction. Unidentified errors corresponded to scenarios where the dosimeter position was sufficiently symmetric with respect to error and no-error source position constellations. The AEDA was able to correctly identify all false errors represented by mispositioned dosimeters, contrary to an error detection algorithm relying on the original reconstruction. Conclusions: The study demonstrates that AEDA error identification during HDR/PDR BT relies on a stable dosimeter position rather than on an accurate dosimeter reconstruction, and it demonstrates the AEDA's capacity to distinguish between true and false error scenarios. The study further shows that the AEDA can offer guidance for decision making in the event of potential errors detected with real-time in vivo point dosimetry.

  12. An error analysis perspective for patient alignment systems.

    PubMed

    Figl, Michael; Kaar, Marcus; Hoffman, Rainer; Kratochwil, Alfred; Hummel, Johann

    2013-09-01

    This paper analyses the effects of error sources which can be found in patient alignment systems. As an example, an ultrasound (US) repositioning system and its transformation chain are assessed. The findings of this concept can also be applied to any navigation system. In a first step, all error sources were identified and where applicable, corresponding target registration errors were computed. By applying error propagation calculations on these commonly used registration/calibration and tracking errors, we were able to analyse the components of the overall error. Furthermore, we defined a special situation where the whole registration chain reduces to the error caused by the tracking system. Additionally, we used a phantom to evaluate the errors arising from the image-to-image registration procedure, depending on the image metric used. We have also discussed how this analysis can be applied to other positioning systems such as Cone Beam CT-based systems or Brainlab's ExacTrac. The estimates found by our error propagation analysis are in good agreement with the numbers found in the phantom study but significantly smaller than results from patient evaluations. We probably underestimated human influences such as the US scan head positioning by the operator and tissue deformation. Rotational errors of the tracking system can multiply these errors, depending on the relative position of tracker and probe. We were able to analyse the components of the overall error of a typical patient positioning system. We consider this to be a contribution to the optimization of the positioning accuracy for computer guidance systems.
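
    A common first-order way to combine independent contributions in a registration/tracking chain, in the spirit of the error-propagation analysis described above though not taken from the paper, is a root-sum-of-squares of the individual errors. The sketch below uses hypothetical error magnitudes and source names.

```python
import numpy as np

# Minimal sketch (assumed, not from the paper): combine independent error
# contributions of a registration/tracking chain into a first-order estimate
# of the overall target registration error via root sum of squares.
error_sources_mm = {
    "US calibration": 0.8,
    "image-to-image registration": 0.6,
    "tracking system": 0.3,
    "couch repositioning": 0.4,
}
total = np.sqrt(sum(v ** 2 for v in error_sources_mm.values()))
print(f"combined error estimate: {total:.2f} mm")
```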

  13. Optical radiation measurements: instrumentation and sources of error.

    PubMed

    Landry, R J; Andersen, F A

    1982-07-01

    Accurate measurement of optical radiation is required when sources of this radiation are used in biological research. The most difficult measurements of broadband noncoherent optical radiations usually must be performed by a highly trained specialist using sophisticated, complex, and expensive instruments. Presentation of the results of such measurement requires correct use of quantities and units with which many biological researchers are unfamiliar. The measurement process, physical quantities and units, measurement systems with instruments, and sources of error and uncertainties associated with optical radiation measurements are reviewed.

  14. Water displacement leg volumetry in clinical studies - A discussion of error sources

    PubMed Central

    2010-01-01

    Background Water displacement leg volumetry is a highly reproducible method, allowing the confirmation of efficacy of vasoactive substances. Nevertheless errors of its execution and the selection of unsuitable patients are likely to negatively affect the outcome of clinical studies in chronic venous insufficiency (CVI). Discussion Placebo controlled double-blind drug studies in CVI were searched (Cochrane Review 2005, MedLine Search until December 2007) and assessed with regard to efficacy (volume reduction of the leg), patient characteristics, and potential methodological error sources. Almost every second study reported only small drug effects (≤ 30 mL volume reduction). As the most relevant error source the conduct of volumetry was identified. Because the practical use of available equipment varies, volume differences of more than 300 mL - which is a multifold of a potential treatment effect - have been reported between consecutive measurements. Other potential error sources were insufficient patient guidance or difficulties with the transition from the Widmer CVI classification to the CEAP (Clinical Etiological Anatomical Pathophysiological) grading. Summary Patients should be properly diagnosed with CVI and selected for stable oedema and further clinical symptoms relevant for the specific study. Centres require a thorough training on the use of the volumeter and on patient guidance. Volumetry should be performed under constant conditions. The reproducibility of short term repeat measurements has to be ensured. PMID:20070899

  15. An improved methodology for heliostat testing and evaluation at the Plataforma Solar de Almería

    NASA Astrophysics Data System (ADS)

    Monterreal, Rafael; Enrique, Raúl; Fernández-Reche, Jesús

    2017-06-01

    The optical quality of a heliostat essentially quantifies the difference between the scattering effects of the actual solar radiation reflected from its optical surface and the so-called canonical dispersion, that is, the radiation reflected from an optical surface free of constructional errors (the paradigm). However, apart from the uncertainties of the measuring process itself, the value of the optical quality must be independent of the measuring instrument; thus, any new measuring technique that provides additional information about the error sources on the heliostat reflecting surface is welcome. These error sources are responsible for the final optical quality value, with different degrees of influence. For the constructor of heliostats it is extremely useful to know the value of the classical sources of error and their weight in the overall optical quality of a heliostat, such as facet geometry or focal length, as well as the characteristics of the heliostat as a whole, i.e., its geometry, focal length, and facet misalignment, and also the possible dependence of these effects on mechanical and/or meteorological factors. The goal of the present paper is to unfold these optical-quality error sources by exploring the reflecting surface of the heliostat directly with the help of a laser-scanner device and linking the results with the traditional methods of heliostat evaluation at the Plataforma Solar de Almería.

  16. THE ASSOCIATION OF LAND USE/LAND COVER AND NUTRIENT LEVELS IN MARYLAND STREAMS

    EPA Science Inventory

    Anthropogenic nonpoint sources of nutrients are known to cause accelerated eutrophication of estuaries. The Chesapeake Bay is one of the world's largest estuaries exhibiting the eutrophication problem caused by pollution from various land use activities. The sources contributing ...

  17. Comparison of different source calculations in two-nucleon channel at large quark mass

    NASA Astrophysics Data System (ADS)

    Yamazaki, Takeshi; Ishikawa, Ken-ichi; Kuramashi, Yoshinobu

    2018-03-01

    We investigate a systematic error coming from higher excited state contributions to the energy shift of a light nucleus in the two-nucleon channel by comparing two different source calculations, using exponential and wall sources. Since it is hard to obtain a clear signal from the wall-source correlation function in a plateau region, we employ a large quark mass, corresponding to a pion mass of 0.8 GeV, in quenched QCD. We discuss the systematic error in the spin-triplet channel of the two-nucleon system and the volume dependence of the energy shift.

  18. Research on the Emission Inventory of Major Air Pollutants in 2012 for the Sichuan City Cluster in China

    NASA Astrophysics Data System (ADS)

    Qian, J.; He, Q.

    2014-12-01

    This paper developed a high-resolution emission inventory of major pollutants in the city cluster of the Sichuan Basin, one of the most polluted regions in China. The city cluster includes five cities: Chengdu, Deyang, Mianyang, Meishan, and Ziyang. A pollution source census and field measurements were conducted for the major emission sources, such as industry sources, on-road mobile sources, catering sources, and dust sources. The inventory results show that in 2012, the emissions of SO2, NOx, CO, PM10, PM2.5, VOCs, and NH3 in the region were 143.5, 251.9, 1659.9, 299.3, 163.5, 464.1, and 995 kt, respectively. Chengdu, the provincial capital, had the largest emission load of every pollutant among the cities. Industry sources, including power plants, fuel combustion facilities, and non-combustion processes, were the largest emission sources of SO2, NOx, and CO, contributing 84%, 46.5%, and 35% of the total SO2, NOx, and CO emissions, respectively. On-road mobile sources accounted for 46.5%, 33%, and 16% of the total NOx, CO, and PM2.5 emissions and 28% of the anthropogenic VOCs emissions. Dust and industry sources contributed 42% and 23% of the PM10 emissions, with dust sources also the largest source of PM2.5, contributing 27%. Anthropogenic and biogenic sources accounted for 75% and 25% of the total VOCs emissions, while 36% of the anthropogenic VOCs emissions were due to solvent use. Livestock contributed 62% of NH3 emissions, followed by nitrogen fertilizer application, whose contribution was 23%. Based on the developed emission inventory and local meteorological data, the regional air quality modeling system WRF-CMAQ was applied to simulate the status of PM2.5 pollution at a regional scale. The results show that high PM2.5 concentrations are distributed over the urban areas of Chengdu and Deyang. On-road mobile sources and dust sources were the two major contributors to PM2.5 pollution in Chengdu, each with a contribution of 27%. In Deyang, Mianyang, Meishan, and Ziyang, industry sources had a relatively high contribution to PM2.5 pollution, accounting for about 35%, 33%, 38%, and 24%, respectively.

  19. Effects of Heterogeneity and Uncertainties in Sources and Initial and Boundary Conditions on Spatiotemporal Variations of Groundwater Levels

    NASA Astrophysics Data System (ADS)

    Zhang, Y. K.; Liang, X.

    2014-12-01

    Effects of aquifer heterogeneity and uncertainties in source/sink and initial and boundary conditions in a groundwater flow model on the spatiotemporal variations of groundwater level, h(x,t), were investigated. Analytical solutions for the variance and covariance of h(x,t) in an unconfined aquifer described by a linearized Boussinesq equation with a white-noise source/sink and a random transmissivity field were derived. It was found that in a typical aquifer the error in h(x,t) at early time is mainly caused by the random initial condition, and the error decreases with time to reach a constant level at later time. The duration during which the effect of the random initial condition is significant may last a few hundred days in most aquifers. The constant error in groundwater level at later time is due to the combined effects of the uncertain source/sink and flux boundary: the closer to the flux boundary, the larger the error. The error caused by the uncertain head boundary is limited to a narrow zone near the boundary but remains more or less constant over time. The effect of the heterogeneity is to increase the variation of groundwater level, and the maximum effect occurs close to the constant head boundary because of the linear mean hydraulic gradient. The correlation of groundwater level decreases with temporal interval and spatial distance. In addition, the heterogeneity enhances the correlation of groundwater level, especially at large time intervals and small spatial distances.

  20. Regionalized PM2.5 Community Multiscale Air Quality model performance evaluation across a continuous spatiotemporal domain.

    PubMed

    Reyes, Jeanette M; Xu, Yadong; Vizuete, William; Serre, Marc L

    2017-01-01

    The regulatory Community Multiscale Air Quality (CMAQ) model is a means of understanding the sources, concentrations and regulatory attainment of air pollutants within a model's domain. Substantial resources are allocated to the evaluation of model performance. The Regionalized Air quality Model Performance (RAMP) method introduced here explores novel ways of visualizing and evaluating CMAQ model performance and errors for daily Particulate Matter ≤ 2.5 micrometers (PM2.5) concentrations across the continental United States. The RAMP method performs a non-homogeneous, non-linear, non-homoscedastic model performance evaluation at each CMAQ grid cell. This work demonstrates that CMAQ model performance, for a well-documented 2001 regulatory episode, is non-homogeneous across space/time. The RAMP correction of systematic errors outperforms other model evaluation methods, as demonstrated by a 22.1% reduction in Mean Square Error compared to a constant domain-wide correction. The RAMP method is able to accurately reproduce simulated performance with a correlation of r = 76.1%. Most of the error coming from CMAQ is random error, with only a minority of error being systematic. Areas of high systematic error are collocated with areas of high random error, implying both error types originate from similar sources. Therefore, addressing underlying causes of systematic error will have the added benefit of also addressing underlying causes of random error.
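
    A minimal sketch of the kind of error split discussed above, assuming the standard decomposition of mean square error into a systematic part (squared bias) and a random part (residual variance) at a single grid cell; the function name and sample numbers are hypothetical and not taken from the RAMP study.

    ```python
    import numpy as np

    def decompose_error(model, obs):
        """Split model-observation mismatch into systematic and random parts.

        Uses the identity MSE = bias^2 + var(residual) for paired series
        at one location (ddof=0 so the identity holds exactly).
        """
        resid = np.asarray(model, float) - np.asarray(obs, float)
        mse = np.mean(resid ** 2)
        systematic = np.mean(resid) ** 2      # squared bias
        random_part = np.var(resid)           # residual scatter
        return mse, systematic, random_part

    # Hypothetical daily PM2.5 series (ug/m3) at one grid cell
    model = np.array([12.1, 15.3, 9.8, 20.4, 14.2])
    obs = np.array([10.0, 14.1, 11.2, 18.0, 13.5])
    print(decompose_error(model, obs))
    ```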

  1. Changes in U.S. hardwood lumber exports, 1990 to 2008

    Treesearch

    William Luppold; Matthew Bumgardner

    2011-01-01

    The volume of hardwood lumber exported from the United States grew by 63 percent between 1990 and 2006 before decreasing by 29 percent between 2006 and 2008. Canada is both the largest export market for U.S. hardwood lumber and the largest source country for hardwood lumber imported into the United States. In the last 19 years China/Hong Kong has displaced Japan as the...

  2. An Application of Semi-parametric Estimator with Weighted Matrix of Data Depth in Variance Component Estimation

    NASA Astrophysics Data System (ADS)

    Pan, X. G.; Wang, J. Q.; Zhou, H. Y.

    2013-05-01

    A variance component estimation (VCE) method based on a semi-parametric estimator with a weighted matrix of data depth is proposed, because coupled system model errors and gross errors exist in the multi-source heterogeneous measurement data of combined space- and ground-based TT&C (Telemetry, Tracking and Command) systems. The uncertain model error is estimated with the semi-parametric estimator model, and outliers are restrained with the weighted matrix of data depth. With the model error and outliers restricted, the VCE can be improved and used to estimate the weight matrix for observation data containing uncertain model errors or outliers. A simulation experiment was carried out under the circumstances of combined space- and ground-based TT&C. The results show that the new VCE based on model error compensation can determine rational weights for the multi-source heterogeneous data and restrain outlier data.

  3. Improvements in medical quality and patient safety through implementation of a case bundle management strategy in a large outpatient blood collection center.

    PubMed

    Zhao, Shuzhen; He, Lujia; Feng, Chenchen; He, Xiaoli

    2018-06-01

    Laboratory errors in a blood collection center (BCC) are most common in the preanalytical phase. It is, therefore, of vital importance for administrators to take measures to improve healthcare quality and patient safety. In 2015, a case bundle management strategy was applied in a large outpatient BCC to improve its medical quality and patient safety. Unqualified blood sampling, complications, patient waiting time, the largest number of patients waiting during peak hours, patient complaints, and patient satisfaction were compared over the period from 2014 to 2016. The strategy reduced unqualified blood sampling, complications, patient waiting time, the largest number of patients waiting during peak hours, and patient complaints, while improving patient satisfaction. This strategy was effective in improving BCC healthcare quality and patient safety.

  4. Comparison of neutron spectra measured with three sizes of organic liquid scintillators using differentiation analysis

    NASA Technical Reports Server (NTRS)

    Shook, D. F.; Pierce, C. R.

    1972-01-01

    Proton recoil distributions were obtained by using organic liquid scintillators of different sizes. The measured distributions are converted to neutron spectra by differentiation analysis for comparison to the unfolded spectra of the largest scintillator. The approximations involved in the differentiation analysis are indicated to have small effects on the precision of neutron spectra measured with the smaller scintillators but introduce significant error for the largest scintillator. In the case of the smallest cylindrical scintillator, nominally 1.2 by 1.3 cm, the efficiency is shown to be insensitive to multiple scattering and to the angular distribution of the incident flux. These characteristics of the smaller scintillator make possible its use to measure scalar flux spectra within media where high efficiency is not required.

  5. Quantifying predictability variations in a low-order ocean-atmosphere model - A dynamical systems approach

    NASA Technical Reports Server (NTRS)

    Nese, Jon M.; Dutton, John A.

    1993-01-01

    The predictability of the weather and climatic states of a low-order moist general circulation model is quantified using a dynamic systems approach, and the effect of incorporating a simple oceanic circulation on predictability is evaluated. The predictability and the structure of the model attractors are compared using Liapunov exponents, local divergence rates, and the correlation and Liapunov dimensions. It was found that the activation of oceanic circulation increases the average error doubling time of the atmosphere and the coupled ocean-atmosphere system by 10 percent and decreases the variance of the largest local divergence rate by 20 percent. When an oceanic circulation develops, the average predictability of annually averaged states is improved by 25 percent and the variance of the largest local divergence rate decreases by 25 percent.
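
    The link between the largest Liapunov exponent and the average error doubling time quoted above can be made concrete with the usual relation for exponential error growth; the exponent values below are hypothetical and serve only to illustrate how a 10 percent longer doubling time maps onto a smaller exponent.

    ```python
    import math

    def doubling_time(largest_lyapunov):
        """Average error doubling time implied by exponential error growth.

        Small errors grow roughly as exp(lambda * t), so they double
        after ln(2) / lambda time units.
        """
        return math.log(2.0) / largest_lyapunov

    # Hypothetical largest exponents (1/day) without and with an active ocean
    lam_atmosphere_only = 0.25
    lam_coupled = lam_atmosphere_only / 1.10   # 10% longer doubling time
    print(doubling_time(lam_atmosphere_only), doubling_time(lam_coupled))
    ```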

  6. A data fusion framework for floodplain analysis using GIS and remotely sensed data

    NASA Astrophysics Data System (ADS)

    Necsoiu, Dorel Marius

    Throughout history floods have been part of the human experience. They are recurring phenomena that form a necessary and enduring feature of all river basin and lowland coastal systems. In an average year, they benefit millions of people who depend on them. In the more developed countries, major floods can be the largest cause of economic losses from natural disasters, and are also a major cause of disaster-related deaths in the less developed countries. Flood disaster mitigation research was conducted to determine how remotely sensed data can effectively be used to produce accurate flood plain maps (FPMs), and to identify/quantify the sources of error associated with such data. Differences were analyzed between flood maps produced by an automated remote sensing analysis tailored to the available satellite remote sensing datasets (rFPM), the 100-year flooded areas "predicted" by the Flood Insurance Rate Maps, and FPMs based on DEM and hydrological data (aFPM). Landuse/landcover was also examined to determine its influence on rFPM errors. These errors were identified and the results were integrated in a GIS to minimize landuse/landcover effects. Two substantial flood events were analyzed. These events were selected because of their similar characteristics (i.e., the existence of FIRM or Q3 data; flood data which included flood peaks, rating curves, and flood profiles; and DEM and remote sensing imagery). Automatic feature extraction was determined to be an important component for successful flood analysis. A process network, in conjunction with domain specific information, was used to map raw remotely sensed data onto a representation that is more compatible with a GIS data model. From a practical point of view, rFPM provides a way to automatically match existing data models to the type of remote sensing data available for each event under investigation. Overall, results showed how remote sensing could contribute to the complex problem of flood management by providing an efficient way to revise the National Flood Insurance Program maps.

  7. Assessment of errors and biases in retrievals of XCO2, XCH4, XCO, and XN2O from a 0.5 cm-1 resolution solar-viewing spectrometer

    NASA Astrophysics Data System (ADS)

    Hedelius, Jacob K.; Viatte, Camille; Wunch, Debra; Roehl, Coleen M.; Toon, Geoffrey C.; Chen, Jia; Jones, Taylor; Wofsy, Steven C.; Franklin, Jonathan E.; Parker, Harrison; Dubey, Manvendra K.; Wennberg, Paul O.

    2016-08-01

    Bruker™ EM27/SUN instruments are commercial mobile solar-viewing near-IR spectrometers. They show promise for expanding the global density of atmospheric column measurements of greenhouse gases and are being marketed for such applications. They have been shown to measure the same variations of atmospheric gases within a day as the high-resolution spectrometers of the Total Carbon Column Observing Network (TCCON). However, there is little known about the long-term precision and uncertainty budgets of EM27/SUN measurements. In this study, which includes a comparison of 186 measurement days spanning 11 months, we note that atmospheric variations of Xgas within a single day are well captured by these low-resolution instruments, but over several months, the measurements drift noticeably. We present comparisons between EM27/SUN instruments and the TCCON using GGG as the retrieval algorithm. In addition, we perform several tests to evaluate the robustness of the performance and determine the largest sources of errors from these spectrometers. We include comparisons of XCO2, XCH4, XCO, and XN2O. Specifically we note EM27/SUN biases for January 2015 of 0.03, 0.75, -0.12, and 2.43 % for XCO2, XCH4, XCO, and XN2O respectively, with 1σ running precisions of 0.08 and 0.06 % for XCO2 and XCH4 from measurements in Pasadena. We also identify significant error caused by detector nonlinearity when using an extended spectral range detector to measure CO and N2O.

  8. Techniques in Altitude Registration for Limb Scatter Instruments

    NASA Astrophysics Data System (ADS)

    Moy, L.; Jaross, G.; Bhartia, P. K.; Kramarova, N. A.

    2017-12-01

    One of the largest constraints on the retrieval of accurate ozone profiles from limb sounding sensors is altitude registration. As described in Moy et al. (2017), two methods applicable to UV limb scattering, the Rayleigh Scattering Attitude Sensing (RSAS) and Absolute Radiance Residual Method (ARRM), have been used to determine altitude registration to the accuracy necessary for long-term ozone monitoring. The methods compare model calculations of radiances to measured radiances and are independent of onboard tracking devices. RSAS determines absolute altitude errors but, because the method is susceptible to aerosol interference, it is limited to latitudes and time periods with minimal aerosol contamination. ARRM, a new technique using wavelengths near 300 nm, can be applied across all seasons and altitudes, but its sensitivity to accurate instrument calibration means it may be inappropriate for anything but monitoring change. These characteristics make the two techniques complementary. Both methods have been applied to Limb Profiler instrument measurements from the Ozone Mapping and Profiler Suite (OMPS) onboard the Suomi NPP (SNPP) satellite. The results from RSAS and ARRM differ by as much as 500 m over orbital and seasonal time scales, but long-term pointing trends derived from the two indicate changes within 100 m over the 5-year data record. In this paper we further discuss what these methods are revealing about the stability of LP's altitude registration. An independent evaluation of pointing errors using VIIRS, another sensor onboard the Suomi NPP satellite, indicates changes of as much as 80 m over the course of the mission. The correlations between VIIRS and the ARRM time series suggest a high degree of precision in this limb technique. We have therefore relied upon ARRM to evaluate error sources in more widespread altitude registration techniques such as RSAS and lunar observations. These techniques can be more readily applied to other limb scatter missions such as SAGE III and ALTIUS.

  9. Neutrino Nucleon Elastic Scattering in MiniBooNE

    NASA Astrophysics Data System (ADS)

    Cox, D. Christopher

    2007-12-01

    Neutrino nucleon elastic scattering νN→νN is a fundamental process of the weak interaction, and can be used to study the structure of the nucleon. This is the third largest scattering process in MiniBooNE, comprising ~15% of all neutrino interactions. Analysis of this sample has yielded a neutral current elastic differential cross section as a function of Q2 that agrees within errors with model predictions.

  10. Sources of Student Errors and Misconceptions in Algebra and Effectiveness of Classroom Practice Remediation in Machakos County--Kenya

    ERIC Educational Resources Information Center

    Mulungye, Mary M.; O'Connor, Miheso; Ndethiu, S.

    2016-01-01

    This paper is based on a study which sought to examine the various errors and misconceptions committed by students in algebra with the view to exposing the nature and origin of the errors and misconceptions in secondary schools in Machakos district. Teachers' knowledge on students' errors was investigated together with strategies for remedial…

  11. The inference of atmospheric ozone using satellite horizon measurements in the 1042 per cm band.

    NASA Technical Reports Server (NTRS)

    Russell, J. M., III; Drayson, S. R.

    1972-01-01

    Description of a method for inferring atmospheric ozone information using infrared horizon radiance measurements in the 1042 per cm band. An analysis based on this method proves the feasibility of the horizon experiment for determining ozone information and shows that the ozone partial pressure can be determined in the altitude range from 50 down to 25 km. A comprehensive error study is conducted which considers effects of individual errors as well as the effect of all error sources acting simultaneously. The results show that in the absence of a temperature profile bias error, it should be possible to determine the ozone partial pressure to within an rms value of 15 to 20%. It may be possible to reduce this rms error to 5% by smoothing the solution profile. These results would be seriously degraded by an atmospheric temperature bias error of only 3 K; thus, great care should be taken to minimize this source of error in an experiment. It is probable, in view of recent technological developments, that these errors will be much smaller in future flight experiments and the altitude range will widen to include from about 60 km down to the tropopause region.

  12. The Reliability and Sources of Error of Using Rubrics-Based Assessment for Student Projects

    ERIC Educational Resources Information Center

    Menéndez-Varela, José-Luis; Gregori-Giralt, Eva

    2018-01-01

    Rubrics are widely used in higher education to assess performance in project-based learning environments. To date, the sources of error that may affect their reliability have not been studied in depth. Using generalisability theory as its starting-point, this article analyses the influence of the assessors and the criteria of the rubrics on the…

  13. A Robust Sound Source Localization Approach for Microphone Array with Model Errors

    NASA Astrophysics Data System (ADS)

    Xiao, Hua; Shao, Huai-Zong; Peng, Qi-Cong

    In this paper, a robust sound source localization approach is proposed. The approach retains good performance even when model errors exist. Compared with previous work in this field, the contributions of this paper are as follows. First, an improved broad-band and near-field array model is proposed. It takes array gain and phase perturbations into account and is based on the actual positions of the elements. It can be used in arbitrary planar geometry arrays. Second, a subspace model errors estimation algorithm and a Weighted 2-Dimension Multiple Signal Classification (W2D-MUSIC) algorithm are proposed. The subspace model errors estimation algorithm estimates unknown parameters of the array model, i.e., gain, phase perturbations, and positions of the elements, with high accuracy. The performance of this algorithm improves with increasing SNR or number of snapshots. The W2D-MUSIC algorithm based on the improved array model is implemented to locate sound sources. These two algorithms compose the robust sound source localization approach. More accurate steering vectors can thus be provided for further processing such as adaptive beamforming algorithms. Numerical examples confirm the effectiveness of the proposed approach.
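
    The W2D-MUSIC algorithm itself is not reproduced in the abstract; as background, the sketch below shows only the standard narrowband, far-field MUSIC pseudospectrum on which such methods build, using a hypothetical uniform linear array and simulated snapshots. The steering-vector function, array size and noise level are assumptions for illustration, not the paper's broadband near-field model.

    ```python
    import numpy as np

    def music_spectrum(snapshots, steering, n_sources):
        """Standard narrowband MUSIC pseudospectrum.

        snapshots : (n_elements, n_snapshots) complex sensor data
        steering  : function angle_rad -> (n_elements,) steering vector
        n_sources : assumed number of sources
        """
        R = snapshots @ snapshots.conj().T / snapshots.shape[1]  # sample covariance
        eigval, eigvec = np.linalg.eigh(R)                       # ascending order
        noise_sub = eigvec[:, :-n_sources]                       # noise subspace
        angles = np.linspace(-90.0, 90.0, 361)
        power = np.array([1.0 / np.linalg.norm(noise_sub.conj().T
                                               @ steering(np.deg2rad(a))) ** 2
                          for a in angles])
        return angles, power

    # Hypothetical 8-element half-wavelength uniform linear array
    n_elem = 8
    def ula_steering(theta):
        return np.exp(1j * np.pi * np.arange(n_elem) * np.sin(theta))

    rng = np.random.default_rng(0)
    src = np.exp(1j * 2 * np.pi * rng.random(200))               # one source at 20 deg
    X = np.outer(ula_steering(np.deg2rad(20.0)), src)
    X = X + 0.1 * (rng.standard_normal((n_elem, 200))
                   + 1j * rng.standard_normal((n_elem, 200)))
    angles, power = music_spectrum(X, ula_steering, n_sources=1)
    print(angles[np.argmax(power)])                              # close to 20
    ```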

  14. Impact of ITRS 2014 realizations on altimeter satellite precise orbit determination

    NASA Astrophysics Data System (ADS)

    Zelensky, Nikita P.; Lemoine, Frank G.; Beckley, Brian D.; Chinn, Douglas S.; Pavlis, Despina E.

    2018-01-01

    This paper evaluates orbit accuracy and systematic error for altimeter satellite precise orbit determination on TOPEX, Jason-1, Jason-2 and Jason-3 by comparing the use of four SLR/DORIS station complements from the International Terrestrial Reference System (ITRS) 2014 realizations with those based on ITRF2008. The new Terrestrial Reference Frame 2014 (TRF2014) station complements include ITRS realizations from the Institut National de l'Information Géographique et Forestière (IGN) ITRF2014, the Jet Propulsion Laboratory (JPL) JTRF2014, the Deutsches Geodätisches Forschungsinstitut (DGFI) DTRF2014, and the DORIS extension to ITRF2014 for Precise Orbit Determination, DPOD2014. The largest source of error stems from ITRF2008 station position extrapolation past the 2009 solution end time. The TRF2014 SLR/DORIS complement impact on the ITRF2008 orbit is only 1-2 mm RMS radial difference between 1992 and 2009, and increases after 2009, up to 5 mm RMS radial difference in 2016. Residual analysis shows that station position extrapolation error past the solution span becomes evident even after two years, and will contribute to about 3-4 mm radial orbit error after seven years. Crossover data show the DTRF2014 orbits are the most accurate for the TOPEX and Jason-2 test periods, and the JTRF2014 orbits for the Jason-1 period. However, for the 2016 Jason-3 test period, only the DPOD2014-based orbits show a strong and statistically significant margin of improvement. The positive results with DTRF2014 suggest the new approach to correct station positions or normal equations for non-tidal loading before combination is beneficial. We did not find any compelling POD advantage in using non-linear over linear station velocity models in our SLR & DORIS orbit tests on the Jason satellites. The JTRF2014 proof-of-concept ITRS realization demonstrates the need for improved SLR+DORIS orbit centering when compared to the Ries (2013) CM annual model. Orbit centering error is seen as an annual radial signal of 0.4 mm amplitude with the CM model. The unmodeled CM signals show roughly a 1.8 mm peak-to-peak annual variation in the orbit radial component. We find the TRF network stability pertinent to POD can be defined only by examination of the orbit-specific tracking network time series. Drift stability between the ITRF2008 and the other TRF2014-based orbits is very high; the relative mean radial drift error over water is no larger than 0.04 mm/year over 1993-2015. Analyses also show TRF-induced orbit error meets current altimeter rate accuracy goals for global and regional sea level estimation.

  15. AMMONIA: ENVIRONMENTAL IMPACTS, EMISSIONS, INORGANIC PM 2.5, AND CLEAN AIR INTERSTATE RULE

    EPA Science Inventory

    This presentation discusses the role of ammonia as an atmospheric pollutant. Ammonia is emitted primarily from agricultural sources, although vehicles are the largest sources in urban centers. When combined with nitrate and sulfate, ammonia forms particulate matter which has be...

  16. Error Model and Compensation of Bell-Shaped Vibratory Gyro

    PubMed Central

    Su, Zhong; Liu, Ning; Li, Qing

    2015-01-01

    A bell-shaped vibratory angular velocity gyro (BVG), inspired by the Chinese traditional bell, is a type of axisymmetric shell resonator gyroscope. This paper focuses on the development of an error model and compensation method for the BVG. A dynamic equation is first established, based on a study of the BVG working mechanism. This equation is then used to evaluate the relationship between the angular rate output signal and the bell-shaped resonator characteristics, analyze the influence of the main error sources and set up an error model for the BVG. The error sources are classified according to their error propagation characteristics, and the compensation method is presented based on the error model. Finally, using the error model and compensation method, the BVG is calibrated experimentally, including rough compensation, temperature and bias compensation, scale factor compensation and noise filtering. The experimentally obtained bias instability improves from 20.5°/h to 4.7°/h, the random walk from 2.8°/h^(1/2) to 0.7°/h^(1/2), and the nonlinearity from 0.2% to 0.03%. Based on the error compensation, it is shown that there is a good linear relationship between the sensing signal and the angular velocity, suggesting that the BVG is a good candidate for the field of low and medium rotational speed measurement. PMID:26393593

  17. The pros and cons of code validation

    NASA Technical Reports Server (NTRS)

    Bobbitt, Percy J.

    1988-01-01

    Computational and wind tunnel error sources are examined and quantified using specific calculations of experimental data, and a substantial comparison of theoretical and experimental results, or code validation, is discussed. Wind tunnel error sources considered include wall interference, sting effects, Reynolds number effects, flow quality and transition, and instrumentation such as strain gage balances, electronically scanned pressure systems, hot film gages, hot wire anemometers, and laser velocimeters. Computational error sources include the math model equation sets, the solution algorithm, artificial viscosity/dissipation, boundary conditions, the uniqueness of solutions, grid resolution, turbulence modeling, and Reynolds number effects. It is concluded that, although improvements in theory are being made more quickly than in experiments, wind tunnel research has the advantage that the transition process in a free-transition test is more realistic than that given by a turbulence model.

  18. Image reduction pipeline for the detection of variable sources in highly crowded fields

    NASA Astrophysics Data System (ADS)

    Gössl, C. A.; Riffeser, A.

    2002-01-01

    We present a reduction pipeline for CCD (charge-coupled device) images which was built to search for variable sources in highly crowded fields like the M 31 bulge and to handle extensive databases due to large time series. We describe all steps of the standard reduction in detail with emphasis on the realisation of per-pixel error propagation: bias correction, treatment of bad pixels, flatfielding, and filtering of cosmic rays. The problems of conservation of the PSF (point spread function) and error propagation in our image alignment procedure as well as the detection algorithm for variable sources are discussed: we build difference images via image convolution with a technique called OIS (optimal image subtraction; Alard & Lupton 1998), proceed with an automatic detection of variable sources in noise-dominated images and finally apply a PSF-fitting, relative photometry to the sources found. For the WeCAPP project (Riffeser et al. 2001) we achieve 3-sigma detections for variable sources with an apparent brightness of e.g. m = 24.9 mag at their minimum and a variation of Delta m = 2.4 mag (or m = 21.9 mag brightness minimum and a variation of Delta m = 0.6 mag) on a background signal of 18.1 mag/arcsec^2, based on a 500 s exposure with 1.5 arcsec seeing at a 1.2 m telescope. The complete per-pixel error propagation allows us to give accurate errors for each measurement.
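
    As a rough illustration of per-pixel error propagation through the first reduction steps, the sketch below bias-corrects and flatfields a frame while carrying a variance plane, assuming Poisson noise on the bias-subtracted signal, Gaussian read noise and independent errors in the calibration frames; it is not the WeCAPP pipeline itself, and all parameter values are hypothetical.

    ```python
    import numpy as np

    def calibrate_with_errors(raw, bias, flat, read_noise, gain, var_bias, var_flat):
        """Bias-correct and flatfield a frame, propagating per-pixel variance.

        raw and bias are in ADU, gain in e-/ADU, read_noise in ADU.
        var(f/g) is approximated as var(f)/g^2 + f^2 var(g)/g^4 for independent f, g.
        """
        signal = raw - bias
        var_signal = np.clip(signal, 0, None) / gain + read_noise ** 2 + var_bias
        science = signal / flat
        var_science = var_signal / flat ** 2 + signal ** 2 * var_flat / flat ** 4
        return science, var_science

    # Hypothetical 4x4 frame with scalar calibration values
    rng = np.random.default_rng(1)
    raw = rng.poisson(1200.0, size=(4, 4)).astype(float)
    sci, var = calibrate_with_errors(raw, bias=300.0, flat=0.97,
                                     read_noise=5.0, gain=2.0,
                                     var_bias=4.0, var_flat=1e-4)
    ```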

  19. Orbit determination strategy and results for the Pioneer 10 Jupiter mission

    NASA Technical Reports Server (NTRS)

    Wong, S. K.; Lubeley, A. J.

    1974-01-01

    Pioneer 10 is the first earth-based vehicle to encounter Jupiter and occult its moon, Io. In contributing to the success of the mission, the Orbit Determination Group evaluated the effects of the dominant error sources on the spacecraft's computed orbit and devised an encounter strategy minimizing the effects of these error sources. The encounter results indicated that: (1) errors in the satellite model played a very important role in the accuracy of the computed orbit, (2) encounter strategy was sound, (3) all mission objectives were met, and (4) Jupiter-Saturn mission for Pioneer 11 is within the navigation capability.

  20. S-193 scatterometer transfer function analysis for data processing

    NASA Technical Reports Server (NTRS)

    Johnson, L.

    1974-01-01

    A mathematical model for converting raw data measurements of the S-193 scatterometer into processed values of the radar scattering coefficient is presented. The argument is based on an approximation derived from the radar equation and the actual operating principles of the S-193 scatterometer hardware. Possible error sources are inaccuracies in transmitted wavelength, range, antenna illumination integrals, and the instrument itself. The dominant source of error in the calculation of the scattering coefficient is the accuracy of the range. All other factors, with the possible exception of the illumination integral, are not considered to cause significant error in the calculation of the scattering coefficient.

  1. The GEOS Ozone Data Assimilation System: Specification of Error Statistics

    NASA Technical Reports Server (NTRS)

    Stajner, Ivanka; Riishojgaard, Lars Peter; Rood, Richard B.

    2000-01-01

    A global three-dimensional ozone data assimilation system has been developed at the Data Assimilation Office of the NASA/Goddard Space Flight Center. The Total Ozone Mapping Spectrometer (TOMS) total ozone and the Solar Backscatter Ultraviolet (SBUV or SBUV/2) partial ozone profile observations are assimilated. The assimilation, into an off-line ozone transport model, is done using the global Physical-space Statistical Analysis Scheme (PSAS). This system became operational in December 1999. A detailed description of the statistical analysis scheme, and in particular the forecast and observation error covariance models, is given. A new global anisotropic horizontal forecast error correlation model accounts for a varying distribution of observations with latitude. Correlations are largest in the zonal direction in the tropics, where data are sparse. The forecast error variance model is proportional to the ozone field. The forecast error covariance parameters were determined by maximum likelihood estimation. The error covariance models are validated using chi-squared statistics. The analyzed ozone fields in the winter of 1992 are validated against independent observations from ozonesondes and HALOE. There is better than 10% agreement between mean Halogen Occultation Experiment (HALOE) and analysis fields between 70 and 0.2 hPa. The global root-mean-square (RMS) difference between TOMS observed and forecast values is less than 4%. The global RMS difference between SBUV observed and analyzed ozone between 50 and 3 hPa is less than 15%.

  2. Error-dependent modulation of speech-induced auditory suppression for pitch-shifted voice feedback.

    PubMed

    Behroozmand, Roozbeh; Larson, Charles R

    2011-06-06

    The motor-driven predictions about expected sensory feedback (efference copies) have been proposed to play an important role in recognition of sensory consequences of self-produced motor actions. In the auditory system, this effect was suggested to result in suppression of sensory neural responses to self-produced voices that are predicted by the efference copies during vocal production in comparison with passive listening to the playback of the identical self-vocalizations. In the present study, event-related potentials (ERPs) were recorded in response to upward pitch shift stimuli (PSS) with five different magnitudes (0, +50, +100, +200 and +400 cents) at voice onset during active vocal production and passive listening to the playback. Results indicated that the suppression of the N1 component during vocal production was largest for unaltered voice feedback (PSS: 0 cents), became smaller as the magnitude of PSS increased to 200 cents, and was almost completely eliminated in response to 400 cents stimuli. Findings of the present study suggest that the brain utilizes the motor predictions (efference copies) to determine the source of incoming stimuli and maximally suppresses the auditory responses to unaltered feedback of self-vocalizations. The reduction of suppression for 50, 100 and 200 cents and its elimination for 400 cents pitch-shifted voice auditory feedback support the idea that motor-driven suppression of voice feedback leads to distinctly different sensory neural processing of self vs. non-self vocalizations. This characteristic may enable the audio-vocal system to more effectively detect and correct for unexpected errors in the feedback of self-produced voice pitch compared with externally-generated sounds.

  3. A Feasibility Study for Simultaneous Measurements of Water Vapor and Precipitation Parameters using a Three-frequency Radar

    NASA Technical Reports Server (NTRS)

    Meneghini, R.; Liao, L.; Tian, L.

    2005-01-01

    The radar return powers from a three-frequency radar, with center frequency at 22.235 GHz and upper and lower frequencies chosen with equal water vapor absorption coefficients, can be used to estimate water vapor density and parameters of the precipitation. A linear combination of differential measurements between the center and lower frequencies on the one hand and the upper and lower frequencies on the other provides an estimate of differential water vapor absorption. The coupling between the precipitation and water vapor estimates is generally weak but increases with bandwidth and the amount of non-Rayleigh scattering of the hydrometeors. The coupling leads to biases in the estimates of water vapor absorption that are related primarily to the phase state and the median mass diameter of the hydrometeors. For a down-looking radar, path-averaged estimates of water vapor absorption are possible under rain-free as well as raining conditions by using the surface returns at the three frequencies. Simulations of the water vapor attenuation retrieval show that the largest source of error typically arises from the variance in the measured radar return powers. Although the error can be mitigated by a combination of a high pulse repetition frequency, pulse compression, and averaging in range and time, the radar receiver must be stable over the averaging period. For fractional bandwidths of 20% or less, the potential exists for simultaneous measurements at the three frequencies with a single antenna and transceiver, thereby significantly reducing the cost and mass of the system.

  4. Approximating natural connectivity of scale-free networks based on largest eigenvalue

    NASA Astrophysics Data System (ADS)

    Tan, S.-Y.; Wu, J.; Li, M.-J.; Lu, X.

    2016-06-01

    It has recently been proposed that natural connectivity can be used to efficiently characterize the robustness of complex networks. The natural connectivity has an intuitive physical meaning and a simple mathematical formulation, which corresponds to an average eigenvalue calculated from the graph spectrum. However, for the scale-free network, a model close to many widely occurring real-world systems, it is difficult to obtain the spectrum analytically. In this article, we investigate the approximation of natural connectivity based on the largest eigenvalue in both random and correlated scale-free networks. It is demonstrated that the natural connectivity of scale-free networks can be dominated by the largest eigenvalue, which can be expressed asymptotically and analytically to approximate the natural connectivity with small errors. Then we show that the natural connectivity of random scale-free networks increases linearly with the average degree for a given scaling exponent and decreases monotonically with the scaling exponent for a given average degree. Moreover, it is found that, given the degree distribution, the more assortative a scale-free network is, the more robust it is. Experiments in real networks validate our methods and results.
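
    For readers who want to reproduce the comparison numerically, the sketch below evaluates the exact natural connectivity, ln((1/N) Σ exp(λ_i)), and its largest-eigenvalue approximation λ_1 − ln N on a synthetic scale-free graph; the Barabási-Albert parameters are arbitrary and only illustrate the idea.

    ```python
    import numpy as np
    import networkx as nx

    def natural_connectivity(G):
        """Exact natural connectivity: ln of the average eigenvalue exponential."""
        lam = np.linalg.eigvalsh(nx.to_numpy_array(G))
        # log-sum-exp for numerical stability
        return np.log(np.sum(np.exp(lam - lam.max()))) + lam.max() - np.log(len(G))

    def natural_connectivity_approx(G):
        """Largest-eigenvalue approximation: lambda_1 - ln N."""
        lam_max = np.linalg.eigvalsh(nx.to_numpy_array(G)).max()
        return lam_max - np.log(len(G))

    # Hypothetical scale-free network (Barabasi-Albert, N = 1000, m = 4)
    G = nx.barabasi_albert_graph(1000, 4, seed=1)
    print(natural_connectivity(G), natural_connectivity_approx(G))
    ```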

  5. Time-dependent gravity in Southern California, May 1974 to April 1979

    NASA Technical Reports Server (NTRS)

    Whitcomb, J. H.; Franzen, W. O.; Given, J. W.; Pechmann, J. C.; Ruff, L. J.

    1980-01-01

    The Southern California gravity survey, begun in May 1974 to obtain high spatial and temporal density gravity measurements to be coordinated with long-baseline three dimensional geodetic measurements of the Astronomical Radio Interferometric Earth Surveying project, is presented. Gravity data was obtained from 28 stations located in and near the seismically active San Gabriel section of the Southern California Transverse Ranges and adjoining San Andreas Fault at intervals of one to two months using gravity meters relative to a base station standard meter. A single-reading standard deviation of 11 microGal is obtained which leads to a relative deviation of 16 microGal between stations, with data averaging reducing the standard error to 2 to 3 microGal. The largest gravity variations observed are found to correlate with nearby well water variations and smoothed rainfall levels, indicating the importance of ground water variations to gravity measurements. The largest earthquake to occur during the survey, which extended to April, 1979, is found to be accompanied in the station closest to the earthquake by the largest measured gravity changes that cannot be related to factors other than tectonic distortion.

  6. Impact and quantification of the sources of error in DNA pooling designs.

    PubMed

    Jawaid, A; Sham, P

    2009-01-01

    The analysis of genome wide variation offers the possibility of unravelling the genes involved in the pathogenesis of disease. Genome wide association studies are also particularly useful for identifying and validating targets for therapeutic intervention as well as for detecting markers for drug efficacy and side effects. The cost of such large-scale genetic association studies may be reduced substantially by the analysis of pooled DNA from multiple individuals. However, experimental errors inherent in pooling studies lead to a potential increase in the false positive rate and a loss in power compared to individual genotyping. Here we quantify various sources of experimental error using empirical data from typical pooling experiments and corresponding individual genotyping counts using two statistical methods. We provide analytical formulas for calculating these different errors in the absence of complete information, such as replicate pool formation, and for adjusting for the errors in the statistical analysis. We demonstrate that DNA pooling has the potential of estimating allele frequencies accurately, and adjusting the pooled allele frequency estimates for differential allelic amplification considerably improves accuracy. Estimates of the components of error show that differential allelic amplification is the most important contributor to the error variance in absolute allele frequency estimation, followed by allele frequency measurement and pool formation errors. Our results emphasise the importance of minimising experimental errors and obtaining correct error estimates in genetic association studies.
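
    One widely used adjustment for differential allelic amplification of the kind described above rescales the pooled signal of one allele by the mean allele ratio observed in known heterozygotes; the sketch below shows that correction with made-up peak heights and is only an illustration of the principle, not the authors' exact procedure.

    ```python
    import numpy as np

    def corrected_pool_frequency(pool_a, pool_b, het_a, het_b):
        """Pooled allele-frequency estimate corrected for unequal amplification.

        k is the mean A/B signal ratio in individual heterozygotes (true ratio 1:1)
        and is used to rescale the B signal measured on the pool.
        """
        k = np.mean(np.asarray(het_a, float) / np.asarray(het_b, float))
        return pool_a / (pool_a + k * pool_b)

    # Hypothetical peak heights: one pool and three heterozygous controls
    print(corrected_pool_frequency(5200.0, 3100.0,
                                   [980, 1010, 1050], [1220, 1190, 1260]))
    ```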

  7. Long-term particulate matter modeling for health effect studies in California - Part 2: Concentrations and sources of ultrafine organic aerosols

    NASA Astrophysics Data System (ADS)

    Hu, Jianlin; Jathar, Shantanu; Zhang, Hongliang; Ying, Qi; Chen, Shu-Hua; Cappa, Christopher D.; Kleeman, Michael J.

    2017-04-01

    Organic aerosol (OA) is a major constituent of ultrafine particulate matter (PM0.1). Recent epidemiological studies have identified associations between PM0.1 OA and premature mortality and low birth weight. In this study, the source-oriented UCD/CIT model was used to simulate the concentrations and sources of primary organic aerosols (POA) and secondary organic aerosols (SOA) in PM0.1 in California for a 9-year (2000-2008) modeling period with 4 km horizontal resolution to provide more insights about PM0.1 OA for health effect studies. As a related quality control, predicted monthly average concentrations of fine particulate matter (PM2.5) total organic carbon at six major urban sites had mean fractional bias of -0.31 to 0.19 and mean fractional errors of 0.4 to 0.59. The predicted ratio of PM2.5 SOA/OA was lower than estimates derived from chemical mass balance (CMB) calculations by a factor of 2-3, which suggests the potential effects of processes such as POA volatility, additional SOA formation mechanism, and missing sources. OA in PM0.1, the focus size fraction of this study, is dominated by POA. Wood smoke is found to be the single biggest source of PM0.1 OA in winter in California, while meat cooking, mobile emissions (gasoline and diesel engines), and other anthropogenic sources (mainly solvent usage and waste disposal) are the most important sources in summer. Biogenic emissions are predicted to be the largest PM0.1 SOA source, followed by mobile sources and other anthropogenic sources, but these rankings are sensitive to the SOA model used in the calculation. Air pollution control programs aiming to reduce the PM0.1 OA concentrations should consider controlling solvent usage, waste disposal, and mobile emissions in California, but these findings should be revisited after the latest science is incorporated into the SOA exposure calculations. The spatial distributions of SOA associated with different sources are not sensitive to the choice of SOA model, although the absolute amount of SOA can change significantly. Therefore, the spatial distributions of PM0.1 POA and SOA over the 9-year study period provide useful information for epidemiological studies to further investigate the associations with health outcomes.
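
    The mean fractional bias and mean fractional error statistics quoted above follow standard definitions; the short sketch below computes them for a hypothetical set of monthly-average concentrations (the numbers are illustrative, not from the study).

    ```python
    import numpy as np

    def mean_fractional_stats(model, obs):
        """Mean fractional bias (MFB) and mean fractional error (MFE).

        MFB = (2/N) * sum((M - O) / (M + O)); MFE uses |M - O| instead.
        Returned as fractions (multiply by 100 for percent).
        """
        m = np.asarray(model, float)
        o = np.asarray(obs, float)
        frac = 2.0 * (m - o) / (m + o)
        return frac.mean(), np.abs(frac).mean()

    # Hypothetical monthly-average OC concentrations (ug C/m3) at one site
    print(mean_fractional_stats([3.1, 2.4, 4.0], [3.5, 2.9, 3.2]))
    ```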

  8. Optimizing dynamic downscaling in one-way nesting using a regional ocean model

    NASA Astrophysics Data System (ADS)

    Pham, Van Sy; Hwang, Jin Hwan; Ku, Hyeyun

    2016-10-01

    Dynamical downscaling with nested regional oceanographic models has been demonstrated to be an effective approach both for operationally forecasting sea weather on regional scales and for projecting future climate change and its impact on the ocean. However, when nesting procedures are carried out in dynamic downscaling from a larger-scale model or set of observations to a smaller scale, errors are unavoidable due to the differences in grid sizes and updating intervals. The present work assesses the impact of errors produced by nesting procedures on the downscaled results from Ocean Regional Circulation Models (ORCMs). Errors are identified and evaluated based on their sources and characteristics by employing the Big-Brother Experiment (BBE). The BBE uses the same model to produce both the nesting and the nested simulations, so it addresses the error sources separately (i.e., without combining the contributions of errors from different sources). Here, we focus on errors resulting from differences in spatial grids, updating times and domain sizes. After the BBE was run separately for diverse cases, a Taylor diagram was used to analyze the results and recommend an optimal combination of grid size, updating period and domain size. Finally, the suggested setups for the downscaling were evaluated by examining the spatial correlations of variables and the relative magnitudes of variances between the nested model and the original data.
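
    The Taylor-diagram comparison mentioned above rests on three summary statistics of a nested field against the big-brother reference; a minimal sketch of their computation is given below, with array names and values chosen for illustration only.

    ```python
    import numpy as np

    def taylor_stats(nested, reference):
        """Correlation, standard-deviation ratio and centered RMS difference.

        These are the quantities plotted on a Taylor diagram when comparing a
        nested (downscaled) field against the big-brother reference field.
        """
        x = np.asarray(nested, float).ravel()
        r = np.asarray(reference, float).ravel()
        corr = np.corrcoef(x, r)[0, 1]
        std_ratio = x.std() / r.std()
        crmsd = np.sqrt(np.mean(((x - x.mean()) - (r - r.mean())) ** 2))
        return corr, std_ratio, crmsd

    # Hypothetical sea-surface temperature fields on the same grid
    rng = np.random.default_rng(2)
    big_brother = rng.normal(20.0, 1.5, size=(50, 60))
    little_brother = big_brother + rng.normal(0.0, 0.5, size=(50, 60))
    print(taylor_stats(little_brother, big_brother))
    ```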

  9. Error Sources in Proccessing LIDAR Based Bridge Inspection

    NASA Astrophysics Data System (ADS)

    Bian, H.; Chen, S. E.; Liu, W.

    2017-09-01

    Bridge inspection is a critical task in infrastructure management and is facing unprecedented challenges after a series of bridge failures. The prevailing visual inspection is insufficient for providing reliable and quantitative bridge information, although a systematic quality management framework was built to ensure visual bridge inspection data quality and to minimize errors during the inspection process. LiDAR-based remote sensing is recommended as an effective tool for overcoming some of the disadvantages of visual inspection. In order to evaluate the potential of applying this technology to bridge inspection, some of the error sources in LiDAR-based bridge inspection are analysed. The scanning angle variance in field data collection and the different algorithm designs in scanning data processing are found to be factors that introduce errors into inspection results. Besides studying the error sources, advanced consideration should be given to improving inspection data quality, and statistical analysis might be employed in the future to evaluate the inspection operation process, which contains a series of uncertain factors. Overall, the development of a reliable bridge inspection system requires not only the improvement of data processing algorithms, but also systematic considerations to mitigate possible errors in the entire inspection workflow. If LiDAR or some other technology can be accepted as a supplement to visual inspection, the current quality management framework will need to be modified or redesigned, and this is as urgent as the refinement of inspection techniques.

  10. Identification of 'Point A' as the prevalent source of error in cephalometric analysis of lateral radiographs.

    PubMed

    Grogger, P; Sacher, C; Weber, S; Millesi, G; Seemann, R

    2018-04-10

    Deviations in measuring dentofacial components in a lateral X-ray represent a major hurdle in the subsequent treatment of dysgnathic patients. In a retrospective study, we investigated the most prevalent source of error in the following commonly used cephalometric measurements: the angles Sella-Nasion-Point A (SNA), Sella-Nasion-Point B (SNB) and Point A-Nasion-Point B (ANB); the Wits appraisal; the anteroposterior dysplasia indicator (APDI); and the overbite depth indicator (ODI). Preoperative lateral radiographic images of patients with dentofacial deformities were collected and the landmarks digitally traced by three independent raters. Cephalometric analysis was automatically performed based on 1116 tracings. Error analysis identified the x-coordinate of Point A as the prevalent source of error in all investigated measurements, except SNB, in which it is not incorporated. In SNB, the y-coordinate of Nasion predominated error variance. SNB showed lowest inter-rater variation. In addition, our observations confirmed previous studies showing that landmark identification variance follows characteristic error envelopes in the highest number of tracings analysed up to now. Variance orthogonal to defining planes was of relevance, while variance parallel to planes was not. Taking these findings into account, orthognathic surgeons as well as orthodontists would be able to perform cephalometry more accurately and accomplish better therapeutic results. Copyright © 2018 International Association of Oral and Maxillofacial Surgeons. Published by Elsevier Ltd. All rights reserved.
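
    To make the role of the Point A x-coordinate concrete, the sketch below computes SNA, SNB and ANB from 2D landmark coordinates and perturbs Point A horizontally to mimic a tracing error; the landmark positions are invented for illustration and are not taken from the study.

    ```python
    import numpy as np

    def angle_at(vertex, p1, p2):
        """Angle in degrees at `vertex` between the rays toward p1 and p2."""
        v1 = np.asarray(p1, float) - np.asarray(vertex, float)
        v2 = np.asarray(p2, float) - np.asarray(vertex, float)
        c = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
        return np.degrees(np.arccos(np.clip(c, -1.0, 1.0)))

    # Hypothetical landmark coordinates (mm) traced on a lateral cephalogram
    S, N = (0.0, 0.0), (70.0, 5.0)
    A, B = (68.0, -45.0), (64.0, -75.0)

    sna, snb = angle_at(N, S, A), angle_at(N, S, B)
    anb = sna - snb
    # Shift the x-coordinate of Point A by 2 mm to mimic identification error
    sna_shifted = angle_at(N, S, (A[0] + 2.0, A[1]))
    print(sna, snb, anb, sna_shifted - sna)
    ```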

  11. Typing Style and the Use of Different Sources of Information during Typing: An Investigation Using Self-Reports

    PubMed Central

    Rieger, Martina; Bart, Victoria K. E.

    2016-01-01

    We investigated to what extent different sources of information are used in typing on a computer keyboard. Using self-reports 10 finger typists and idiosyncratic typists estimated how much attention they pay to different sources of information during copy typing and free typing and how much they use them for error detection. 10 finger typists reported less attention to the keyboard and the fingers and more attention to the template and the screen than idiosyncratic typists. The groups did not differ in attention to touch/kinaesthesis in copy typing and free typing, but 10 finger typists reported more use of touch/kinaesthesis in error detection. This indicates that processing of tactile/kinaesthetic information may occur largely outside conscious control, as long as no errors occur. 10 finger typists reported more use of internal prediction of movement consequences for error detection than idiosyncratic typists, reflecting more precise internal models. Further in copy typing compared to free typing attention to the template is required, thus leaving less attentional capacity for other sources of information. Correlations showed that higher skilled typists, regardless of typing style, rely more on sources of information which are usually associated with 10 finger typing. One limitation of the study is that only self-reports were used. We conclude that typing task, typing proficiency, and typing style influence how attention is distributed during typing. PMID:28018256

  12. Typing Style and the Use of Different Sources of Information during Typing: An Investigation Using Self-Reports.

    PubMed

    Rieger, Martina; Bart, Victoria K E

    2016-01-01

    We investigated to what extent different sources of information are used in typing on a computer keyboard. Using self-reports 10 finger typists and idiosyncratic typists estimated how much attention they pay to different sources of information during copy typing and free typing and how much they use them for error detection. 10 finger typists reported less attention to the keyboard and the fingers and more attention to the template and the screen than idiosyncratic typists. The groups did not differ in attention to touch/kinaesthesis in copy typing and free typing, but 10 finger typists reported more use of touch/kinaesthesis in error detection. This indicates that processing of tactile/kinaesthetic information may occur largely outside conscious control, as long as no errors occur. 10 finger typists reported more use of internal prediction of movement consequences for error detection than idiosyncratic typists, reflecting more precise internal models. Further in copy typing compared to free typing attention to the template is required, thus leaving less attentional capacity for other sources of information. Correlations showed that higher skilled typists, regardless of typing style, rely more on sources of information which are usually associated with 10 finger typing. One limitation of the study is that only self-reports were used. We conclude that typing task, typing proficiency, and typing style influence how attention is distributed during typing.

  13. The Robustness of Acoustic Analogies

    NASA Technical Reports Server (NTRS)

    Freund, J. B.; Lele, S. K.; Wei, M.

    2004-01-01

    Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0 into a nominal sound source S(q) and a sound propagation operator L such that L(q) = S(q), where q denotes the vector of flow variables. In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.
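
    As a reference point for the "uniform base flow" case mentioned above, the classical Lighthill form of the rearrangement L(q) = S(q) can be written as below; this is the standard textbook expression, shown for orientation rather than quoted from the paper.

    ```latex
    % Lighthill's analogy: wave operator on the left, quadrupole source on the right
    \left(\frac{\partial^{2}}{\partial t^{2}} - c_{0}^{2}\nabla^{2}\right)\rho'
      = \frac{\partial^{2} T_{ij}}{\partial x_{i}\,\partial x_{j}},
    \qquad
    T_{ij} = \rho u_{i} u_{j} + \left[(p - p_{0}) - c_{0}^{2}(\rho - \rho_{0})\right]\delta_{ij} - \tau_{ij}
    ```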

  14. The Public Understanding of Error in Educational Assessment

    ERIC Educational Resources Information Center

    Gardner, John

    2013-01-01

    Evidence from recent research suggests that in the UK the public perception of errors in national examinations is that they are simply mistakes; events that are preventable. This perception predominates over the more sophisticated technical view that errors arise from many sources and create an inevitable variability in assessment outcomes. The…

  15. An Analysis of Errors in Written English Sentences: A Case Study of Thai EFL Students

    ERIC Educational Resources Information Center

    Sermsook, Kanyakorn; Liamnimit, Jiraporn; Pochakorn, Rattaneekorn

    2017-01-01

    The purposes of the present study were to examine the language errors in a writing of English major students in a Thai university and to explore the sources of the errors. This study focused mainly on sentences because the researcher found that errors in Thai EFL students' sentence construction may lead to miscommunication. 104 pieces of writing…

  16. Unavoidable Errors: A Spatio-Temporal Analysis of Time-Course and Neural Sources of Evoked Potentials Associated with Error Processing in a Speeded Task

    ERIC Educational Resources Information Center

    Vocat, Roland; Pourtois, Gilles; Vuilleumier, Patrik

    2008-01-01

    The detection of errors is known to be associated with two successive neurophysiological components in EEG, with an early time-course following motor execution: the error-related negativity (ERN/Ne) and late positivity (Pe). The exact cognitive and physiological processes contributing to these two EEG components, as well as their functional…

  17. Medical Errors and Barriers to Reporting in Ten Hospitals in Southern Iran

    PubMed Central

    Khammarnia, Mohammad; Ravangard, Ramin; Barfar, Eshagh; Setoodehzadeh, Fatemeh

    2015-01-01

    Background: International research shows that medical errors (MEs) are a major threat to patient safety. The present study aimed to describe MEs and barriers to reporting them in Shiraz public hospitals, Iran. Methods: A cross-sectional, retrospective study was conducted in 10 Shiraz public hospitals in the south of Iran in 2013. Using the standardised checklist of Shiraz University of Medical Sciences (referred to the Clinical Governance Department and recorded documentations) and the Uribe questionnaire, we gathered the data in the hospitals. Results: A total of 4379 MEs were recorded in 10 hospitals. The highest frequency (27.1%) was related to systematic errors. Besides, most of the errors had occurred in the largest hospital (54.9%), internal wards (36.3%), and morning shifts (55.0%). The results revealed a significant association between the MEs and wards and hospitals (p < 0.001). Moreover, individual and organisational factors were the barriers to reporting MEs in the studied hospitals. Also, a significant correlation was observed between the ME reporting barriers and the participants’ job experiences (p < 0.001). Conclusion: Medical errors were highly frequent in the studied hospitals, especially in the larger hospitals, morning shifts and nursing practice. Moreover, individual and organisational factors were considered as the barriers to reporting MEs. PMID:28729811

  18. Error and its meaning in forensic science.

    PubMed

    Christensen, Angi M; Crowder, Christian M; Ousley, Stephen D; Houck, Max M

    2014-01-01

    The discussion of "error" has gained momentum in forensic science in the wake of the Daubert guidelines and has intensified with the National Academy of Sciences' Report. Error has many different meanings, and too often, forensic practitioners themselves as well as the courts misunderstand scientific error and statistical error rates, often confusing them with practitioner error (or mistakes). Here, we present an overview of these concepts as they pertain to forensic science applications, discussing the difference between practitioner error (including mistakes), instrument error, statistical error, and method error. We urge forensic practitioners to ensure that potential sources of error and method limitations are understood and clearly communicated and advocate that the legal community be informed regarding the differences between interobserver errors, uncertainty, variation, and mistakes. © 2013 American Academy of Forensic Sciences.

  19. The orthopaedic error index: development and application of a novel national indicator for assessing the relative safety of hospital care using a cross-sectional approach.

    PubMed

    Panesar, Sukhmeet S; Netuveli, Gopalakrishnan; Carson-Stevens, Andrew; Javad, Sundas; Patel, Bhavesh; Parry, Gareth; Donaldson, Liam J; Sheikh, Aziz

    2013-11-21

    The Orthopaedic Error Index for hospitals aims to provide the first national assessment of the relative safety of provision of orthopaedic surgery. Cross-sectional study (retrospective analysis of records in a database). The National Reporting and Learning System is the largest national repository of patient-safety incidents in the world with over eight million error reports. It offers a unique opportunity to develop novel approaches to enhancing patient safety, including investigating the relative safety of different healthcare providers and specialties. We extracted all orthopaedic error reports from the system over 1 year (2009-2010). The Orthopaedic Error Index was calculated as a sum of the error propensity and severity. All relevant hospitals offering orthopaedic surgery in England were then ranked by this metric to identify possible outliers that warrant further attention. 155 hospitals reported 48 971 orthopaedic-related patient-safety incidents. The mean Orthopaedic Error Index was 7.09/year (SD 2.72); five hospitals were identified as outliers. Three of these units were specialist tertiary hospitals carrying out complex surgery; the remaining two outlier hospitals had unusually high Orthopaedic Error Indexes: mean 14.46 (SD 0.29) and 15.29 (SD 0.51), respectively. The Orthopaedic Error Index has enabled identification of hospitals that may be putting patients at disproportionate risk of orthopaedic-related iatrogenic harm and which therefore warrant further investigation. It provides the prototype of a summary index of harm to enable surveillance of unsafe care over time across institutions. Further validation and scrutiny of the method will be required to assess its potential to be extended to other hospital specialties in the UK and also internationally to other health systems that have comparable national databases of patient-safety incidents.

  20. Analysis and optimization of cyclic methods in orbit computation

    NASA Technical Reports Server (NTRS)

    Pierce, S.

    1973-01-01

    The mathematical analysis and computation of the K=3, order 4; K=4, order 6; and K=5, order 7 cyclic methods and the K=5, order 6 Cowell method and some results of optimizing the 3 backpoint cyclic multistep methods for solving ordinary differential equations are presented. Cyclic methods have the advantage over traditional methods of having higher order for a given number of backpoints while at the same time having more free parameters. After considering several error sources the primary source for the cyclic methods has been isolated. The free parameters for three backpoint methods were used to minimize the effects of some of these error sources. They now yield more accuracy with the same computing time as Cowell's method on selected problems. This work is being extended to the five backpoint methods. The analysis and optimization are more difficult here since the matrices are larger and the dimension of the optimizing space is larger. Indications are that the primary error source can be reduced. This will still leave several parameters free to minimize other sources.
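    To illustrate what a "backpoint" method is, the sketch below implements the classical 2-step Adams-Bashforth formula, which reuses one previously computed derivative value. The cyclic K=3 to K=5 methods analyzed in the report use more backpoints and cycle through several formulas; they are not reproduced here, and this is only a generic illustration of the multistep idea.

```python
# Minimal sketch of a linear multistep ("backpoint") integrator: the 2-step
# Adams-Bashforth method, y_{n+1} = y_n + h*(3/2 f_n - 1/2 f_{n-1}).
def adams_bashforth2(f, t0, y0, h, n_steps):
    # bootstrap the first step with Euler so that two backpoints exist
    ts, ys = [t0], [y0]
    ys.append(y0 + h * f(t0, y0))
    ts.append(t0 + h)
    for i in range(1, n_steps):
        y_next = ys[i] + h * (1.5 * f(ts[i], ys[i]) - 0.5 * f(ts[i - 1], ys[i - 1]))
        ys.append(y_next)
        ts.append(ts[i] + h)
    return ts, ys

# Example: dy/dt = -y, y(0) = 1, integrated to t = 1
ts, ys = adams_bashforth2(lambda t, y: -y, 0.0, 1.0, 0.01, 100)
print(ys[-1])  # ~ exp(-1) = 0.3679
```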

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gonçalves, Fabio; Treuhaft, Robert; Law, Beverly

    Mapping and monitoring of forest carbon stocks across large areas in the tropics will necessarily rely on remote sensing approaches, which in turn depend on field estimates of biomass for calibration and validation purposes. Here, we used field plot data collected in a tropical moist forest in the central Amazon to gain a better understanding of the uncertainty associated with plot-level biomass estimates obtained specifically for the calibration of remote sensing measurements. In addition to accounting for sources of error that would be normally expected in conventional biomass estimates (e.g., measurement and allometric errors), we examined two sources of uncertainty that are specific to the calibration process and should be taken into account in most remote sensing studies: the error resulting from spatial disagreement between field and remote sensing measurements (i.e., co-location error), and the error introduced when accounting for temporal differences in data acquisition. We found that the overall uncertainty in the field biomass was typically 25% for both secondary and primary forests, but ranged from 16 to 53%. Co-location and temporal errors accounted for a large fraction of the total variance (>65%) and were identified as important targets for reducing uncertainty in studies relating tropical forest biomass to remotely sensed data. Although measurement and allometric errors were relatively unimportant when considered alone, combined they accounted for roughly 30% of the total variance on average and should not be ignored. Lastly, our results suggest that a thorough understanding of the sources of error associated with field-measured plot-level biomass estimates in tropical forests is critical to determine confidence in remote sensing estimates of carbon stocks and fluxes, and to develop strategies for reducing the overall uncertainty of remote sensing approaches.
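    A minimal sketch of how independent plot-level error sources can be combined in quadrature and each source's share of the total variance reported, as described above. The percentages used are placeholders, not the study's estimates.

```python
import math

# Combine independent plot-level error sources in quadrature and report each
# source's share of the total variance. The relative errors below are
# placeholders for illustration, not values estimated in the study.
sources = {
    "measurement": 5.0,   # relative error, % of plot biomass (assumed)
    "allometric": 10.0,
    "co-location": 15.0,
    "temporal": 12.0,
}

total_var = sum(e**2 for e in sources.values())
total_err = math.sqrt(total_var)
print(f"total relative error: {total_err:.1f}%")
for name, e in sources.items():
    print(f"  {name:12s} contributes {100 * e**2 / total_var:.0f}% of the variance")
```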

  2. Immobilization precision of a modified GTC frame.

    PubMed

    Winey, Brian; Daartz, Juliane; Dankers, Frank; Bussière, Marc

    2012-05-10

    The purpose of this study was to evaluate and quantify the interfraction reproducibility and intrafraction immobilization precision of a modified GTC frame. The errors of the patient alignment and imaging systems were measured using a cranial skull phantom, with simulated, predetermined shifts. The kV setup images were acquired with a room-mounted set of kV sources and panels. Calculated translations and rotations provided by the computer alignment software relying upon three implanted fiducials were compared to the known shifts, and the accuracy of the imaging and positioning systems was calculated. Orthogonal kV setup images for 45 proton SRT patients and 1002 fractions (average 22.3 fractions/patient) were analyzed for interfraction and intrafraction immobilization precision using a modified GTC frame. The modified frame employs a radiotransparent carbon cup and molded pillow to allow for more treatment angles from posterior directions for cranial lesions. Patients and the phantom were aligned with three 1.5 mm stainless steel fiducials implanted into the skull. The accuracy and variance of the patient positioning and imaging systems were measured to be 0.10 ± 0.06 mm, with the maximum uncertainty of rotation being ±0.07°. 957 pairs of interfraction image sets and 974 intrafraction image sets were analyzed. 3D translations and rotations were recorded. The 3D vector interfraction setup reproducibility was 0.13 ± 1.8 mm for translations, with the largest rotational uncertainty being ±1.07°. The intrafraction immobilization efficacy was 0.19 ± 0.66 mm for translations, with the largest rotational uncertainty being ±0.50°. The modified GTC frame provides reproducible setup and effective intrafraction immobilization, while allowing for the complete range of entrance angles from the posterior direction.

  3. Estimated withdrawals and use of freshwater in New Hampshire, 1990

    USGS Publications Warehouse

    Medalie, Laura; Horn, M.A.

    1994-01-01

    Estimated freshwater withdrawals during 1990 in New Hampshire totaled about 422 million gallons per day from ground-water and surface-water sources. The largest withdrawals were for thermoelectric-power generation (60 percent), public supply (23 percent), and industrial use (9 percent). Most withdrawals, 358 million gallons per day, were made from surface-water sources, as compared to 63.7 million gallons per day from ground-water sources. The largest withdrawals were in the Merrimack River basin (322 million gallons per day). An additional 46,000 million gallons per day was used instream for hydroelectric-power generation, primarily in the Upper Androscoggin and Upper Connecticut River subbasins. Other information describing water-use patterns is shown in tables, bar graphs, pie charts, maps, and accompanying text. The data are aggregated by river basin (hydrologic cataloging unit), and all values are reported in million gallons per day.

  4. Imaging phased telescope array study

    NASA Technical Reports Server (NTRS)

    Harvey, James E.

    1989-01-01

    The problems encountered in obtaining a wide field-of-view with large, space-based direct imaging phased telescope arrays were considered. After defining some of the critical systems issues, previous relevant work in the literature was reviewed and summarized. An extensive list was made of potential error sources, and the error sources were categorized in the form of an error budget tree including optical design errors, optical fabrication errors, assembly and alignment errors, and environmental errors. After choosing a top level image quality requirement as a goal, a preliminary tops-down error budget allocation was performed; then, based upon engineering experience, detailed analysis, or data from the literature, a bottoms-up error budget reallocation was performed in an attempt to achieve an equitable distribution of difficulty in satisfying the various allocations. This exercise provided a realistic allocation for residual off-axis optical design errors in the presence of state-of-the-art optical fabrication and alignment errors. Three different computational techniques were developed for computing the image degradation of phased telescope arrays due to aberrations of the individual telescopes. Parametric studies and sensitivity analyses were then performed for a variety of subaperture configurations and telescope design parameters in an attempt to determine how the off-axis performance of a phased telescope array varies as the telescopes are scaled up in size. The Air Force Weapons Laboratory (AFWL) multipurpose telescope testbed (MMTT) configuration was analyzed in detail with regard to image degradation due to field curvature and distortion of the individual telescopes as they are scaled up in size.

  5. A Preliminary ZEUS Lightning Location Error Analysis Using a Modified Retrieval Theory

    NASA Technical Reports Server (NTRS)

    Elander, Valjean; Koshak, William; Phanord, Dieudonne

    2004-01-01

    The ZEUS long-range VLF arrival time difference lightning detection network now covers both Europe and Africa, and there are plans for further expansion into the western hemisphere. In order to fully optimize and assess ZEUS lightning location retrieval errors and to determine the best placement of future receivers expected to be added to the network, a software package is being developed jointly between the NASA Marshall Space Flight Center (MSFC) and the University of Nevada Las Vegas (UNLV). The software package, called the ZEUS Error Analysis for Lightning (ZEAL), will be used to obtain global scale lightning location retrieval error maps using both a Monte Carlo approach and chi-squared curvature matrix theory. At the core of ZEAL will be an implementation of an Iterative Oblate (IO) lightning location retrieval method recently developed at MSFC. The IO method will be appropriately modified to account for variable wave propagation speed, and the new retrieval results will be compared with the current ZEUS retrieval algorithm to assess potential improvements. In this preliminary ZEAL work effort, we defined 5000 source locations evenly distributed across the Earth. We then used the existing (as well as potential future ZEUS sites) to simulate arrival time data between source and ZEUS site. A total of 100 sources were considered at each of the 5000 locations, and timing errors were selected from a normal distribution having a mean of 0 seconds and a standard deviation of 20 microseconds. This simulated "noisy" dataset was analyzed using the IO algorithm to estimate source locations. The exact locations were compared with the retrieved locations, and the results are summarized via several color-coded "error maps."
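    A minimal sketch of the Monte Carlo noise model described above: Gaussian timing errors (mean 0, standard deviation 20 microseconds) added to simulated arrival times, 100 trials per source location. The receiver layout and propagation model are heavily simplified, and the Iterative Oblate retrieval itself is not reproduced; all coordinates below are illustrative.

```python
import numpy as np

# Monte Carlo noise model: true source-receiver travel times plus Gaussian
# timing noise (sigma = 20 microseconds), 100 trials per source location.
# A flat-Earth, constant-speed toy geometry is assumed for illustration.
rng = np.random.default_rng(0)
C = 3.0e8        # propagation speed, m/s (simplified: constant)
sigma_t = 20e-6  # timing error, seconds

def noisy_arrival_times(src, receivers, n_trials=100):
    """Return an (n_trials, n_receivers) array of noisy arrival times."""
    dists = np.linalg.norm(receivers - src, axis=1)
    t_true = dists / C
    noise = rng.normal(0.0, sigma_t, size=(n_trials, len(receivers)))
    return t_true + noise

receivers = np.array([[0.0, 0.0], [4.0e6, 0.0], [0.0, 4.0e6]])  # toy planar layout, metres
times = noisy_arrival_times(np.array([1.0e6, 2.0e6]), receivers)
print(times.shape)  # (100, 3); arrival-time differences follow by subtracting columns
```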

  6. Intuitive theories of information: beliefs about the value of redundancy.

    PubMed

    Soll, J B

    1999-03-01

    In many situations, quantity estimates from multiple experts or diagnostic instruments must be collected and combined. Normatively, and all else equal, one should value information sources that are nonredundant, in the sense that correlation in forecast errors should be minimized. Past research on the preference for redundancy has been inconclusive. While some studies have suggested that people correctly place higher value on uncorrelated inputs when collecting estimates, others have shown that people either ignore correlation or, in some cases, even prefer it. The present experiments show that the preference for redundancy depends on one's intuitive theory of information. The most common intuitive theory identified is the Error Tradeoff Model (ETM), which explicitly distinguishes between measurement error and bias. According to ETM, measurement error can only be averaged out by consulting the same source multiple times (normatively false), and bias can only be averaged out by consulting different sources (normatively true). As a result, ETM leads people to prefer redundant estimates when the ratio of measurement error to bias is relatively high. Other participants favored different theories. Some adopted the normative model, while others were reluctant to mathematically average estimates from different sources in any circumstance. In a post hoc analysis, science majors were more likely than others to subscribe to the normative model. While tentative, this result lends insight into how intuitive theories might develop and also has potential ramifications for how statistical concepts such as correlation might best be learned and internalized. Copyright 1999 Academic Press.

  7. Pointing error analysis of Risley-prism-based beam steering system.

    PubMed

    Zhou, Yuan; Lu, Yafei; Hei, Mo; Liu, Guangcan; Fan, Dapeng

    2014-09-01

    Based on the vector form of Snell's law, ray tracing is performed to quantify the pointing errors of Risley-prism-based beam steering systems induced by component errors, prism orientation errors, and assembly errors. Case examples are given to elucidate the pointing error distributions in the field of regard and evaluate the allowances of the error sources for a given pointing accuracy. It is found that assembly errors of the second prism result in larger pointing errors than those of the first. The pointing errors induced by prism tilt depend on the tilt direction. The allowances of bearing tilt and prism tilt are almost identical if the same pointing accuracy is planned. All conclusions can provide a theoretical foundation for practical work.
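    For reference, the sketch below shows the standard vector form of Snell's law on which such a ray trace is built; it covers the refraction formula only, not the Risley-prism geometry or the error perturbations analyzed in the paper.

```python
import numpy as np

# Vector form of Snell's law: refract a unit ray direction d at a surface with
# unit normal n (pointing against the incident ray), going from index n1 to n2.
def refract(d, n, n1, n2):
    d = d / np.linalg.norm(d)
    n = n / np.linalg.norm(n)
    mu = n1 / n2
    cos_i = -np.dot(n, d)
    sin2_t = mu**2 * (1.0 - cos_i**2)
    if sin2_t > 1.0:
        return None  # total internal reflection
    cos_t = np.sqrt(1.0 - sin2_t)
    return mu * d + (mu * cos_i - cos_t) * n

# Example: ray hitting a flat surface at 30 degrees incidence, air -> glass
d = np.array([np.sin(np.radians(30)), 0.0, -np.cos(np.radians(30))])
print(refract(d, np.array([0.0, 0.0, 1.0]), 1.0, 1.5))
```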

  8. A recent Cleanroom success story: The Redwing project

    NASA Technical Reports Server (NTRS)

    Hausler, Philip A.

    1992-01-01

    Redwing is the largest completed Cleanroom software engineering project in IBM, both in terms of lines of code and project staffing. The product provides a decision-support facility that utilizes artificial intelligence (AI) technology for predicting and preventing complex operating problems in an MVS environment. The project used the Cleanroom process for development and realized a defect rate of 2.6 errors/KLOC, measured from first execution. This represents the total amount of errors that were found in testing and installation at three field test sites. Development productivity was 486 LOC/PM, which included all development labor expended in design specification through completion of incremental testing. In short, the Redwing team produced a complex systems software product with an extraordinarily low error rate, while maintaining high productivity. All of this was accomplished by a project team using Cleanroom for the first time. An 'introductory implementation' of Cleanroom was defined and used on Redwing. This paper describes the quality and productivity results, the Redwing project, and how Cleanroom was implemented.

  9. Analysis/forecast experiments with a flow-dependent correlation function using FGGE data

    NASA Technical Reports Server (NTRS)

    Baker, W. E.; Bloom, S. C.; Carus, H.; Nestler, M. S.

    1986-01-01

    The use of a flow-dependent correlation function to improve the accuracy of an optimum interpolation (OI) scheme is examined. The development of the correlation function for the OI analysis scheme used for numerical weather prediction is described. The scheme uses a multivariate surface analysis over the oceans to model the pressure-wind error cross-correlation, and it has the ability to use an error correlation function that is flow- and geographically-dependent. A series of four-day data assimilation experiments, conducted from January 5-9, 1979, was used to investigate the effect of the different features of the OI scheme (error correlation) on forecast skill for the barotropic lows and highs. The skill of the OI was compared with that of a successive correction method (SCM) of analysis. It is observed that the largest difference in the correlation statistics occurred in barotropic and baroclinic lows and highs. The comparison reveals that the OI forecasts were more accurate than the SCM forecasts.

  10. The resolution of identity and chain of spheres approximations for the LPNO-CCSD singles Fock term

    NASA Astrophysics Data System (ADS)

    Izsák, Róbert; Hansen, Andreas; Neese, Frank

    2012-10-01

    In the present work, the RIJCOSX approximation, developed earlier for accelerating the SCF procedure, is applied to one of the limiting factors of LPNO-CCSD calculations: the evaluation of the singles Fock term. It turns out that the introduction of RIJCOSX in the evaluation of the closed shell LPNO-CCSD singles Fock term causes errors below the microhartree limit. If the proposed procedure is also combined with RIJCOSX in SCF, then a somewhat larger error occurs, but reaction energy errors will still remain negligible. The speedup for the singles Fock term only is about 9-10 fold for the largest basis set applied. For the case of Penicillin using the def2-QZVPP basis set, a single point energy evaluation takes 2 days 16 h on a single processor, leading to a total speedup of 2.6 as compared to a fully analytic calculation. Using eight processors, the same calculation takes only 14 h.

  11. Frontal midline theta and the error-related negativity: neurophysiological mechanisms of action regulation.

    PubMed

    Luu, Phan; Tucker, Don M; Makeig, Scott

    2004-08-01

    The error-related negativity (ERN) is an event-related potential (ERP) peak occurring between 50 and 100 ms after the commission of a speeded motor response that the subject immediately realizes to be in error. The ERN is believed to index brain processes that monitor action outcomes. Our previous analyses of ERP and EEG data suggested that the ERN is dominated by partial phase-locking of intermittent theta-band EEG activity. In this paper, this possibility is further evaluated. The possibility that the ERN is produced by phase-locking of theta-band EEG activity was examined by analyzing the single-trial EEG traces from a forced-choice speeded response paradigm before and after applying theta-band (4-7 Hz) filtering and by comparing the averaged and single-trial phase-locked (ERP) and non-phase-locked (other) EEG data. Electrical source analyses were used to estimate the brain sources involved in the generation of the ERN. Beginning just before incorrect button presses in a speeded choice response paradigm, midfrontal theta-band activity increased in amplitude and became partially and transiently phase-locked to the subject's motor response, accounting for 57% of ERN peak amplitude. The portion of the theta-EEG activity increase remaining after subtracting the response-locked ERP from each trial was larger and longer lasting after error responses than after correct responses, extending on average 400 ms beyond the ERN peak. Multiple equivalent-dipole source analysis suggested 3 possible equivalent dipole sources of the theta-bandpassed ERN, while the scalp distribution of non-phase-locked theta amplitude suggested the presence of additional frontal theta-EEG sources. These results appear consistent with a body of research that demonstrates a relationship between limbic theta activity and action regulation, including error monitoring and learning.

  12. Transient shifts in frontal and parietal circuits scale with enhanced visual feedback and changes in force variability and error

    PubMed Central

    Poon, Cynthia; Coombes, Stephen A.; Corcos, Daniel M.; Christou, Evangelos A.

    2013-01-01

    When subjects perform a learned motor task with increased visual gain, error and variability are reduced. Neuroimaging studies have identified a corresponding increase in activity in parietal cortex, premotor cortex, primary motor cortex, and extrastriate visual cortex. Much less is understood about the neural processes that underlie the immediate transition from low to high visual gain within a trial. This study used 128-channel electroencephalography to measure cortical activity during a visually guided precision grip task, in which the gain of the visual display was changed during the task. Force variability during the transition from low to high visual gain was characterized by an inverted U-shape, whereas force error decreased from low to high gain. Source analysis identified cortical activity in the same structures previously identified using functional magnetic resonance imaging. Source analysis also identified a time-varying shift in the strongest source activity. Superior regions of the motor and parietal cortex had stronger source activity from 300 to 600 ms after the transition, whereas inferior regions of the extrastriate visual cortex had stronger source activity from 500 to 700 ms after the transition. Force variability and electrical activity were linearly related, with a positive relation in the parietal cortex and a negative relation in the frontal cortex. Force error was nonlinearly related to electrical activity in the parietal cortex and frontal cortex by a quadratic function. This is the first evidence that force variability and force error are systematically related to a time-varying shift in cortical activity in frontal and parietal cortex in response to enhanced visual gain. PMID:23365186

  13. Infovigilance: reporting errors in official drug information sources.

    PubMed

    Fusier, Isabelle; Tollier, Corinne; Husson, Marie-Caroline

    2005-06-01

    The French drug database Thériaque (http://www.theriaque.org), developed by the Centre National Hospitalier d'Information sur le Médicament (CNHIM), is responsible for the dissemination of independent information about all drugs available in France. Each month the CNHIM pharmacists report problems due to inaccuracies in these sources to the French drug agency. In daily practice we devised the term "infovigilance": "Activity of error or inaccuracy notification in information sources which could be responsible for medication errors". The aim of this study was to evaluate the impact of CNHIM infovigilance on the contents of the Summary of Product Characteristics (SPCs). The study was a prospective study from 09/11/2001 to 31/12/2002. The problems related to the quality of information were classified into four types (inaccuracy/confusion, error/lack of information, discordance between SPC sections and discordance between generic SPCs). (1) Number of notifications and number of SPCs integrated into the database during the study period. (2) Percentage of notifications for each type: with or without potential patient impact, with or without later correction of the SPC, per section. 2.7% (85/3151) of SPCs integrated into the database were the subject of a problem notification. Notifications according to type of problem were inaccuracy/confusion (32%), error/lack of information (13%), discordance between SPC sections (27%) and discordance between generic SPCs (28%). 55% of problems were evaluated as 'likely to have an impact on the patient' and 45% as 'unlikely to have an impact on the patient'. Twenty-two of the problems reported to the French drug agency were corrected, and new updated SPCs were published with the corrections. Our efforts to improve the quality of drug information sources through a continuous "infovigilance" process need to be continued and extended to other information sources.

  14. Stable Carbon Fractionation In Size Segregated Aerosol Particles Produced By Controlled Biomass Burning

    NASA Astrophysics Data System (ADS)

    Masalaite, Agne; Garbaras, Andrius; Garbariene, Inga; Ceburnis, Darius; Martuzevicius, Dainius; Puida, Egidijus; Kvietkus, Kestutis; Remeikis, Vidmantas

    2014-05-01

    Biomass burning is the largest source of primary fine fraction carbonaceous particles and the second largest source of trace gases in the global atmosphere, with a strong effect not only on the regional scale but also in areas distant from the source. Many studies have assumed that no significant carbon isotope fractionation occurs between black carbon and the original vegetation during combustion. However, other studies suggested that stable carbon isotope ratios of char or BC may not reliably reflect carbon isotopic signatures of the source vegetation. Overall, the apparently conflicting results throughout the literature regarding the observed fractionation suggest that combustion conditions may be responsible for the observed effects. The purpose of the present study was to gather more quantitative information on carbonaceous aerosols produced in controlled biomass burning, thereby having a potential impact on interpreting ambient atmospheric observations. Seven different biomass fuel types were burned under controlled conditions to determine the effect of the biomass type on the emitted particulate matter mass and stable carbon isotope composition of bulk and size segregated particles. Size segregated aerosol particles were collected using the total suspended particle (TSP) sampler and a micro-orifice uniform deposit impactor (MOUDI). The results demonstrated that particle emissions were dominated by the submicron particles in all biomass types. However, significant differences in emissions of submicron particles and their dominant sizes were found between different biomass fuels. The largest negative fractionation was obtained for the wood pellet fuel type, while the largest positive isotopic fractionation was observed during the combustion of buckwheat shells. The carbon isotope composition of MOUDI samples compared very well with that of the TSP samples, indicating consistency of the results. The measurements of the stable carbon isotope ratio in size segregated aerosol particles suggested that combustion processes could strongly affect isotopic fractionation in aerosol particles of different sizes, thereby potentially affecting the interpretation of ambient atmospheric observations.

  15. Visuomotor adaptation needs a validation of prediction error by feedback error

    PubMed Central

    Gaveau, Valérie; Prablanc, Claude; Laurent, Damien; Rossetti, Yves; Priot, Anne-Emmanuelle

    2014-01-01

    The processes underlying short-term plasticity induced by visuomotor adaptation to a shifted visual field are still debated. Two main sources of error can induce motor adaptation: reaching feedback errors, which correspond to visually perceived discrepancies between hand and target positions, and errors between predicted and actual visual reafferences of the moving hand. These two sources of error are closely intertwined and difficult to disentangle, as both the target and the reaching limb are simultaneously visible. Accordingly, the goal of the present study was to clarify the relative contributions of these two types of errors during a pointing task under prism-displaced vision. In “terminal feedback error” condition, viewing of their hand by subjects was allowed only at movement end, simultaneously with viewing of the target. In “movement prediction error” condition, viewing of the hand was limited to movement duration, in the absence of any visual target, and error signals arose solely from comparisons between predicted and actual reafferences of the hand. In order to prevent intentional corrections of errors, a subthreshold, progressive stepwise increase in prism deviation was used, so that subjects remained unaware of the visual deviation applied in both conditions. An adaptive aftereffect was observed in the “terminal feedback error” condition only. As far as subjects remained unaware of the optical deviation and self-assigned pointing errors, prediction error alone was insufficient to induce adaptation. These results indicate a critical role of hand-to-target feedback error signals in visuomotor adaptation; consistent with recent neurophysiological findings, they suggest that a combination of feedback and prediction error signals is necessary for eliciting aftereffects. They also suggest that feedback error updates the prediction of reafferences when a visual perturbation is introduced gradually and cognitive factors are eliminated or strongly attenuated. PMID:25408644

  16. Imagery encoding and false recognition errors: Examining the role of imagery process and imagery content on source misattributions.

    PubMed

    Foley, Mary Ann; Foy, Jeffrey; Schlemmer, Emily; Belser-Ehrlich, Janna

    2010-11-01

    Imagery encoding effects on source-monitoring errors were explored using the Deese-Roediger-McDermott paradigm in two experiments. While viewing thematically related lists embedded in mixed picture/word presentations, participants were asked to generate images of objects or words (Experiment 1) or to simply name the items (Experiment 2). An encoding task intended to induce spontaneous images served as a control for the explicit imagery instruction conditions (Experiment 1). On the picture/word source-monitoring tests, participants were much more likely to report "seeing" a picture of an item presented as a word than the converse particularly when images were induced spontaneously. However, this picture misattribution error was reversed after generating images of words (Experiment 1) and was eliminated after simply labelling the items (Experiment 2). Thus source misattributions were sensitive to the processes giving rise to imagery experiences (spontaneous vs deliberate), the kinds of images generated (object vs word images), and the ways in which materials were presented (as pictures vs words).

  17. Over-Distribution in Source Memory

    PubMed Central

    Brainerd, C. J.; Reyna, V. F.; Holliday, R. E.; Nakamura, K.

    2012-01-01

    Semantic false memories are confounded with a second type of error, over-distribution, in which items are attributed to contradictory episodic states. Over-distribution errors have proved to be more common than false memories when the two are disentangled. We investigated whether over-distribution is prevalent in another classic false memory paradigm: source monitoring. It is. Conventional false memory responses (source misattributions) were predominantly over-distribution errors, but unlike semantic false memory, over-distribution also accounted for more than half of true memory responses (correct source attributions). Experimental control of over-distribution was achieved via a series of manipulations that affected either recollection of contextual details or item memory (concreteness, frequency, list-order, number of presentation contexts, and individual differences in verbatim memory). A theoretical model (conjoint process dissociation) was used to analyze the data; it predicts that (a) over-distribution is directly proportional to item memory but inversely proportional to recollection and (b) item memory is not a necessary precondition for recollection of contextual details. The results were consistent with both predictions. PMID:21942494

  18. A model of memory impairment in schizophrenia: cognitive and clinical factors associated with memory efficiency and memory errors.

    PubMed

    Brébion, Gildas; Bressan, Rodrigo A; Ohlsen, Ruth I; David, Anthony S

    2013-12-01

    Memory impairments in patients with schizophrenia have been associated with various cognitive and clinical factors. Hallucinations have been more specifically associated with errors stemming from source monitoring failure. We conducted a broad investigation of verbal memory and visual memory as well as source memory functioning in a sample of patients with schizophrenia. Various memory measures were tallied, and we studied their associations with processing speed, working memory span, and positive, negative, and depressive symptoms. Superficial and deep memory processes were differentially associated with processing speed, working memory span, avolition, depression, and attention disorders. Auditory/verbal and visual hallucinations were differentially associated with specific types of source memory error. We integrated all the results into a revised version of a previously published model of memory functioning in schizophrenia. The model describes the factors that affect memory efficiency, as well as the cognitive underpinnings of hallucinations within the source monitoring framework. © 2013.

  19. Analytical investigation of adaptive control of radiated inlet noise from turbofan engines

    NASA Technical Reports Server (NTRS)

    Risi, John D.; Burdisso, Ricardo A.

    1994-01-01

    An analytical model has been developed to predict the resulting far field radiation from a turbofan engine inlet. A feedforward control algorithm was simulated to predict the controlled far field radiation from the destructive combination of fan noise and secondary control sources. Numerical results were developed for two system configurations, with the resulting controlled far field radiation patterns showing varying degrees of attenuation and spillover. With one axial station of twelve control sources and error sensors with equal relative angular positions, nearly global attenuation is achieved. Shifting the angular position of one error sensor resulted in an increase of spillover to the extreme sidelines. The complex control inputs for each configuration were investigated to identify the structure of the wave pattern created by the control sources, giving an indication of performance of the system configuration. It is deduced that the locations of the error sensors and the control source configuration are equally critical to the operation of the active noise control system.

  20. Geometric error characterization and error budgets. [thematic mapper

    NASA Technical Reports Server (NTRS)

    Beyer, E.

    1982-01-01

    Procedures used in characterizing geometric error sources for a spaceborne imaging system are described using the LANDSAT D thematic mapper ground segment processing as the prototype. Software was tested through simulation and is undergoing tests with the operational hardware as part of the prelaunch system evaluation. Geometric accuracy specifications, geometric correction, and control point processing are discussed. Cross track and along track errors are tabulated for the thematic mapper, the spacecraft, and ground processing to show the temporal registration error budget in pixels (42.5 microrad) at the 90% level.

  1. Source structure errors in radio-interferometric clock synchronization for ten measured distributions

    NASA Technical Reports Server (NTRS)

    Thomas, J. B.

    1981-01-01

    The effects of source structure on radio interferometry measurements were investigated. The brightness distribution measurements for ten extragalactic sources were analyzed. Significant results are reported.

  2. Number-counts slope estimation in the presence of Poisson noise

    NASA Technical Reports Server (NTRS)

    Schmitt, Juergen H. M. M.; Maccacaro, Tommaso

    1986-01-01

    The determination of the slope of a power-law number-flux relationship is considered for the case of photon-limited sampling. This case is important for high-sensitivity X-ray surveys with imaging telescopes, where the error in an individual source measurement depends on integrated flux and is Poisson, rather than Gaussian, distributed. A bias-free method of slope estimation is developed that takes into account the exact error distribution, the influence of background noise, and the effects of varying limiting sensitivities. It is shown that the resulting bias corrections are quite insensitive to the bias correction procedures applied, as long as only sources with signal-to-noise ratio five or greater are considered. However, if sources with signal-to-noise ratio five or less are included, the derived bias corrections depend sensitively on the shape of the error distribution.
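    To make the baseline problem concrete, the sketch below shows the classic maximum-likelihood slope estimator for a power-law flux distribution; it deliberately ignores Poisson measurement noise, background, and varying limiting sensitivities, which are precisely the complications the paper addresses. All numbers are synthetic.

```python
import numpy as np

# Classic Pareto / power-law MLE: for p(S) ~ S^-gamma with S >= s_min,
# gamma_hat = 1 + n / sum(ln(S_i / s_min)). This is the noise-free baseline only.
def powerlaw_slope_mle(fluxes, s_min):
    fluxes = np.asarray(fluxes, dtype=float)
    fluxes = fluxes[fluxes >= s_min]
    return 1.0 + len(fluxes) / np.sum(np.log(fluxes / s_min))

rng = np.random.default_rng(1)
true_gamma, s_min = 2.5, 1.0
# draw synthetic fluxes from p(S) ~ S^-gamma via inverse-transform sampling
u = rng.uniform(size=5000)
fluxes = s_min * (1.0 - u) ** (-1.0 / (true_gamma - 1.0))
print(powerlaw_slope_mle(fluxes, s_min))  # close to 2.5
```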

  3. Radiant Temperature Nulling Radiometer

    NASA Technical Reports Server (NTRS)

    Ryan, Robert (Inventor)

    2003-01-01

    A self-calibrating nulling radiometer for non-contact temperature measurement of an object, such as a body of water, employs a black body source as a temperature reference, an optomechanical mechanism, e.g., a chopper, to switch back and forth between measuring the temperature of the black body source and that of a test source, and an infrared detection technique. The radiometer functions by measuring radiance of both the test and the reference black body sources; adjusting the temperature of the reference black body so that its radiance is equivalent to the test source; and measuring the temperature of the reference black body at this point using a precision contact-type temperature sensor, to determine the radiative temperature of the test source. The radiation from both sources is detected by an infrared detector that converts the detected radiation to an electrical signal that is fed with a chopper reference signal to an error signal generator, such as a synchronous detector, that creates a precision rectified signal that is approximately proportional to the difference between the temperature of the reference black body and that of the test infrared source. This error signal is then used in a feedback loop to adjust the reference black body temperature until it equals that of the test source, at which point the error signal is nulled to zero. The chopper mechanism operates at one or more hertz, allowing minimization of 1/f noise. It also provides pure chopping between the black body and the test source and allows continuous measurements.
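    A minimal sketch of the nulling feedback described above, with the radiometry reduced to a Stefan-Boltzmann total radiance and an assumed loop gain; the chopper and synchronous detection are not modeled, and all numbers are illustrative.

```python
# Nulling feedback sketch: servo the reference black body temperature until the
# difference (error) signal between test and reference radiance is nulled, then
# the contact-sensor reading gives the radiative temperature of the test source.
# Radiance is modeled with the Stefan-Boltzmann law as a simplification.
SIGMA = 5.670e-8  # W m^-2 K^-4

def radiance(T):
    return SIGMA * T**4

def nulling_loop(test_temperature, T_ref=300.0, gain=0.1, tol=1e-6, max_iter=10000):
    for _ in range(max_iter):
        error = radiance(test_temperature) - radiance(T_ref)  # difference signal
        if abs(error) < tol:
            break
        T_ref += gain * error  # feedback adjustment (gain is an assumed loop gain)
    return T_ref               # contact-sensor reading at null

print(nulling_loop(285.7))  # converges to ~285.7 K
```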

  4. Levels of CDDs, CDFs, PCBs and Hg in Rural Soils of US (Project Overview)

    EPA Science Inventory

    No systematic survey of dioxins in soil has been conducted in the US. Soils represent the largest reservoir source of dioxins. As point source emissions are reduced emissions from soils become increasingly important. Understanding the distribution of dioxin levels in soils is ...

  5. Technical Note: Millimeter precision in ultrasound based patient positioning: Experimental quantification of inherent technical limitations

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ballhausen, Hendrik, E-mail: hendrik.ballhausen@med.uni-muenchen.de; Hieber, Sheila; Li, Minglun

    2014-08-15

    Purpose: To identify the relevant technical sources of error of a system based on three-dimensional ultrasound (3D US) for patient positioning in external beam radiotherapy. To quantify these sources of error in a controlled laboratory setting. To estimate the resulting end-to-end geometric precision of the intramodality protocol. Methods: Two identical free-hand 3D US systems at both the planning-CT and the treatment room were calibrated to the laboratory frame of reference. Every step of the calibration chain was repeated multiple times to estimate its contribution to overall systematic and random error. Optimal margins were computed given the identified and quantified systematic and random errors. Results: In descending order of magnitude, the identified and quantified sources of error were: alignment of calibration phantom to laser marks 0.78 mm, alignment of lasers in treatment vs planning room 0.51 mm, calibration and tracking of 3D US probe 0.49 mm, alignment of stereoscopic infrared camera to calibration phantom 0.03 mm. Under ideal laboratory conditions, these errors are expected to limit ultrasound-based positioning to an accuracy of 1.05 mm radially. Conclusions: The investigated 3D ultrasound system achieves an intramodal accuracy of about 1 mm radially in a controlled laboratory setting. The identified systematic and random errors require an optimal clinical tumor volume to planning target volume margin of about 3 mm. These inherent technical limitations do not prevent clinical use, including hypofractionation or stereotactic body radiation therapy.
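    As a quick check, the four component errors quoted above combine in quadrature (assuming they are independent, which the stated 1.05 mm figure suggests) to the reported radial accuracy.

```python
import math

# Quadrature sum of the component errors quoted in the abstract; assuming
# independence, this reproduces the stated ~1.05 mm radial accuracy.
components_mm = {
    "phantom-to-laser alignment": 0.78,
    "laser alignment, treatment vs planning room": 0.51,
    "3D US probe calibration and tracking": 0.49,
    "camera-to-phantom alignment": 0.03,
}
radial = math.sqrt(sum(e**2 for e in components_mm.values()))
print(f"{radial:.2f} mm")  # ~1.05 mm
```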

  6. Dissociation of item and source memory in rhesus monkeys.

    PubMed

    Basile, Benjamin M; Hampton, Robert R

    2017-09-01

    Source memory, or memory for the context in which a memory was formed, is a defining characteristic of human episodic memory and source memory errors are a debilitating symptom of memory dysfunction. Evidence for source memory in nonhuman primates is sparse despite considerable evidence for other types of sophisticated memory and the practical need for good models of episodic memory in nonhuman primates. A previous study showed that rhesus monkeys confused the identity of a monkey they saw with a monkey they heard, but only after an extended memory delay. This suggests that they initially remembered the source - visual or auditory - of the information but forgot the source as time passed. Here, we present a monkey model of source memory that is based on this previous study. In each trial, monkeys studied two images, one that they simply viewed and touched and the other that they classified as a bird, fish, flower, or person. In a subsequent memory test, they were required to select the image from one source but avoid the other. With training, monkeys learned to suppress responding to images from the to-be-avoided source. After longer memory intervals, monkeys continued to show reliable item memory, discriminating studied images from distractors, but made many source memory errors. Monkeys discriminated source based on study method, not study order, providing preliminary evidence that our manipulation of retention interval caused errors due to source forgetting instead of source confusion. Finally, some monkeys learned to select remembered images from either source on cue, showing that they did indeed remember both items and both sources. This paradigm potentially provides a new model to study a critical aspect of episodic memory in nonhuman primates. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. A two-factor error model for quantitative steganalysis

    NASA Astrophysics Data System (ADS)

    Böhme, Rainer; Ker, Andrew D.

    2006-02-01

    Quantitative steganalysis refers to the exercise not only of detecting the presence of hidden stego messages in carrier objects, but also of estimating the secret message length. This problem is well studied, with many detectors proposed but only a sparse analysis of errors in the estimators. A deep understanding of the error model, however, is a fundamental requirement for the assessment and comparison of different detection methods. This paper presents a rationale for a two-factor model for sources of error in quantitative steganalysis, and shows evidence from a dedicated large-scale nested experimental set-up with a total of more than 200 million attacks. Apart from general findings about the distribution functions found in both classes of errors, their respective weight is determined, and implications for statistical hypothesis tests in benchmarking scenarios or regression analyses are demonstrated. The results are based on a rigorous comparison of five different detection methods under many different external conditions, such as size of the carrier, previous JPEG compression, and colour channel selection. We include analyses demonstrating the effects of local variance and cover saturation on the different sources of error, as well as presenting the case for a relative bias model for between-image error.

  8. Intrinsic errors in transporting a single-spin qubit through a double quantum dot

    NASA Astrophysics Data System (ADS)

    Li, Xiao; Barnes, Edwin; Kestner, J. P.; Das Sarma, S.

    2017-07-01

    Coherent spatial transport or shuttling of a single electron spin through semiconductor nanostructures is an important ingredient in many spintronic and quantum computing applications. In this work we analyze the possible errors in solid-state quantum computation due to leakage in transporting a single-spin qubit through a semiconductor double quantum dot. In particular, we consider three possible sources of leakage errors associated with such transport: finite ramping times, spin-dependent tunneling rates between quantum dots induced by finite spin-orbit couplings, and the presence of multiple valley states. In each case we present quantitative estimates of the leakage errors, and discuss how they can be minimized. The emphasis of this work is on how to deal with the errors intrinsic to the ideal semiconductor structure, such as leakage due to spin-orbit couplings, rather than on errors due to defects or noise sources. In particular, we show that in order to minimize leakage errors induced by spin-dependent tunnelings, it is necessary to apply pulses to perform certain carefully designed spin rotations. We further develop a formalism that allows one to systematically derive constraints on the pulse shapes and present a few examples to highlight the advantage of such an approach.

  9. Non-methane volatile organic compounds in Africa: A view from space

    NASA Astrophysics Data System (ADS)

    Marais, Eloise Ann

    Isoprene emissions affect human health, air quality, and the oxidative capacity of the atmosphere. Globally, anthropogenic non-methane volatile organic compound (NMVOC) emissions are lower than those of isoprene, but local hotspots are hazardous to human health and air quality. In Africa the tropics are a large source of isoprene, while Nigeria appears as a large contributor to regional anthropogenic NMVOC emissions. I make extensive use of space-based formaldehyde (HCHO) observations from the Ozone Monitoring Instrument (OMI) and the chemical transport model (CTM) GEOS-Chem to estimate and examine seasonality of isoprene emissions across Africa, and identify sources and air quality consequences of anthropogenic NMVOC emissions in Nigeria. To estimate isoprene emissions I first developed a filtering scheme to remove (1) contamination from biomass burning and anthropogenic influences; and (2) displacement of HCHO from the isoprene emission source diagnosed with the GEOS-Chem CTM. Conversion to isoprene emissions uses NOx-dependent GEOS-Chem HCHO yields, obtained as the local sensitivity S of the HCHO column ΩHCHO to a perturbation ΔEISOP in isoprene emissions (S = ΔΩHCHO/ΔEISOP). The error in OMI-derived isoprene emissions is 40% at low levels of NOx and 40-90% under high-NOx conditions and is reduced by spatial and temporal averaging to the extent that errors are random. Weak isoprene emission seasonality in equatorial forests is driven predominantly by temperature, while large seasonality in northern and southern savannas is driven by temperature and leaf area index. The largest contributions of African isoprene emissions to surface ozone and particulate matter, determined with GEOS-Chem (8 ppbv and 1.5 μg m-3, respectively), occur over West Africa. The OMI HCHO data feature a large enhancement over Nigeria that is due to anthropogenic NMVOC emissions. With the OMI HCHO data, coincident satellite observations of atmospheric composition, aircraft measurements, and GEOS-Chem, I estimate Nigerian NMVOC emissions (5.7 Tg C a-1) that are higher per capita than those of China. Should Nigeria develop its electricity sector to sustain economic growth with local natural gas and coal reserves, NOx emissions will exacerbate wintertime (December-February) surface ozone pollution that exceeds 90 ppbv due to poor ventilation and the Harmattan inversion layer.
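    The sensitivity relation quoted above implies a simple linear inversion: the observed HCHO column enhancement above background, divided by the model-derived sensitivity S, gives the isoprene emission. A minimal sketch with placeholder numbers (none of the values or the function name come from the study):

```python
# Linear inversion implied by S = dOmega_HCHO / dE_ISOP: given a model-derived
# sensitivity S and an observed HCHO column enhancement above background, the
# isoprene emission follows by division. All numbers below are placeholders.
def isoprene_emission(omega_hcho, omega_background, sensitivity):
    """Return E_ISOP given HCHO columns (molec/cm^2) and sensitivity S (s)."""
    return (omega_hcho - omega_background) / sensitivity

omega_obs = 1.2e16  # observed HCHO column, molec/cm^2 (placeholder)
omega_bg = 4.0e15   # background column from non-isoprene sources (placeholder)
S = 2.0e3           # model-derived column-per-emission sensitivity, s (placeholder)
print(isoprene_emission(omega_obs, omega_bg, S))  # emission in molec/cm^2/s
```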

  10. [Determination of the error of aerosol extinction coefficient measured by DOAS].

    PubMed

    Si, Fu-qi; Liu, Jian-guo; Xie, Pin-hua; Zhang, Yu-jun; Wang, Mian; Liu, Wen-qing; Hiroaki, Kuze; Liu, Cheng; Nobuo, Takeuchi

    2006-10-01

    The method of determining the error of the aerosol extinction coefficient measured by differential optical absorption spectroscopy (DOAS) is described. Factors that could introduce errors into the result, such as variation of the light source, integration time, atmospheric turbulence, calibration of system parameters, displacement of the system, and backscattering by particles, are analyzed. The error of the aerosol extinction coefficient, 0.03 km(-1), is determined by theoretical analysis and practical measurement.

  11. Multiple window spatial registration error of a gamma camera: 133Ba point source as a replacement of the NEMA procedure.

    PubMed

    Bergmann, Helmar; Minear, Gregory; Raith, Maria; Schaffarich, Peter M

    2008-12-09

    The accuracy of multiple window spatial resolution characterises the performance of a gamma camera for dual isotope imaging. In the present study we investigate an alternative method to the standard NEMA procedure for measuring this performance parameter. A long-lived 133Ba point source with gamma energies close to 67Ga and a single bore lead collimator were used to measure the multiple window spatial registration error. Calculation of the positions of the point source in the images used the NEMA algorithm. The results were validated against the values obtained by the standard NEMA procedure which uses a liquid 67Ga source with collimation. Of the source-collimator configurations under investigation an optimum collimator geometry, consisting of a 5 mm thick lead disk with a diameter of 46 mm and a 5 mm central bore, was selected. The multiple window spatial registration errors obtained by the 133Ba method showed excellent reproducibility (standard deviation < 0.07 mm). The values were compared with the results from the NEMA procedure obtained at the same locations and showed small differences with a correlation coefficient of 0.51 (p < 0.05). In addition, the 133Ba point source method proved to be much easier to use. A Bland-Altman analysis showed that the 133Ba and the 67Ga Method can be used interchangeably. The 133Ba point source method measures the multiple window spatial registration error with essentially the same accuracy as the NEMA-recommended procedure, but is easier and safer to use and has the potential to replace the current standard procedure.
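    A minimal sketch of the Bland-Altman agreement analysis mentioned above: the mean bias and 95% limits of agreement between paired measurements from the two methods. The paired values are placeholders, not the study's data.

```python
import numpy as np

# Bland-Altman agreement analysis: mean bias and 95% limits of agreement
# between paired measurements from two methods. Input values are placeholders.
def bland_altman(a, b):
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    loa = 1.96 * diff.std(ddof=1)
    return bias, bias - loa, bias + loa

ba133 = [0.42, 0.55, 0.31, 0.60, 0.48]  # mm, 133Ba method (placeholder values)
ga67 = [0.40, 0.50, 0.35, 0.62, 0.45]   # mm, 67Ga NEMA method (placeholder values)
bias, lo, hi = bland_altman(ba133, ga67)
print(f"bias {bias:+.3f} mm, 95% limits of agreement [{lo:+.3f}, {hi:+.3f}] mm")
```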

  12. A line-source method for aligning on-board and other pinhole SPECT systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-12-15

    Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.
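    A minimal sketch of the line-detection step described above: recovering the angle (α) and offset (ρ) of a single bright line-source projection in a detector image. The paper uses the Radon transform for this; here a simple PCA-style fit to the bright-pixel coordinates is used instead, which yields the same two parameters for a clean, isolated line. The image, threshold, and function name are illustrative assumptions.

```python
import numpy as np

# Recover the angle (alpha) and signed offset (rho) of one bright line in a
# 2D image via a principal-direction fit to the bright-pixel coordinates.
def line_angle_offset(image, threshold=0.5):
    ys, xs = np.nonzero(image > threshold * image.max())
    x0, y0 = xs.mean(), ys.mean()
    cov = np.cov(np.vstack([xs - x0, ys - y0]))    # 2x2 covariance of pixel coordinates
    eigvals, eigvecs = np.linalg.eigh(cov)
    dx, dy = eigvecs[:, np.argmax(eigvals)]        # principal direction along the line
    if dx < 0:                                     # resolve the eigenvector sign ambiguity
        dx, dy = -dx, -dy
    alpha = np.arctan2(dy, dx)                     # line angle, radians
    rho = x0 * np.sin(alpha) - y0 * np.cos(alpha)  # signed offset of the line from the origin
    return np.degrees(alpha), rho

# Example: a 45-degree line through the image centre is recovered correctly.
img = np.zeros((64, 64))
for i in range(10, 54):
    img[i, i] = 1.0
print(line_angle_offset(img))  # approximately (45.0, 0.0)
```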

  13. A line-source method for aligning on-board and other pinhole SPECT systems

    PubMed Central

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-01-01

    Purpose: In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system—to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)—is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. Methods: An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. Results: In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Conclusions: Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC. PMID:24320537

  14. A line-source method for aligning on-board and other pinhole SPECT systems.

    PubMed

    Yan, Susu; Bowsher, James; Yin, Fang-Fang

    2013-12-01

    In order to achieve functional and molecular imaging as patients are in position for radiation therapy, a robotic multipinhole SPECT system is being developed. Alignment of the SPECT system-to the linear accelerator (LINAC) coordinate frame and to the coordinate frames of other on-board imaging systems such as cone-beam CT (CBCT)-is essential for target localization and image reconstruction. An alignment method that utilizes line sources and one pinhole projection is proposed and investigated to achieve this goal. Potentially, this method could also be applied to the calibration of the other pinhole SPECT systems. An alignment model consisting of multiple alignment parameters was developed which maps line sources in three-dimensional (3D) space to their two-dimensional (2D) projections on the SPECT detector. In a computer-simulation study, 3D coordinates of line-sources were defined in a reference room coordinate frame, such as the LINAC coordinate frame. Corresponding 2D line-source projections were generated by computer simulation that included SPECT blurring and noise effects. The Radon transform was utilized to detect angles (α) and offsets (ρ) of the line-source projections. Alignment parameters were then estimated by a nonlinear least squares method, based on the α and ρ values and the alignment model. Alignment performance was evaluated as a function of number of line sources, Radon transform accuracy, finite line-source width, intrinsic camera resolution, Poisson noise, and acquisition geometry. Experimental evaluations were performed using a physical line-source phantom and a pinhole-collimated gamma camera attached to a robot. In computer-simulation studies, when there was no error in determining angles (α) and offsets (ρ) of the measured projections, six alignment parameters (three translational and three rotational) were estimated perfectly using three line sources. When angles (α) and offsets (ρ) were provided by the Radon transform, estimation accuracy was reduced. The estimation error was associated with rounding errors of Radon transform, finite line-source width, Poisson noise, number of line sources, intrinsic camera resolution, and detector acquisition geometry. Statistically, the estimation accuracy was significantly improved by using four line sources rather than three and by thinner line-source projections (obtained by better intrinsic detector resolution). With five line sources, median errors were 0.2 mm for the detector translations, 0.7 mm for the detector radius of rotation, and less than 0.5° for detector rotation, tilt, and twist. In experimental evaluations, average errors relative to a different, independent registration technique were about 1.8 mm for detector translations, 1.1 mm for the detector radius of rotation (ROR), 0.5° and 0.4° for detector rotation and tilt, respectively, and 1.2° for detector twist. Alignment parameters can be estimated using one pinhole projection of line sources. Alignment errors are largely associated with limited accuracy of the Radon transform in determining angles (α) and offsets (ρ) of the line-source projections. This alignment method may be important for multipinhole SPECT, where relative pinhole alignment may vary during rotation. For pinhole and multipinhole SPECT imaging on-board radiation therapy machines, the method could provide alignment of SPECT coordinates with those of CBCT and the LINAC.

  15. Accounting for uncertain fault geometry in earthquake source inversions - I: theory and simplified application

    NASA Astrophysics Data System (ADS)

    Ragon, Théa; Sladen, Anthony; Simons, Mark

    2018-05-01

    The ill-posed nature of earthquake source estimation derives from several factors, including the quality and quantity of available observations and the fidelity of our forward theory. Observational errors are usually accounted for in the inversion process. Epistemic errors, which stem from our simplified description of the forward problem, are rarely dealt with despite their potential to bias the estimate of a source model. In this study, we explore the impact of uncertainties related to the choice of a fault geometry in source inversion problems. The geometry of a fault structure is generally reduced to a set of parameters, such as position, strike and dip, for one or a few planar fault segments. While some of these parameters can be solved for, more often they are fixed to an uncertain value. We propose a practical framework to address this limitation by following a previously implemented method exploring the impact of uncertainties in the elastic properties of our models. We develop a sensitivity analysis to small perturbations of fault dip and position. The uncertainties in fault geometry are included in the inverse problem through the formulation of a misfit covariance matrix that combines both prediction and observation uncertainties. We validate this approach with the simplified case of a fault that extends infinitely along strike, using both Bayesian and optimization formulations of a static inversion. If epistemic errors are ignored, predictions are overconfident in the data and source parameters are not reliably estimated. In contrast, inclusion of uncertainties in fault geometry allows us to infer a robust posterior source model. Epistemic uncertainties can be many orders of magnitude larger than observational errors for great earthquakes (Mw > 8). Not accounting for uncertainties in fault geometry may partly explain observed shallow slip deficits for continental earthquakes. Similarly, ignoring the impact of epistemic errors can also bias estimates of near-surface slip and predictions of tsunamis induced by megathrust earthquakes.
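
    The covariance formulation referred to above can be summarized schematically as follows; the notation is ours (following the general approach of propagating a parameter uncertainty through the forward model), not copied from the paper.

    ```latex
    % C_d : observational (data) covariance
    % C_p : prediction covariance induced by an uncertain geometry parameter psi
    %       (e.g. fault dip or position), with prior standard deviation sigma_psi
    % G(psi) : forward (Green's function) operator, m_prior : prior slip model
    \[
      C_\chi = C_d + C_p ,
      \qquad
      C_p \approx \bigl(K_\psi\, m_{\mathrm{prior}}\bigr)\, \sigma_\psi^{2}\,
                  \bigl(K_\psi\, m_{\mathrm{prior}}\bigr)^{\mathsf{T}} ,
      \qquad
      K_\psi = \frac{\partial G(\psi)}{\partial \psi} .
    \]
    ```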

  16. Gauging Through the Crowd: A Crowd-Sourcing Approach to Urban Rainfall Measurement and Storm Water Modeling Implications

    NASA Astrophysics Data System (ADS)

    Yang, Pan; Ng, Tze Ling

    2017-11-01

    Accurate rainfall measurement at high spatial and temporal resolutions is critical for the modeling and management of urban storm water. In this study, we conduct computer simulation experiments to test the potential of a crowd-sourcing approach, where smartphones, surveillance cameras, and other devices act as precipitation sensors, as an alternative to the traditional approach of using rain gauges to monitor urban rainfall. The crowd-sourcing approach is promising as it has the potential to provide high-density measurements, albeit with relatively large individual errors. We explore the potential of this approach for urban rainfall monitoring and the subsequent implications for storm water modeling through a series of simulation experiments involving synthetically generated crowd-sourced rainfall data and a storm water model. The results show that even under conservative assumptions, crowd-sourced rainfall data lead to more accurate modeling of storm water flows as compared to rain gauge data. We observe the relative superiority of the crowd-sourcing approach to vary depending on crowd participation rate, measurement accuracy, drainage area, choice of performance statistic, and crowd-sourced observation type. A possible reason for our findings is the differences between the error structures of crowd-sourced and rain gauge rainfall fields resulting from the differences between the errors and densities of the raw measurement data underlying the two field types.
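
    The core statistical argument, that many individually poor observations can outperform a few accurate ones once their errors average out, can be reproduced with a toy Monte Carlo experiment. The numbers below (error levels, number of participants) are our own assumptions, not values from the study.

    ```python
    # Toy comparison of one accurate-but-sparse gauge vs. many noisy crowd observations.
    import numpy as np

    rng = np.random.default_rng(0)
    true_rain = 12.0                    # mm, areal-average storm rainfall
    n_trials = 10_000

    # Single gauge: precise instrument, but one point must represent the whole area,
    # so spatial sampling error dominates (assumed 30% here).
    gauge = true_rain * (1 + rng.normal(0.0, 0.30, n_trials))

    # 200 crowd-sourced observations: each individually poor (assumed 50% error),
    # but the error of their mean shrinks roughly as 1/sqrt(N).
    crowd = true_rain * (1 + rng.normal(0.0, 0.50, (n_trials, 200))).mean(axis=1)

    print("gauge RMSE:", np.sqrt(np.mean((gauge - true_rain) ** 2)))
    print("crowd RMSE:", np.sqrt(np.mean((crowd - true_rain) ** 2)))
    ```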

  17. Ozone Trend Detectability

    NASA Technical Reports Server (NTRS)

    Campbell, J. W. (Editor)

    1981-01-01

    The detection of anthropogenic disturbances in the Earth's ozone layer was studied. Two topics were addressed: (1) the level at which a trend in total ozone can be detected by existing data sources; and (2) empirical evidence bearing on the prediction of depletion in total ozone. Error sources are identified. The predictability of climatological series, whether empirical models can be trusted, and how errors in the Dobson total ozone data impact trend detectability are discussed.

  18. Runtime Detection of C-Style Errors in UPC Code

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pirkelbauer, P; Liao, C; Panas, T

    2011-09-29

    Unified Parallel C (UPC) extends the C programming language (ISO C 99) with explicit parallel programming support for the partitioned global address space (PGAS), which provides a global memory space with localized partitions to each thread. Like its ancestor C, UPC is a low-level language that emphasizes code efficiency over safety. The absence of dynamic (and static) safety checks allows programmer oversights and software flaws that can be hard to spot. In this paper, we present an extension of a dynamic analysis tool, ROSE-Code Instrumentation and Runtime Monitor (ROSE-CIRM), for UPC to help programmers find C-style errors involving the global address space. Built on top of the ROSE source-to-source compiler infrastructure, the tool instruments source files with code that monitors operations and keeps track of changes to the system state. The resulting code is linked to a runtime monitor that observes the program execution and finds software defects. We describe the extensions to ROSE-CIRM that were necessary to support UPC. We discuss complications that arise from parallel code and our solutions. We test ROSE-CIRM against a runtime error detection test suite, and present performance results obtained from running error-free codes. ROSE-CIRM is released as part of the ROSE compiler under a BSD-style open source license.

  19. In Search of the Largest Possible Tsunami: An Example Following the 2011 Japan Tsunami

    NASA Astrophysics Data System (ADS)

    Geist, E. L.; Parsons, T.

    2012-12-01

    Many tsunami hazard assessments focus on estimating the largest possible tsunami: i.e., the worst-case scenario. This is typically performed by examining historic and prehistoric tsunami data or by estimating the largest source that can produce a tsunami. We demonstrate that worst-case assessments derived from tsunami and tsunami-source catalogs are greatly affected by sampling bias. Both tsunami and tsunami sources are well represented by a Pareto distribution. It is intuitive to assume that there is some limiting size (i.e., runup or seismic moment) for which a Pareto distribution is truncated or tapered. Likelihood methods are used to determine whether a limiting size can be determined from existing catalogs. Results from synthetic catalogs indicate that several observations near the limiting size are needed for accurate parameter estimation. Accordingly, the catalog length needed to empirically determine the limiting size is dependent on the difference between the limiting size and the observation threshold, with larger catalog lengths needed for larger limiting-threshold size differences. Most, if not all, tsunami catalogs and regional tsunami source catalogs are of insufficient length to determine the upper bound on tsunami runup. As an example, estimates of the empirical tsunami runup distribution are obtained from the Miyako tide gauge station in Japan, which recorded the 2011 Tohoku-oki tsunami as the largest tsunami among 51 other events. Parameter estimation using a tapered Pareto distribution is made both with and without the Tohoku-oki event. The catalog without the 2011 event appears to have a low limiting tsunami runup. However, this is an artifact of undersampling. Including the 2011 event, the catalog conforms more to a pure Pareto distribution with no confidence in estimating a limiting runup. Estimating the size distribution of regional tsunami sources is subject to the same sampling bias. Physical attenuation mechanisms such as wave breaking likely limit the maximum tsunami runup at a particular site. However, historic and prehistoric data alone cannot determine the upper bound on tsunami runup. Because of problems endemic to sampling Pareto distributions of tsunamis and their sources, we recommend that tsunami hazard assessment be based on a specific design probability of exceedance following a pure Pareto distribution, rather than attempting to determine the worst-case scenario.
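
    For readers who want to reproduce the flavor of the likelihood analysis, the sketch below fits a tapered (Kagan-type) Pareto distribution to a small synthetic catalog by maximum likelihood. The parametrization and the synthetic data are our own illustration, not the authors' code or the Miyako record.

    ```python
    # Maximum-likelihood fit of a tapered Pareto distribution to sizes above a threshold x_t.
    import numpy as np
    from scipy.optimize import minimize

    def neg_log_like(params, x, x_t):
        """Tapered Pareto: survival S(x) = (x_t/x)**beta * exp((x_t - x)/x_c), x >= x_t."""
        beta, x_c = params
        if beta <= 0 or x_c <= 0:
            return np.inf
        log_pdf = np.log(beta / x + 1.0 / x_c) + beta * np.log(x_t / x) + (x_t - x) / x_c
        return -np.sum(log_pdf)

    rng = np.random.default_rng(1)
    x_t = 0.5
    sample = x_t * (1.0 - rng.random(51)) ** (-1.0 / 1.2)    # pure-Pareto draws (beta = 1.2)

    fit = minimize(neg_log_like, x0=[1.0, 10.0], args=(sample, x_t), method="Nelder-Mead")
    print("beta, corner size x_c:", fit.x)  # with ~50 events the corner x_c is poorly constrained
    ```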

  20. Improving emissions inventories in North America through systematic analysis of model performance during ICARTT and MILAGRO

    NASA Astrophysics Data System (ADS)

    Mena, Marcelo Andres

    During 2004 and 2006 the University of Iowa provided air quality forecast support for flight planning of the ICARTT and MILAGRO field campaigns. A method for improving model performance in comparison to observations is shown. The method allows identifying sources of model error from boundary conditions and emissions inventories. Simultaneous analysis of horizontal interpolation of model error and error covariance showed that error in ozone modeling is highly correlated with the error of its precursors, and that there is geographical correlation as well. During ICARTT, ozone modeling error was reduced by updating the National Emissions Inventory from 1999 to 2001, and further by updating large point source emissions with continuous monitoring data. Additional improvements were achieved by reducing area emissions of NOx by 60% for states in the southeast United States. Ozone error was highly correlated with NOy error during this campaign. Also, ozone production in the United States was most sensitive to NOx emissions. During MILAGRO, model performance in terms of correlation coefficients was higher, but ozone modeling error was large due to overestimation of NOx and VOC emissions in Mexico City during forecasting. Large model improvements were obtained by decreasing NOx emissions in Mexico City by 50% and VOC emissions by 60%. Recurring ozone error is spatially correlated with CO and NOy error. Sensitivity studies show that Mexico City aerosol can reduce regional photolysis rates by 40% and ozone formation by 5-10%. Mexico City emissions can enhance NOy and O3 concentrations over the Gulf of Mexico by up to 10-20%. Mexico City emissions can convert regional ozone production regimes from VOC limited to NOx limited. A method of interpolating observations along flight tracks is shown, which can be used to infer the direction of outflow plumes. Ratios such as O3/NOy and NOx/NOy can be used to provide information on chemical characteristics of the plume, such as age and ozone production regime. Interpolated MTBE observations can be used as a tracer of urban mobile source emissions. Finally, procedures for estimating and gridding emissions inventories in Brazil and Mexico are presented.

  1. Radiofrequency Electromagnetic Radiation and Memory Performance: Sources of Uncertainty in Epidemiological Cohort Studies.

    PubMed

    Brzozek, Christopher; Benke, Kurt K; Zeleke, Berihun M; Abramson, Michael J; Benke, Geza

    2018-03-26

    Uncertainty in experimental studies of exposure to radiation from mobile phones has in the past only been framed within the context of statistical variability. It is now becoming more apparent to researchers that epistemic or reducible uncertainties can also affect the total error in results. These uncertainties are derived from a wide range of sources including human error, such as data transcription, model structure, measurement and linguistic errors in communication. The issue of epistemic uncertainty is reviewed and interpreted in the context of the MoRPhEUS, ExPOSURE and HERMES cohort studies which investigate the effect of radiofrequency electromagnetic radiation from mobile phones on memory performance. Research into this field has found inconsistent results due to limitations from a range of epistemic sources. Potential analytic approaches are suggested based on quantification of epistemic error using Monte Carlo simulation. It is recommended that future studies investigating the relationship between radiofrequency electromagnetic radiation and memory performance pay more attention to treatment of epistemic uncertainties as well as further research into improving exposure assessment. Use of directed acyclic graphs is also encouraged to display the assumed covariate relationship.

  2. Common Cents? The Role of Pennies in the U.S. Economy

    DTIC Science & Technology

    2006-12-01

    economy. This debate stems from political and economical sources with ties to historical references. This paper explores the various reasons for...roughly $.50. Today, that same pound of zinc costs nearly $1.50.14 Additionally, China is currently experiencing an economic whirlwind. The... economic growth in China has turned it from one of the world’s largest zinc exporters to one of the largest zinc importers.15 As a result, many items

  3. Spectral purity study for IPDA lidar measurement of CO2

    NASA Astrophysics Data System (ADS)

    Ma, Hui; Liu, Dong; Xie, Chen-Bo; Tan, Min; Deng, Qian; Xu, Ji-Wei; Tian, Xiao-Min; Wang, Zhen-Zhu; Wang, Bang-Xin; Wang, Ying-Jian

    2018-02-01

    High-sensitivity, globally covered observation of carbon dioxide (CO2) is expected from space-borne integrated path differential absorption (IPDA) lidar, which has been designed as the next-generation measurement. Stringent precision in the space-borne CO2 data, for example 1 ppm or better, is required to address the largest number of carbon-cycle science questions. Spectral purity, defined as the ratio of effectively absorbed energy to the total energy transmitted, is one of the most important system parameters of an IPDA lidar and directly influences the precision of the CO2 retrieval. Because the column-averaged dry-air mixing ratio of CO2 is inferred from a comparison of the two echo pulse signals, a laser output accompanied by unexpected, spectrally broadband background radiation introduces a significant systematic error. In this study, the spectral energy density line shape and the spectral impurity line shape are both modeled as Lorentzian for the simulation, and the latter is assumed to be a component that is not absorbed by CO2. An error equation is derived from IPDA detection theory to calculate the systematic error caused by spectral impurity. For a spectral purity of 99%, the induced error can reach 8.97 ppm.
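
    A back-of-envelope version of that systematic error can be written down directly from the two-wavelength retrieval: if a fraction (1 - p) of the on-line pulse is not absorbed by CO2, the retrieved differential optical depth, and hence the column, is biased low. The optical depths and the worst-case assumption below are ours, chosen only to reproduce the order of magnitude quoted above.

    ```python
    import numpy as np

    def xco2_bias_ppm(purity, tau_on=0.7, tau_off=0.05, xco2_true=400.0):
        """Bias of the retrieved column (ppm) when a fraction (1 - purity) of the
        on-line pulse is assumed to be completely unabsorbed by CO2 (worst case)."""
        t_on, t_off = np.exp(-2 * tau_on), np.exp(-2 * tau_off)   # two-way transmissions
        t_on_meas = purity * t_on + (1 - purity) * t_off          # impure on-line signal
        dtau_meas = -0.5 * np.log(t_on_meas / t_off)
        dtau_true = tau_on - tau_off
        return xco2_true * (dtau_meas / dtau_true - 1.0)

    print(xco2_bias_ppm(0.99))   # about -8 ppm for these assumed optical depths,
                                 # the same order as the ~9 ppm error quoted above
    ```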

  4. Estimating the densities of benzene-derived explosives using atomic volumes.

    PubMed

    Ghule, Vikas D; Nirwan, Ayushi; Devi, Alka

    2018-02-09

    The application of average atomic volumes to predict the crystal densities of benzene-derived energetic compounds of general formula C_aH_bN_cO_d is presented, along with the reliability of this method. The densities of 119 neutral nitrobenzenes, energetic salts, and cocrystals with diverse compositions were estimated and compared with experimental data. Of the 74 nitrobenzenes for which direct comparisons could be made, the % error in the estimated density was within 0-3% for 54 compounds, 3-5% for 12 compounds, and 5-8% for the remaining 8 compounds. Among 45 energetic salts and cocrystals, the % error in the estimated density was within 0-3% for 25 compounds, 3-5% for 13 compounds, and 5-7.4% for 7 compounds. The absolute error surpassed 0.05 g/cm^3 for 27 of the 119 compounds (22%). The largest errors occurred for compounds containing fused rings and for compounds with three -NH2 or -OH groups. Overall, the present approach for estimating the densities of benzene-derived explosives with different functional groups was found to be reliable.
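
    The underlying estimate is simple enough to show in a few lines: the predicted crystal density is the molar mass divided by Avogadro's number times the sum of average atomic volumes. The volume values below are round numbers assumed for illustration, not the fitted averages from the paper.

    ```python
    # Density from average atomic volumes: rho = M / (N_A * sum of atomic volumes).
    AVOGADRO = 6.022e23                                            # 1/mol
    MASS = {"C": 12.011, "H": 1.008, "N": 14.007, "O": 15.999}     # g/mol
    VOL = {"C": 14e-24, "H": 5e-24, "N": 12e-24, "O": 12e-24}      # cm^3/atom (assumed values)

    def estimated_density(formula):
        """Crystal density in g/cm^3 for a composition dict such as {'C': 7, 'H': 5, ...}."""
        molar_mass = sum(n * MASS[el] for el, n in formula.items())         # g/mol
        molecular_volume = sum(n * VOL[el] for el, n in formula.items())    # cm^3/molecule
        return molar_mass / (AVOGADRO * molecular_volume)

    # TNT, C7H5N3O6: roughly 1.6 g/cm^3 with these assumed volumes (experiment: ~1.65)
    print(round(estimated_density({"C": 7, "H": 5, "N": 3, "O": 6}), 2))
    ```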

  5. An Empirical State Error Covariance Matrix Orbit Determination Example

    NASA Technical Reports Server (NTRS)

    Frisbee, Joseph H., Jr.

    2015-01-01

    State estimation techniques serve effectively to provide mean state estimates. However, the state error covariance matrices provided as part of these techniques suffer from some degree of lack of confidence in their ability to adequately describe the uncertainty in the estimated states. A specific problem with the traditional form of state error covariance matrices is that they represent only a mapping of the assumed observation error characteristics into the state space. Any errors that arise from other sources (environment modeling, precision, etc.) are not directly represented in a traditional, theoretical state error covariance matrix. First, consider that an actual observation contains only measurement error and that an estimated observation contains all other errors, known and unknown. Then it follows that a measurement residual (the difference between expected and observed measurements) contains all errors for that measurement. Therefore, a direct and appropriate inclusion of the actual measurement residuals in the state error covariance matrix of the estimate will result in an empirical state error covariance matrix. This empirical state error covariance matrix will fully include all of the errors in the state estimate. The empirical error covariance matrix is determined from a literal reinterpretation of the equations involved in the weighted least squares estimation algorithm. It is a formally correct, empirical state error covariance matrix obtained through use of the average form of the weighted measurement residual variance performance index rather than the usual total weighted residual form. Based on its formulation, this matrix will contain the total uncertainty in the state estimate, regardless of the source of the uncertainty and whether or not that source is anticipated. It is expected that the empirical error covariance matrix will give a better, statistical representation of the state error in poorly modeled systems or when sensor performance is suspect. In its most straightforward form, the technique only requires supplemental calculations to be added to existing batch estimation algorithms. In the current problem being studied, a truth model making use of gravity with spherical, J2, and J4 terms, plus a standard exponential-type atmosphere with simple diurnal and random-walk components, is used. The ability of the empirical state error covariance matrix to account for errors is investigated under four scenarios during orbit estimation. These scenarios are: exact modeling under known measurement errors, exact modeling under corrupted measurement errors, inexact modeling under known measurement errors, and inexact modeling under corrupted measurement errors. For this problem a simple analog of a distributed space surveillance network is used. The sensors in this network make only range measurements, with simple normally distributed measurement errors. The sensors are assumed to have full horizon-to-horizon viewing at any azimuth. For definiteness, an orbit at the approximate altitude and inclination of the International Space Station is used for the study. The comparison analyses of the data involve only total vectors; no investigation of specific orbital elements is undertaken. The total vector analyses look at the chi-square values of the error in the difference between the estimated state and the true modeled state, using both the empirical and theoretical error covariance matrices for each scenario.
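
    Our reading of the construction, reduced to its simplest batch least-squares form, is sketched below: the usual theoretical covariance is rescaled by the average weighted residual variance, so that any unmodeled error that shows up in the residuals inflates the reported uncertainty. This is an interpretation of the abstract, not the author's code.

    ```python
    import numpy as np

    def empirical_covariance(H, W, residuals):
        """Empirical state error covariance for a weighted least-squares batch estimate.

        H         : (m, n) matrix of measurement partials
        W         : (m, m) measurement weight matrix
        residuals : (m,) vector of observed-minus-computed measurements
        """
        m = len(residuals)
        P_theory = np.linalg.inv(H.T @ W @ H)        # maps assumed data noise into state space
        scale = (residuals @ W @ residuals) / m      # average weighted residual variance
        return scale * P_theory                      # inflated (empirical) covariance
    ```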

  6. Source imaging of potential fields through a matrix space-domain algorithm

    NASA Astrophysics Data System (ADS)

    Baniamerian, Jamaledin; Oskooi, Behrooz; Fedi, Maurizio

    2017-01-01

    Imaging of potential fields yields a fast 3D representation of the source distribution of potential fields. Imaging methods are all based on multiscale methods, allowing the source parameters of potential fields to be estimated from a simultaneous analysis of the field at various scales or, in other words, at many altitudes. Accuracy in performing upward continuation and differentiation of the field therefore has a key role for this class of methods. We here describe an accurate method for performing upward continuation and vertical differentiation in the space domain. We perform a direct discretization of the integral equations for upward continuation and the Hilbert transform; from these equations we then define matrix operators performing the transformation, which are symmetric (upward continuation) or anti-symmetric (differentiation), respectively. Thanks to these properties, just the first row of each matrix needs to be computed, which dramatically decreases the computational cost. Our approach allows a simple procedure, with the advantage of not requiring large data extension or tapering, as is instead needed for Fourier-domain computation. It also allows level-to-drape upward continuation and a stable differentiation at high frequencies; finally, upward continuation and differentiation kernels may be merged into a single kernel. The accuracy of our approach is shown to be important for multiscale algorithms, such as the continuous wavelet transform or the DEXP (depth from extreme points) method, because border errors, which tend to propagate largely at the largest scales, are radically reduced. The application of our algorithm to synthetic and real-case gravity and magnetic data sets confirms the accuracy of our space-domain strategy over FFT algorithms and standard convolution procedures.
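
    A one-dimensional profile analogue makes the "compute only the first row" idea concrete: the discretized upward-continuation operator is a symmetric Toeplitz matrix, so a single kernel row defines the whole operator. This is our simplified 2D (profile) version, not the authors' 3D implementation.

    ```python
    import numpy as np
    from scipy.linalg import toeplitz

    def upward_continue_profile(field, dx, h):
        """Continue a profile of potential-field data upward by a height h (units of dx)."""
        offsets = np.arange(len(field)) * dx
        kernel = (h / np.pi) / (offsets**2 + h**2) * dx   # discretized 2D continuation kernel
        K = toeplitz(kernel)                              # symmetric operator built from one row
        return K @ field

    # Example: continue a bell-shaped synthetic anomaly 500 m upward on a 100 m grid
    x = np.arange(0.0, 20000.0, 100.0)
    anomaly = 50.0 / (1.0 + ((x - 10000.0) / 800.0) ** 2)               # nT
    print(upward_continue_profile(anomaly, dx=100.0, h=500.0).max())    # attenuated peak
    ```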

  7. The global distribution of ammonia emissions from seabird colonies

    NASA Astrophysics Data System (ADS)

    Riddick, S. N.; Dragosits, U.; Blackall, T. D.; Daunt, F.; Wanless, S.; Sutton, M. A.

    2012-08-01

    Seabird colonies represent a significant source of atmospheric ammonia (NH3) in remote maritime systems, producing a source of nitrogen that may encourage plant growth, alter terrestrial plant community composition and affect the surrounding marine ecosystem. To investigate seabird NH3 emissions on a global scale, we developed a contemporary seabird database including a total seabird population of 261 million breeding pairs. We used this in conjunction with a bioenergetics model to estimate the mass of nitrogen excreted by all seabirds at each breeding colony. These results, combined with the findings of mid-latitude field studies of volatilization rates, were used to estimate the global distribution of NH3 emissions from seabird colonies on an annual basis. The largest uncertainty in our emission estimate concerns the potential temperature dependence of NH3 emission. To investigate this, we calculated and compared temperature-independent emission estimates with a maximum feasible temperature-dependent emission, based on the thermodynamic dissociation and solubility equilibria. Using the temperature-independent approach, we estimate global NH3 emissions from seabird colonies at 404 Gg NH3 per year. By comparison, since most seabirds are located in relatively cold circumpolar locations, the thermodynamically dependent estimate is 136 Gg NH3 per year. Actual global emissions are expected to be within these bounds, as other factors, such as non-linear interactions with water availability and surface infiltration, moderate the theoretical temperature response. Combining sources of error from temperature (±49%), seabird population estimates (±36%), variation in diet composition (±23%) and non-breeder attendance (±13%) gives a central estimate, with an overall uncertainty range, of NH3 emission from seabird colonies of 270 [97-442] Gg NH3 per year. These emissions are environmentally relevant as they primarily occur as "hot-spots" in otherwise pristine environments with low anthropogenic emissions.
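
    The quoted range is consistent with combining the individual fractional uncertainties in quadrature; this is our own check, and the authors may have combined the terms differently.

    ```latex
    \[
      \sqrt{0.49^{2} + 0.36^{2} + 0.23^{2} + 0.13^{2}} \approx 0.66 ,
      \qquad
      270\,(1 \pm 0.66)\ \mathrm{Gg\ NH_{3}\ yr^{-1}} \approx [92,\ 448]\ \mathrm{Gg\ NH_{3}\ yr^{-1}},
    \]
    % close to the 97-442 Gg range quoted in the abstract.
    ```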

  8. Backus Effect and Perpendicular Errors in Harmonic Models of Real vs. Synthetic Data

    NASA Technical Reports Server (NTRS)

    Voorhies, C. V.; Santana, J.; Sabaka, T.

    1999-01-01

    Measurements of geomagnetic scalar intensity on a thin spherical shell alone are not enough to separate internal from external source fields; moreover, such scalar data are not enough for accurate modeling of the vector field from internal sources because of unmodeled fields and small data errors. Spherical harmonic models of the geomagnetic potential fitted to scalar data alone therefore suffer from the well-understood Backus effect and perpendicular errors. Curiously, errors in some models of simulated 'data' are very much less than those in models of real data. We analyze select Magsat vector and scalar measurements separately to illustrate Backus effect and perpendicular errors in models of real scalar data. By using a model to synthesize 'data' at the observation points, and by adding various types of 'noise', we illustrate such errors in models of synthetic 'data'. Perpendicular errors prove quite sensitive to the maximum degree in the spherical harmonic expansion of the potential field model fitted to the scalar data. Small errors in models of synthetic 'data' are found to be an artifact of matched truncation levels. For example, consider scalar synthetic 'data' computed from a degree 14 model. A degree 14 model fitted to such synthetic 'data' yields negligible error, but amplifies 4 nT (rmss) added noise into a 60 nT error (rmss); however, a degree 12 model fitted to the noisy 'data' suffers a 492 nT error (rmss through degree 12). Geomagnetic measurements remain unaware of model truncation, so the small errors indicated by some simulations cannot be realized in practice. Errors in models fitted to scalar data alone approach 1000 nT (rmss) and several thousand nT (maximum).

  9. Treatment of growth failure in the absence of GH signaling: The Ecuadorian experience.

    PubMed

    Guevara-Aguirre, Jaime; Guevara, Alexandra; Guevara, Carolina

    2018-02-01

    Recombinant human insulin-like growth factor-1 (rhIGF-1) treatment studies of growth failure in the absence of growth hormone (GH) signaling (GH insensitivity, GHI; Laron syndrome, LS; GH receptor deficiency, GHRD) have taken place in many locations around the globe. Results from these trials are comparable, and the slight differences reported can be attributed to specific circumstances at different research sites. rhIGF-I treatment studies of GHI in Ecuador included various trials performed on children belonging to the largest and only homogeneous cohort of subjects with this condition in the world. All trials were performed by the same team of investigators and, during study periods, subjects received similar nutritional, physical activity and medical advice. The combination of these inherent conditions most likely creates fewer sources of variability during the research process. Indeed, diagnosis, selection and inclusion of research subjects; methodology used; transport, storage and delivery of study drug; data collection, monitoring and auditing; data analysis, discussion of results, conclusion inferences and reporting, etc., were subject to the same sources of error. For the above-mentioned reasons, we mainly cover conclusions derived from rhIGF-I treatment studies of Ecuadorian children with GHRD due to homozygosity for a splice-site mutation in the GHR gene, whose unaffected parents were both heterozygous for the same mutation. We also describe studies of rhIGF-I administration in adolescent and adult subjects with GHRD, from the same cohort and with the same genetic anomaly.

  10. Developing a treatment planning process and software for improved translation of photodynamic therapy

    NASA Astrophysics Data System (ADS)

    Cassidy, J.; Zheng, Z.; Xu, Y.; Betz, V.; Lilge, L.

    2017-04-01

    Background: The majority of de novo cancers are diagnosed in low- and middle-income countries, which often lack the resources to provide adequate therapeutic options. Non- or minimally invasive therapies such as photodynamic therapy (PDT) or photothermal therapies could become part of the overall treatment options in these countries. However, widespread acceptance is hindered by the current empirical training of surgeons in these optical techniques and a lack of easily usable treatment-optimizing tools. Methods: Based on the image processing program ITK-SNAP and the publicly available FullMonte light propagation software, a workflow is proposed that allows for personalized PDT treatment planning. Starting with contoured clinical CT or MRI images, the workflow proceeds through generation of 3D tetrahedral models in silico, execution of the Monte Carlo simulation, and presentation of the 3D fluence rate, Φ [mW cm-2], distribution, from which a treatment plan optimizing photon source placement is developed. Results: Allowing 1-2 days for the installation of the required programs, novices can generate their first fluence, H [J cm-2], or Φ distribution in a matter of hours. This is reduced to tens of minutes with some training. Executing the photon simulation calculations is rapid and is not the performance-limiting process. The largest sources of error are uncertainties in the contouring and unknown tissue optical properties. Conclusions: The presented FullMonte simulation is the fastest tetrahedral-based photon propagation program and provides the basis for PDT treatment planning processes, enabling a faster proliferation of low-cost, minimally invasive personalized cancer therapies.

  11. Acquiring Research-grade ALSM Data in the Commercial Marketplace

    NASA Astrophysics Data System (ADS)

    Haugerud, R. A.; Harding, D. J.; Latypov, D.; Martinez, D.; Routh, S.; Ziegler, J.

    2003-12-01

    The Puget Sound Lidar Consortium, working with TerraPoint, LLC, has procured a large volume of ALSM (topographic lidar) data for scientific research. Research-grade ALSM data can be characterized by their completeness, density, and accuracy. Complete data include, at a minimum, X, Y, Z, time, and classification (ground, vegetation, structure, blunder) for each laser reflection. Off-nadir angle and return number for multiple returns are also useful. We began with a pulse density of 1/sq m, and after limited experiments still find this density satisfactory in the dense second-growth forests of western Washington. Lower pulse densities would have produced unacceptably limited sampling in forested areas and aliased some topographic features. Higher pulse densities do not produce markedly better topographic models, in part because of limitations of reproducibility between the overlapping survey swaths used to achieve higher density. Our experience in a variety of forest types demonstrates that the fraction of pulses that produce ground returns varies with vegetation cover, laser beam divergence, laser power, and detector sensitivity, but we have not quantified this relationship. The most significant operational limits on vertical accuracy of ALSM appear to be instrument calibration and the accuracy with which returns are classified as ground or vegetation. TerraPoint has recently implemented in-situ calibration using overlapping swaths (Latypov and Zosse, 2002, see http://www.terrapoint.com/News_damirACSM_ASPRS2002.html). On the consumer side, we routinely perform a similar overlap analysis to produce maps of relative Z error between swaths; we find that in bare, low-slope regions the in-situ calibration has reduced this internal Z error to 6-10 cm RMSE. Comparison with independent ground control points commonly illuminates inconsistencies in how GPS heights have been reduced to orthometric heights. Once these inconsistencies are resolved, it appears that the internal errors are the bulk of the error of the survey. The error maps suggest that with in-situ calibration, minor time-varying errors with a period of circa 1 sec are the largest remaining source of survey error. For forested terrain, limited ground penetration and errors in return classification can severely limit the accuracy of resulting topographic models. Initial work by Haugerud and Harding demonstrated the feasibility of fully-automatic return classification; however, TerraPoint has found that better results can be obtained more effectively with 3rd-party classification software that allows a mix of automated routines and human intervention. Our relationship has been evolving since early 2000. Important aspects of this relationship include close communication between data producer and consumer, a willingness to learn from each other, significant technical expertise and resources on the consumer side, and continued refinement of achievable, quantitative performance and accuracy specifications. Most recently we have instituted a slope-dependent Z accuracy specification that TerraPoint first developed as a heuristic for surveying mountainous terrain in Switzerland. We are now working on quantifying the internal consistency of topographic models in forested areas, using a variant of overlap analysis, and standards for the spatial distribution of internal errors.

  12. Soil pH Errors Propagation from Measurements to Spatial Predictions - Cost Benefit Analysis and Risk Assessment Implications for Practitioners and Modelers

    NASA Astrophysics Data System (ADS)

    Owens, P. R.; Libohova, Z.; Seybold, C. A.; Wills, S. A.; Peaslee, S.; Beaudette, D.; Lindbo, D. L.

    2017-12-01

    The measurement errors and spatial prediction uncertainties of soil properties in the modeling community are usually assessed against measured values when available. However, of equal importance is the assessment of the impacts of errors and uncertainty on cost-benefit analysis and risk assessments. Soil pH was selected as one of the most commonly measured soil properties used for liming recommendations. The objective of this study was to assess the error size from different sources and their implications with respect to management decisions. Error sources include measurement methods, laboratory sources, pedotransfer functions, database transactions, spatial aggregations, etc. Several databases of measured and predicted soil pH were used for this study, including the United States National Cooperative Soil Survey Characterization Database (NCSS-SCDB) and the US Soil Survey Geographic (SSURGO) Database. The distribution of errors among different sources, from measurement methods to spatial aggregation, showed a wide range of values. The greatest RMSE of 0.79 pH units was from spatial aggregation (SSURGO vs kriging), while the measurement methods had the lowest RMSE of 0.06 pH units. Assuming the order of data acquisition based on the transaction distance, i.e. from measurement method to spatial aggregation, the RMSE increased from 0.06 to 0.8 pH units, suggesting an "error propagation". This has major implications for practitioners and the modeling community. Most soil liming rate recommendations are based on 0.1 pH unit increments, while the desired soil pH level increments are based on 0.4 to 0.5 pH units. Thus, even when the measured and desired target soil pH are the same, most guidelines recommend 1 ton ha-1 lime, which translates into 111 ha-1 that the farmer has to factor into the cost-benefit analysis. However, this analysis needs to be based on uncertainty predictions (0.5-1.0 pH units) rather than measurement errors (0.1 pH units), which would translate into a 555-1,111 investment that needs to be assessed against the risk. The modeling community can benefit from such analysis; however, error size and spatial distribution for global and regional predictions need to be assessed against the variability of other drivers and the impact on management decisions.

  13. Economic analysis of electronic waste recycling: modeling the cost and revenue of a materials recovery facility in California.

    PubMed

    Kang, Hai-Yong; Schoenung, Julie M

    2006-03-01

    The objectives of this study are to identify the various techniques used for treating electronic waste (e-waste) at material recovery facilities (MRFs) in the state of California and to investigate the costs and revenue drivers for these techniques. The economics of a representative e-waste MRF are evaluated by using technical cost modeling (TCM). MRFs are a critical element in the infrastructure being developed within the e-waste recycling industry. At an MRF, collected e-waste can become marketable output products including resalable systems/components and recyclable materials such as plastics, metals, and glass. TCM has two main constituents, inputs and outputs. Inputs are process-related and economic variables, which are directly specified in each model. Inputs can be divided into two parts: inputs for cost estimation and for revenue estimation. Outputs are the results of modeling and consist of costs and revenues, distributed by unit operation, cost element, and revenue source. The results of the present analysis indicate that the largest cost driver for the operation of the defined California e-waste MRF is the materials cost (37% of total cost), which includes the cost to outsource the recycling of the cathode ray tubes (CRTs) ($0.33/kg); the second largest cost driver is labor cost (28% of total cost without accounting for overhead). The other cost drivers are transportation, building, and equipment costs. The most costly unit operation is cathode ray tube glass recycling, and the next are sorting, collecting, and dismantling. The largest revenue source is the fee charged to the customer; metal recovery is the second largest revenue source.

  14. Nonspinning numerical relativity waveform surrogates: assessing the model

    NASA Astrophysics Data System (ADS)

    Field, Scott; Blackman, Jonathan; Galley, Chad; Scheel, Mark; Szilagyi, Bela; Tiglio, Manuel

    2015-04-01

    Recently, multi-modal gravitational waveform surrogate models have been built directly from data numerically generated by the Spectral Einstein Code (SpEC). I will describe ways in which the surrogate model error can be quantified. This task, in turn, requires (i) characterizing differences between waveforms computed by SpEC and those predicted by the surrogate model and (ii) estimating errors associated with the SpEC waveforms from which the surrogate is built. Both pieces can have numerous sources of numerical and systematic errors. We make an attempt to study the most dominant error sources and, ultimately, the surrogate model's fidelity. These investigations yield information about the surrogate model's uncertainty as a function of time (or frequency) and parameter, and could be useful in parameter estimation studies which seek to incorporate model error. Finally, I will conclude by comparing the numerical relativity surrogate model to other inspiral-merger-ringdown models. A companion talk will cover the building of multi-modal surrogate models.

  15. Small Atomic Orbital Basis Set First‐Principles Quantum Chemical Methods for Large Molecular and Periodic Systems: A Critical Analysis of Error Sources

    PubMed Central

    Sure, Rebecca; Brandenburg, Jan Gerit

    2015-01-01

    In quantum chemical computations the combination of Hartree-Fock or a density functional theory (DFT) approximation with relatively small atomic orbital basis sets of double-zeta quality is still widely used, for example, in the popular B3LYP/6-31G* approach. In this Review, we critically analyze the two main sources of error in such computations, that is, the basis set superposition error on the one hand and the missing London dispersion interactions on the other. We review various strategies to correct those errors and present exemplary calculations on mainly noncovalently bound systems of widely varying size. Energies and geometries of small dimers, large supramolecular complexes, and molecular crystals are covered. We conclude that it is not justified to rely on fortunate error compensation, as the main inconsistencies can be cured by modern correction schemes which clearly outperform the plain mean-field methods. PMID:27308221
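
    For context, the basis set superposition error mentioned above is conventionally estimated with the counterpoise correction; this is the standard textbook form, added here for the reader, not a formula reproduced from the Review.

    ```latex
    \[
      E_{\mathrm{int}}^{\mathrm{CP}}
        \;=\; E_{AB}^{AB} \;-\; E_{A}^{AB} \;-\; E_{B}^{AB} ,
    \]
    % Subscripts denote the fragment (dimer AB, or monomers A and B, each at its geometry
    % in the dimer); superscripts denote the basis set, here always the full dimer basis,
    % i.e. ghost basis functions are placed on the partner fragment for the monomer runs.
    ```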

  16. Dynamic performance of an aero-assist spacecraft - AFE

    NASA Technical Reports Server (NTRS)

    Chang, Ho-Pen; French, Raymond A.

    1992-01-01

    Dynamic performance of the Aero-assist Flight Experiment (AFE) spacecraft was investigated using a high-fidelity 6-DOF simulation model. Baseline guidance logic, control logic, and a strapdown navigation system to be used on the AFE spacecraft are also modeled in the 6-DOF simulation. During the AFE mission, uncertainties in the environment and the spacecraft are described by an error space which includes both correlated and uncorrelated error sources. The principal error sources modeled in this study include navigation errors, initial state vector errors, atmospheric variations, aerodynamic uncertainties, center-of-gravity off-sets, and weight uncertainties. The impact of the perturbations on the spacecraft performance is investigated using Monte Carlo repetitive statistical techniques. During the Solid Rocket Motor (SRM) deorbit phase, a target flight path angle of -4.76 deg at entry interface (EI) offers very high probability of avoiding SRM casing skip-out from the atmosphere. Generally speaking, the baseline designs of the guidance, navigation, and control systems satisfy most of the science and mission requirements.

  17. New Methods for Assessing and Reducing Uncertainty in Microgravity Studies

    NASA Astrophysics Data System (ADS)

    Giniaux, J. M.; Hooper, A. J.; Bagnardi, M.

    2017-12-01

    Microgravity surveying, also known as dynamic or 4D gravimetry, is a time-dependent geophysical method used to detect mass fluctuations within the shallow crust by analysing temporal changes in relative gravity measurements. We present here a detailed uncertainty analysis of temporal gravity measurements, considering for the first time all possible error sources, including tilt, errors in drift estimation, and timing errors. We find that some error sources that are usually ignored can have a significant impact on the total error budget, and it is therefore likely that some gravity signals have been misinterpreted in previous studies. Our analysis leads to new methods for reducing some of the uncertainties associated with residual gravity estimation. In particular, we propose different approaches for drift estimation and free-air correction depending on the survey set-up. We also provide formulae to recalculate uncertainties for past studies and lay out a framework for best practice in future studies. We demonstrate our new approach on volcanic case studies, which include Kilauea in Hawaii and Askja in Iceland.

  18. Dynamically correcting two-qubit gates against any systematic logical error

    NASA Astrophysics Data System (ADS)

    Calderon Vargas, Fernando Antonio

    The reliability of quantum information processing depends on the ability to deal with noise and error in an efficient way. A significant source of error in many settings is coherent, systematic gate error. This work introduces a set of composite pulse sequences that generate maximally entangling gates and correct all systematic errors within the logical subspace to arbitrary order. These sequences are applicable for any two-qubit interaction Hamiltonian, and make no assumptions about the underlying noise mechanism except that it is constant on the timescale of the operation. The prime use for our results will be in cases where one has limited knowledge of the underlying physical noise and control mechanisms, highly constrained control, or both. In particular, we apply these composite pulse sequences to the quantum system formed by two capacitively coupled singlet-triplet qubits, which is characterized by having constrained control and noise sources that are low frequency and of a non-Markovian nature.

  19. Signal location using generalized linear constraints

    NASA Astrophysics Data System (ADS)

    Griffiths, Lloyd J.; Feldman, D. D.

    1992-01-01

    This report has presented a two-part method for estimating the directions of arrival of uncorrelated narrowband sources when there are arbitrary phase errors and angle independent gain errors. The signal steering vectors are estimated in the first part of the method; in the second part, the arrival directions are estimated. It should be noted that the second part of the method can be tailored to incorporate additional information about the nature of the phase errors. For example, if the phase errors are known to be caused solely by element misplacement, the element locations can be estimated concurrently with the DOA's by trying to match the theoretical steering vectors to the estimated ones. Simulation results suggest that, for general perturbation, the method can resolve closely spaced sources under conditions for which a standard high-resolution DOA method such as MUSIC fails.

  20. Geodetic positioning using a global positioning system of satellites

    NASA Technical Reports Server (NTRS)

    Fell, P. J.

    1980-01-01

    Geodetic positioning using range, integrated Doppler, and interferometric observations from a constellation of twenty-four Global Positioning System satellites is analyzed. A summary of the proposals for geodetic positioning and baseline determination is given which includes a description of measurement techniques and comments on rank deficiency and error sources. An analysis of variance comparison of range, Doppler, and interferometric time delay to determine their relative geometric strength for baseline determination is included. An analytic examination to the effect of a priori constraints on positioning using simultaneous observations from two stations is presented. Dynamic point positioning and baseline determination using range and Doppler is examined in detail. Models for the error sources influencing dynamic positioning are developed. Included is a discussion of atomic clock stability, and range and Doppler observation error statistics based on random correlated atomic clock error are derived.

  1. Variable mid-latitude X-ray source 3U 0042+32

    NASA Technical Reports Server (NTRS)

    Rappaport, S.; Clark, G. W.; Dower, R.; Doxsey, R.; Jernigan, G.; Li, F.

    1977-01-01

    A celestial location with an error circle of radius one minute is reported for the mid-latitude X-ray source 3U 0042+32; comparison of observations from the Ariel-5 and Uhuru satellites with data obtained from two independent rotation modulation collimators yields the precise position. Studies to detect regular pulsations and energy spectra of the X-ray source are also discussed. Analysis of the peak X-ray flux in the error circle, as well as certain distance constraints, suggests that the source of the flux may be a neutron star in a distant galactic binary system having a companion that undergoes episodes of mass transfer due to eruption or orbital eccentricity.

  2. The Brera Multiscale Wavelet ROSAT HRI Source Catalog. I. The Algorithm

    NASA Astrophysics Data System (ADS)

    Lazzati, Davide; Campana, Sergio; Rosati, Piero; Panzera, Maria Rosa; Tagliaferri, Gianpiero

    1999-10-01

    We present a new detection algorithm based on the wavelet transform for the analysis of high-energy astronomical images. The wavelet transform, because of its multiscale structure, is suited to the optimal detection of pointlike as well as extended sources, regardless of any loss of resolution with the off-axis angle. Sources are detected as significant enhancements in the wavelet space, after the subtraction of the nonflat components of the background. Detection thresholds are computed through Monte Carlo simulations in order to establish the expected number of spurious sources per field. The source characterization is performed through a multisource fitting in the wavelet space. The procedure is designed to correctly deal with very crowded fields, allowing for the simultaneous characterization of nearby sources. To obtain a fast and reliable estimate of the source parameters and related errors, we apply a novel decimation technique that, taking into account the correlation properties of the wavelet transform, extracts a subset of almost independent coefficients. We test the performance of this algorithm on synthetic fields, analyzing with particular care the characterization of sources in poor background situations, where the assumption of Gaussian statistics does not hold. In these cases, for which standard wavelet algorithms generally provide underestimated errors, we infer errors through a procedure that relies on robust basic statistics. Our algorithm is well suited to the analysis of images taken with the new generation of X-ray instruments equipped with CCD technology, which will produce images with very low background and/or high source density.

  3. In vivo quantitative imaging of point-like bioluminescent and fluorescent sources: Validation studies in phantoms and small animals post mortem

    NASA Astrophysics Data System (ADS)

    Comsa, Daria Craita

    2008-10-01

    There is a real need for improved small animal imaging techniques to enhance the development of therapies in which animal models of disease are used. Optical methods for imaging have been extensively studied in recent years, due to their high sensitivity and specificity. Methods like bioluminescence and fluorescence tomography report promising results for 3D reconstructions of source distributions in vivo. However, no standard methodology exists for optical tomography, and various groups are pursuing different approaches. In a number of studies on small animals, the bioluminescent or fluorescent sources can be reasonably approximated as point or line sources. Examples include images of bone metastases confined to the bone marrow. Starting with this premise, we propose a simpler, faster, and inexpensive technique to quantify optical images of point-like sources. The technique avoids the computational burden of a tomographic method by using planar images and a mathematical model based on diffusion theory. The model employs in situ optical properties estimated from video reflectometry measurements. Modeled and measured images are compared iteratively using a Levenberg-Marquardt algorithm to improve estimates of the depth and strength of the bioluminescent or fluorescent inclusion. The performance of the technique to quantify bioluminescence images was first evaluated on Monte Carlo simulated data. Simulated data also facilitated a methodical investigation of the effect of errors in tissue optical properties on the retrieved source depth and strength. It was found that, for example, an error of 4 % in the effective attenuation coefficient led to 4 % error in the retrieved depth for source depths of up to 12mm, while the error in the retrieved source strength increased from 5.5 % at 2mm depth, to 18 % at 12mm depth. Experiments conducted on images from homogeneous tissue-simulating phantoms showed that depths up to 10mm could be estimated within 8 %, and the relative source strength within 20 %. For sources 14mm deep, the inaccuracy in determining the relative source strength increased to 30 %. Measurements on small animals post mortem showed that the use of measured in situ optical properties to characterize heterogeneous tissue resulted in a superior estimation of the source strength and depth compared to when literature optical properties for organs or tissues were used. Moreover, it was found that regardless of the heterogeneity of the implant location or depth, our algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the emission image. Our bioluminescence algorithm was generally able to predict the source strength within a factor of 2 of the true strength, but the performance varied with the implant location and depth. In fluorescence imaging a more complex technique is required, including knowledge of tissue optical properties at both the excitation and emission wavelengths. A theoretical study using simulated fluorescence data showed that, for example, for a source 5 mm deep in tissue, errors of up to 15 % in the optical properties would give rise to errors of +/-0.7 mm in the retrieved depth and the source strength would be over- or under-estimated by a factor ranging from 1.25 to 2. Fluorescent sources implanted in rats post mortem at the same depth were localized with an error just slightly higher than predicted theoretically: a root-mean-square value of 0.8 mm was obtained for all implants 5 mm deep. 
However, for this source depth, the source strength was assessed within a factor ranging from 1.3 to 4.2 from the value estimated in a controlled medium. Nonetheless, similarly to the bioluminescence study, the fluorescence quantification algorithm consistently showed an advantage over the simple assessment of the source strength based on the signal strength in the fluorescence image. Few studies have been reported in the literature that reconstruct known sources of bioluminescence or fluorescence in vivo or in heterogeneous phantoms. The few reported results show that the 3D tomographic methods have not yet reached their full potential. In this context, the simplicity of our technique emerges as a strong advantage.

  4. Reed-Solomon error-correction as a software patch mechanism.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pendley, Kevin D.

    This report explores how error-correction data generated by a Reed-Solomon code may be used as a mechanism to apply changes to an existing installed codebase. Using the Reed-Solomon code to generate error-correction data for a changed or updated codebase will allow the error-correction data to be applied to an existing codebase to both validate and introduce changes or updates from some upstream source to the existing installed codebase.
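
    A conceptual sketch of the idea is given below, using the third-party Python package reedsolo (the report does not name an implementation, so the API and parameters here are our assumptions): parity bytes computed over the new file are shipped as the "patch", and decoding the old file together with that parity both validates and reconstructs the new file, provided the two differ in at most half as many bytes as there are parity symbols.

    ```python
    from reedsolo import RSCodec

    NSYM = 32                         # parity bytes; corrects up to NSYM // 2 changed bytes
    rsc = RSCodec(NSYM)

    new_code = b"def area(r):\n    return 3.14159 * r * r\n"
    old_code = b"def area(r):\n    return 3.14000 * r * r\n"   # installed, outdated copy

    # Upstream: compute parity over the NEW file (short enough here to fit one RS block).
    parity = bytes(rsc.encode(new_code))[-NSYM:]               # only these bytes are shipped

    # Target machine: append the shipped parity to the local OLD file and decode.
    result = rsc.decode(bytearray(old_code) + bytearray(parity))
    repaired = result[0] if isinstance(result, tuple) else result  # reedsolo API varies by version
    assert bytes(repaired) == new_code                         # old copy corrected into new one
    ```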

  5. LANDSAT 4 band 6 data evaluation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    Comparison of underflight data with satellite estimates of temperature revealed significant gain calibration errors. The source of the LANDSAT 5 band 6 error and its reproducibility are not yet adequately defined. The error can be accounted for using underflight or ground truth data. When underflight data are used to correct the satellite data, the residual error for the scene studied was 1.3 K when the predicted temperatures were compared to measured surface temperatures.

  6. Seismic Yield Estimates of UTTR Surface Explosions

    NASA Astrophysics Data System (ADS)

    Hayward, C.; Park, J.; Stump, B. W.

    2016-12-01

    Since 2007 the Utah Test and Training Range (UTTR) has used explosive demolition as a method to destroy excess solid rocket motors ranging in size from 19 tons to less than 2 tons. From 2007 to 2014, 20 high quality seismic stations within 180 km recorded most of the more than 200 demolitions. This provides an interesting dataset to examine seismic source scaling for surface explosions. Based upon observer records, shots were of 4 sizes, corresponding to the size of the rocket motors. Instrument corrections for the stations were quality controlled by examining the P-wave amplitudes of all magnitude 6.5-8 earthquakes from 30 to 90 degrees away. For each station recording, the instrument corrected RMS seismic amplitude in the first 10 seconds after the P-onset was calculated. Waveforms at any given station for all the observed explosions are nearly identical. The observed RMS amplitudes were fit to a model including a term for combined distance and station correction, a term for observed RMS amplitude, and an error term for the actual demolition size. The observed seismic yield relationship is RMS = k × Weight^(2/3). Estimated yields for the largest shots vary by about 50% from the stated weights, with a nearly normal distribution.
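
    The stated scaling can be checked with an ordinary log-space regression; the sketch below uses synthetic amplitudes (not UTTR data) just to show the form of the fit.

    ```python
    # Recover RMS = k * W**(2/3) by linear regression of log10(RMS) on log10(W).
    import numpy as np

    rng = np.random.default_rng(2)
    weights = np.repeat([2.0, 5.0, 9.0, 19.0], 50)      # tons; four shot sizes, many shots each
    log_rms = np.log10(3.0) + (2.0 / 3.0) * np.log10(weights) + rng.normal(0, 0.1, weights.size)

    A = np.column_stack([np.ones_like(weights), np.log10(weights)])
    coef, *_ = np.linalg.lstsq(A, log_rms, rcond=None)
    print("k =", 10 ** coef[0], " exponent =", coef[1])  # exponent should come out near 2/3
    ```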

  7. Using PS1 and Type Ia Supernovae To Make Most Precise Measurement of Dark Energy To Date

    NASA Astrophysics Data System (ADS)

    Scolnic, Daniel; Pan-STARRS

    2018-01-01

    I will review recent results that present optical light curves, redshifts, and classifications for 361 spectroscopically confirmed Type Ia supernovae (SNeIa) discovered by the Pan-STARRS1 (PS1) Medium Deep Survey. I will go over improvements to the PS1 SN photometry, astrometry and calibration that reduce the systematic uncertainties in the PS1 SN Ia distances. We combined distances of PS1 SNe with distance estimates of SNIa from SDSS, SNLS, various low-z and HST samples to form the largest combined sample of SN Ia consisting of a total of ~1050 SN Ia ranging from 0.01 < z < 2.3, which we call the ‘Pantheon Sample’. Photometric calibration uncertainties have long dominated the systematic error budget of every major analysis of cosmological parameters with SNIa. Using the PS1 relative calibration, we have reduced these calibration systematics to the point where they are similar in magnitude to the other major sources of known systematic uncertainties: the nature of the intrinsic scatter of SNIa and modeling of selection effects. I will present measurements of dark energy which are now the most precise measurements of dark energy to date.

  8. Analysis of the phase control of the ITER ICRH antenna array. Influence on the load resilience and radiated power spectrum

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Messiaen, A., E-mail: a.messiaen@fz-juelich.de; Ongena, J.; Vervier, M.

    2015-12-10

    The paper analyses how the phasing of the ITER ICRH 24-strap array evolves from the power sources up to the strap currents of the antenna. The phasing control and its coherence through the feeding circuits, with prematching and an automatic matching and decoupling network, are studied by modeling starting from the TOPICA matrix of the antenna array for a low-coupling plasma profile and for current drive phasing (the worst case for mutual coupling effects). The main results of the analysis are: (i) the strap current amplitude is well controlled by the antinode V_max amplitude of the feeding lines, (ii) the best toroidal phasing control is obtained by adjusting the mean phase of V_max of each poloidal strap column, (iii) with a well-adjusted system the largest strap current phasing error is ±20°, (iv) the effect on load resilience remains well below the maximum affordable VSWR of the generators, and (v) the effect on the radiated power spectrum versus k_// computed by means of the coupling code ANTITER II remains small for the considered cases.

  9. The Angular Power Spectrum of BATSE 3B Gamma-Ray Bursts

    NASA Technical Reports Server (NTRS)

    Tegmark, Max; Hartmann, Dieter H.; Briggs, Michael S.; Meegan, Charles A.

    1996-01-01

    We compute the angular power spectrum C_l from the BATSE 3B catalog of 1122 gamma-ray bursts and find no evidence for clustering on any scale. These constraints bridge the entire range from small scales (which probe source clustering and burst repetition) to the largest scales (which constrain possible anisotropies from the Galactic halo or from nearby cosmological large-scale structures). We develop an analysis technique that takes the angular position errors into account. For specific clustering or repetition models, strong upper limits can be obtained down to scales l ≈ 30, corresponding to a couple of degrees on the sky. The minimum-variance burst weighting that we employ is visualized graphically as an all-sky map in which each burst is smeared out by an amount corresponding to its position uncertainty. We also present separate bandpass-filtered sky maps for the quadrupole term and for the multipole ranges l = 3-10 and l = 11-30, so that the fluctuations on different angular scales can be inspected separately for visual features such as localized 'hot spots' or structures aligned with the Galactic plane. These filtered maps reveal no apparent deviations from isotropy.
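    A minimal sketch of an isotropy check in this spirit (not the paper's minimum-variance estimator), assuming the healpy package; the burst positions below are random placeholders and a single fixed smoothing beam stands in for the per-burst position errors:

        import numpy as np
        import healpy as hp

        rng = np.random.default_rng(0)
        NSIDE = 32
        npix = hp.nside2npix(NSIDE)

        # Placeholder catalog: 1122 isotropic sky positions (degrees)
        lon = rng.uniform(0.0, 360.0, 1122)
        lat = np.degrees(np.arcsin(rng.uniform(-1.0, 1.0, 1122)))

        # Bin bursts into a HEALPix map and convert counts to overdensity
        pix = hp.ang2pix(NSIDE, lon, lat, lonlat=True)
        counts = np.bincount(pix, minlength=npix).astype(float)
        overdensity = counts / counts.mean() - 1.0

        # Crude stand-in for position-error smearing, then the power spectrum up to l = 30
        smeared = hp.smoothing(overdensity, fwhm=np.radians(2.0))
        cl = hp.anafast(smeared, lmax=30)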

  10. Dietary intake assessment using integrated sensors and software

    NASA Astrophysics Data System (ADS)

    Shang, Junqing; Pepin, Eric; Johnson, Eric; Hazel, David; Teredesai, Ankur; Kristal, Alan; Mamishev, Alexander

    2012-02-01

    The area of dietary assessment is becoming increasingly important as obesity rates soar, but valid measurement of the food intake of free-living persons is extraordinarily challenging. Traditional paper-based dietary assessment methods have limitations due to bias, user burden and cost, and therefore improved methods are needed to address important hypotheses related to diet and health. In this paper, we will describe the progress of our mobile Diet Data Recorder System (DDRS), where an electronic device is used for objective measurement of dietary intake in real time and at moderate cost. The DDRS consists of (1) a mobile device that integrates a smartphone and an integrated laser package, (2) software on the smartphone for data collection and laser control, (3) an algorithm to process acquired data for food volume estimation, which is the largest source of error in calculating dietary intake, and (4) a database and interface for data storage and management. The estimated food volume, together with direct entries of food questionnaires and voice recordings, could provide dietitians and nutritional epidemiologists with more complete food descriptions and more accurate food portion sizes. In this paper, we will describe the system design of DDRS and initial results of dietary assessment.

  11. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting.

    PubMed

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-10-02

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method.
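    A minimal sketch of the entropy measure under one assumed sensor model (independent Gaussians per access point, one of several implementations the paper compares); array names are illustrative:

        import numpy as np

        def posterior_and_entropy(observed_rssi, fp_means, fp_stds, prior=None):
            """Posterior over discrete candidate locations for one Wi-Fi scan and the
            conditional entropy of that posterior, in bits.
            fp_means, fp_stds: (n_locations, n_access_points) fingerprint statistics."""
            n_loc = fp_means.shape[0]
            prior = np.full(n_loc, 1.0 / n_loc) if prior is None else prior

            # Independent-Gaussian log-likelihood of the scan at every candidate location
            z = (observed_rssi - fp_means) / fp_stds
            log_like = -0.5 * np.sum(z ** 2 + np.log(2.0 * np.pi * fp_stds ** 2), axis=1)

            log_post = np.log(prior) + log_like
            log_post -= log_post.max()                     # numerical stability
            post = np.exp(log_post)
            post /= post.sum()

            nz = post[post > 0.0]
            return post, -np.sum(nz * np.log2(nz))

    A sharply peaked posterior gives low entropy (and, for a consistent sensor model, a small location error), while a flat posterior gives an entropy near log2(n_locations).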

  12. Conditional Entropy and Location Error in Indoor Localization Using Probabilistic Wi-Fi Fingerprinting

    PubMed Central

    Berkvens, Rafael; Peremans, Herbert; Weyn, Maarten

    2016-01-01

    Localization systems are increasingly valuable, but their location estimates are only useful when the uncertainty of the estimate is known. This uncertainty is currently calculated as the location error given a ground truth, which is then used as a static measure in sometimes very different environments. In contrast, we propose the use of the conditional entropy of a posterior probability distribution as a complementary measure of uncertainty. This measure has the advantage of being dynamic, i.e., it can be calculated during localization based on individual sensor measurements, does not require a ground truth, and can be applied to discrete localization algorithms. Furthermore, for every consistent location estimation algorithm, both the location error and the conditional entropy measures must be related, i.e., a low entropy should always correspond with a small location error, while a high entropy can correspond with either a small or large location error. We validate this relationship experimentally by calculating both measures of uncertainty in three publicly available datasets using probabilistic Wi-Fi fingerprinting with eight different implementations of the sensor model. We show that the discrepancy between these measures, i.e., many location estimates having a high location error while simultaneously having a low conditional entropy, is largest for the least realistic implementations of the probabilistic sensor model. Based on the results presented in this paper, we conclude that conditional entropy, being dynamic, complementary to location error, and applicable to both continuous and discrete localization, provides an important extra means of characterizing a localization method. PMID:27706099

  13. PROGRESS TOWARDS NEXT GENERATION, WAVEFORM BASED THREE-DIMENSIONAL MODELS AND METRICS TO IMPROVE NUCLEAR EXPLOSION MONITORING IN THE MIDDLE EAST

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Savage, B; Peter, D; Covellone, B

    2009-07-02

    Efforts to update current wave speed models of the Middle East require a thoroughly tested database of sources and recordings. Recordings of seismic waves traversing the region from Tibet to the Red Sea will be the principal metric in guiding improvements to the current wave speed model. Precise characterizations of the earthquakes, specifically depths and faulting mechanisms, are essential to avoid mapping source errors into the refined wave speed model. Errors associated with the source are manifested in amplitude and phase changes. Source depths and paths near nodal planes are particularly error prone, as small changes may severely affect the resulting wavefield. Once sources are quantified, regions requiring refinement will be highlighted using adjoint tomography methods based on spectral element simulations [Komatitsch and Tromp (1999)]. An initial database of 250 regional Middle Eastern events from 1990-2007 was inverted for depth and focal mechanism using teleseismic arrivals [Kikuchi and Kanamori (1982)] and regional surface and body waves [Zhao and Helmberger (1994)]. From this initial database, we reinterpreted a large, well recorded subset of 201 events through a direct comparison between data and synthetics based upon a centroid moment tensor inversion [Liu et al. (2004)]. Evaluation was done using both a 1D reference model [Dziewonski and Anderson (1981)] at periods greater than 80 seconds and a 3D model [Kustowski et al. (2008)] at periods of 25 seconds and longer. The final source reinterpretations will be within the 3D model, as this is the initial starting point for the adjoint tomography. Transitioning from a 1D to a 3D wave speed model shows dramatic improvements when comparisons are made at shorter periods (25 s). Synthetics from the 1D model were created through mode summations, while those from the 3D simulations were created using the spectral element method. To further assess errors in source depth and focal mechanism, comparisons between the three methods were made. These comparisons help to identify problematic stations and sources which may bias the final solution. Estimates of standard errors were generated for each event's source depth and focal mechanism to identify poorly constrained events. A final, well characterized set of sources and stations will then be used to iteratively improve the wave speed model of the Middle East. After a few iterations during the adjoint inversion process, the sources will be reexamined and relocated to further reduce mapping of source errors into structural features. Finally, efforts continue in developing the infrastructure required to 'quickly' generate event kernels at the n-th iteration and invert for a new, (n+1)-th, wave speed model of the Middle East. While development of the infrastructure proceeds, initial tests using a limited number of events show that the 3D model, while vastly improved compared to the 1D model, still requires substantial modifications. Employing our new, full source set and iterating the adjoint inversions at successively shorter periods will lead to significant changes and refined wave speed structures of the Middle East.

  14. The effects of center of rotation errors on cardiac SPECT imaging

    NASA Astrophysics Data System (ADS)

    Bai, Chuanyong; Shao, Ling; Ye, Jinghan; Durbin, M.

    2003-10-01

    In SPECT imaging, center of rotation (COR) errors lead to the misalignment of projection data and can potentially degrade the quality of the reconstructed images. In this work, we study the effects of COR errors on cardiac SPECT imaging using simulation, point source, cardiac phantom, and patient studies. For simulation studies, we generate projection data using a uniform MCAT phantom first without modeling any physical effects (NPH), then with the modeling of detector response effect (DR) alone. We then corrupt the projection data with simulated sinusoid and step COR errors. For other studies, we introduce sinusoid COR errors to projection data acquired on SPECT systems. An OSEM algorithm is used for image reconstruction without detector response correction, but with nonuniform attenuation correction when needed. The simulation studies show that, when COR errors increase from 0 to 0.96 cm: 1) sinusoid COR errors in axial direction lead to intensity decrease in the inferoapical region; 2) step COR errors in axial direction lead to intensity decrease in the distal anterior region. The intensity decrease is more severe in images reconstructed from projection data with NPH than with DR; and 3) the effects of COR errors in transaxial direction seem to be insignificant. In other studies, COR errors slightly degrade point source resolution; COR errors of 0.64 cm or above introduce visible but insignificant nonuniformity in the images of uniform cardiac phantom; COR errors up to 0.96 cm in transaxial direction affect the lesion-to-background contrast (LBC) insignificantly in the images of cardiac phantom with defects, and COR errors up to 0.64 cm in axial direction only slightly decrease the LBC. For the patient studies with COR errors up to 0.96 cm, images have the same diagnostic/prognostic values as those without COR errors. This work suggests that COR errors of up to 0.64 cm are not likely to change the clinical applications of cardiac SPECT imaging when using iterative reconstruction algorithm without detector response correction.
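    A hedged sketch of how a sinusoidal COR error could be introduced into projection data (NumPy/SciPy); the function name and the linear-interpolation shift are assumptions, not the authors' implementation:

        import numpy as np
        from scipy.ndimage import shift

        def add_sinusoid_cor_error(sinogram, angles_deg, amplitude_cm, bin_size_cm):
            """Shift each projection of a float (n_views, n_bins) sinogram by a
            sinusoidal center-of-rotation offset of the given amplitude."""
            corrupted = np.empty_like(sinogram)
            for i, ang in enumerate(np.radians(angles_deg)):
                offset_bins = (amplitude_cm / bin_size_cm) * np.sin(ang)
                corrupted[i] = shift(sinogram[i], offset_bins, order=1, mode="nearest")
            return corrupted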

  15. Swift follow-up of 1RXS J194211.9+255552

    NASA Astrophysics Data System (ADS)

    Sidoli, L.; Fiocchi, M.; Bird, A. J.; Drave, S. P.; Bazzano, A.; Persi, P.; Tarana, A.; Sguera, V.; Chenevez, J.; Kuulkers, E.

    2011-12-01

    Following the INTEGRAL/JEM-X detection of the unidentified source 1RXS J194211.9+255552 (ATel #3816) on December 18, we asked for a Swift/XRT follow-up observation. Swift observed the source field on December 21, 2011 at 06:10:09.7 (UTC), with a net exposure of 1756 s. Within the ROSAT error circle there is only one pointlike source, at the following position (J2000): RA(hh mm ss.s) = 19h42m11.13s, Dec(dd mm ss.s) = +25:56:07.32 (3.6 arcsec error radius).

  16. A comparative study of spherical and flat-Earth geopotential modeling at satellite elevations

    NASA Technical Reports Server (NTRS)

    Parrott, M. H.; Hinze, W. J.; Braile, L. W.

    1985-01-01

    Flat-Earth and spherical-Earth geopotential modeling of crustal anomaly sources at satellite elevations are compared by computing gravity and scalar magnetic anomalies perpendicular to the strike of variably dimensioned rectangular prisms at altitudes of 150, 300, and 450 km. Results indicate that the error caused by the flat-Earth approximation is less than 10% in most geometric conditions. Generally, errors increase with larger and wider anomaly sources at higher altitudes. For most crustal source modeling applications at conventional satellite altitudes, flat-Earth modeling can be justified and is numerically efficient.
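    A hedged illustration of the flat- versus spherical-Earth comparison using a buried point mass as a stand-in for the paper's rectangular prisms (all values are illustrative):

        import numpy as np

        G, R = 6.674e-11, 6371e3                  # gravitational constant (SI), mean Earth radius (m)
        m, depth, alt = 1.0e15, 10e3, 300e3       # source mass (kg), source depth (m), satellite altitude (m)
        x = np.linspace(-2000e3, 2000e3, 401)     # horizontal offset along the profile (m)

        # Flat-Earth vertical attraction at altitude
        dz = alt + depth
        g_flat = G * m * dz / (dz ** 2 + x ** 2) ** 1.5

        # Spherical-Earth radial attraction for the same arc distance
        alpha = x / R
        r_obs, r_src = R + alt, R - depth
        sep = np.sqrt(r_obs ** 2 + r_src ** 2 - 2.0 * r_obs * r_src * np.cos(alpha))
        g_sph = G * m * (r_obs - r_src * np.cos(alpha)) / sep ** 3

        # Flat-Earth approximation error relative to the peak spherical anomaly
        rel_error = np.abs(g_flat - g_sph) / g_sph.max()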

  17. Estimation of sampling error uncertainties in observed surface air temperature change in China

    NASA Astrophysics Data System (ADS)

    Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun

    2017-08-01

    This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with the maximum value exceeding 2.0 K², while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K². In general, negative temperature anomalies existed in each month prior to the 1980s, and warming began thereafter, accelerating in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)⁻¹ occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)⁻¹ in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series show a clear variation compared with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and other studies during this period.
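    For reference, a generic least-squares trend-and-uncertainty calculation of the kind the quoted K (10 year)⁻¹ figures refer to (this is the ordinary OLS slope error, not the paper's sampling-error estimate):

        import numpy as np

        def trend_per_decade(years, temps):
            """OLS linear trend of an annual-mean SAT series and the 1-sigma
            uncertainty of the slope, both in K per decade."""
            years = np.asarray(years, float)
            temps = np.asarray(temps, float)
            n = len(years)
            x = years - years.mean()
            slope = np.sum(x * (temps - temps.mean())) / np.sum(x ** 2)
            resid = temps - (temps.mean() + slope * x)
            se = np.sqrt(np.sum(resid ** 2) / (n - 2) / np.sum(x ** 2))
            return 10.0 * slope, 10.0 * se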

  18. Error estimation in multitemporal InSAR deformation time series, with application to Lanzarote, Canary Islands

    NASA Astrophysics Data System (ADS)

    GonzáLez, Pablo J.; FernáNdez, José

    2011-10-01

    Interferometric Synthetic Aperture Radar (InSAR) is a reliable technique for measuring crustal deformation. However, despite its long application to geophysical problems, its error estimation has been largely overlooked. Currently, the largest problem with InSAR is still atmospheric propagation error, which is why multitemporal interferometric techniques using series of interferograms have been successfully developed. However, none of the standard multitemporal interferometric techniques, namely PS or SB (Persistent Scatterers and Small Baselines, respectively), provide an estimate of their precision. Here, we present a method to compute reliable estimates of the precision of the deformation time series. We implement it for the SB multitemporal interferometric technique (a favorable technique for natural terrains, the most usual target of geophysical applications). The method uses a properly weighted scheme that allows us to compute estimates for all interferogram pixels, enhanced by a Monte Carlo resampling technique that properly propagates the interferogram errors (variance-covariances) into the unknown parameters (estimated errors for the displacements). We apply the multitemporal error estimation method to Lanzarote Island (Canary Islands), where no active magmatic activity has been reported in the last decades. We detect deformation around Timanfaya volcano (lengthening of the line of sight, i.e., subsidence), where the last eruption occurred in 1730-1736. The deformation closely follows the surface temperature anomalies, indicating that magma crystallization (cooling and contraction) of the 300-year-old shallow magmatic body under Timanfaya volcano is still ongoing.
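    A minimal sketch of the weighted inversion with Monte Carlo error propagation for one pixel, assuming uncorrelated per-interferogram variances (the paper propagates full variance-covariances); matrix and variable names are illustrative:

        import numpy as np

        def sb_displacements_with_errors(A, dphi, sigma, n_mc=500, seed=0):
            """A: (n_ifgs, n_increments) small-baseline design matrix;
            dphi: unwrapped interferogram phases for one pixel (converted to range change);
            sigma: 1-sigma error of each interferogram.
            Returns the estimated displacement increments and their Monte Carlo 1-sigma errors."""
            rng = np.random.default_rng(seed)
            W = np.diag(1.0 / sigma ** 2)                  # weights from interferogram variances
            N = A.T @ W @ A

            def solve(rhs):
                return np.linalg.solve(N, A.T @ W @ rhs)

            d_hat = solve(dphi)                            # best estimate
            samples = np.array([solve(dphi + rng.normal(0.0, sigma)) for _ in range(n_mc)])
            return d_hat, samples.std(axis=0)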

  19. Partitioning error components for accuracy-assessment of near-neighbor methods of imputation

    Treesearch

    Albert R. Stage; Nicholas L. Crookston

    2007-01-01

    Imputation is applied for two quite different purposes: to supply missing data to complete a data set for subsequent modeling analyses or to estimate subpopulation totals. Error properties of the imputed values have different effects in these two contexts. We partition errors of imputation derived from similar observation units as arising from three sources:...

  20. Correction to: Antibiotic resistance pattern and virulence genes content in avian pathogenic Escherichia coli (APEC) from broiler chickens in Chitwan, Nepal.

    PubMed

    Subedi, Manita; Bhattarai, Rebanta Kumar; Devkota, Bhuminand; Phuyal, Sarita; Luitel, Himal

    2018-05-22

    The original article [1] contains errors in the author listing and author contributions, errors in both the Methodology and the Results sections, and errors with respect to the funding sources. The affected sections of the manuscript and the corresponding corrected text are detailed in the correction.
