Hess, G.W.; Bohman, L.R.
1996-01-01
Techniques for estimating monthly mean streamflow at gaged sites and monthly streamflow duration characteristics at ungaged sites in central Nevada were developed using streamflow records at six gaged sites and basin physical and climatic characteristics. Streamflow data at gaged sites were related by regression techniques to concurrent flows at nearby gaging stations so that monthly mean streamflows for periods of missing or no record can be estimated for gaged sites in central Nevada. The standard error of estimate for relations at these sites ranged from 12 to 196 percent. Also, monthly streamflow data for selected percent exceedence levels were used in regression analyses with basin and climatic variables to determine relations for ungaged basins for annual and monthly percent exceedence levels. Analyses indicate that the drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the annual percent exceedence, the standard error of estimate of the relations for ungaged sites ranged from 51 to 96 percent and standard error of prediction for ungaged sites ranged from 96 to 249 percent. For the monthly percent exceedence values, the standard error of estimate of the relations ranged from 31 to 168 percent, and the standard error of prediction ranged from 115 to 3,124 percent. Reliability and limitations of the estimating methods are described.
Telemetry Standards, RCC Standard 106-17, Annex A.1, Pulse Amplitude Modulation Standards
2017-07-01
conform to either Figure Error! No text of specified style in document.-1 or Figure Error! No text of specified style in document.-2. Figure Error...No text of specified style in document.-1. 50 percent duty cycle PAM with amplitude synchronization A 20-25 percent deviation reserved for pulse...synchronization is recommended. Telemetry Standards, RCC Standard 106-17 Annex A.1, July 2017 A.1.2 Figure Error! No text of specified style
Cost-effectiveness of the stream-gaging program in Nebraska
Engel, G.B.; Wahl, K.L.; Boohar, J.A.
1984-01-01
This report documents the results of a study of the cost-effectiveness of the streamflow information program in Nebraska. Presently, 145 continuous surface-water stations are operated in Nebraska on a budget of $908,500. Data uses and funding sources are identified for each of the 145 stations. Data from most stations have multiple uses. All stations have sufficient justification for continuation, but two stations primarily are used in short-term research studies; their continued operation needs to be evaluated when the research studies end. The present measurement frequency produces an average standard error for instantaneous discharges of about 12 percent, including periods when stage data are missing. Altering the travel routes and the measurement frequency will allow a reduction in standard error of about 1 percent with the present budget. Standard error could be reduced to about 8 percent if lost record could be eliminated. A minimum budget of $822,000 is required to operate the present network, but operations at that funding level would result in an increase in standard error to about 16 percent. The maximum budget analyzed was $1,363,000, which would result in an average standard error of 6 percent. (USGS)
Methods for estimating flood frequency in Montana based on data through water year 1998
Parrett, Charles; Johnson, Dave R.
2004-01-01
Annual peak discharges having recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years (T-year floods) were determined for 660 gaged sites in Montana and in adjacent areas of Idaho, Wyoming, and Canada, based on data through water year 1998. The updated flood-frequency information was subsequently used in regression analyses, either ordinary or generalized least squares, to develop equations relating T-year floods to various basin and climatic characteristics, equations relating T-year floods to active-channel width, and equations relating T-year floods to bankfull width. The equations can be used to estimate flood frequency at ungaged sites. Montana was divided into eight regions, within which flood characteristics were considered to be reasonably homogeneous, and the three sets of regression equations were developed for each region. A measure of the overall reliability of the regression equations is the average standard error of prediction. The average standard errors of prediction for the equations based on basin and climatic characteristics ranged from 37.4 percent to 134.1 percent. Average standard errors of prediction for the equations based on active-channel width ranged from 57.2 percent to 141.3 percent. Average standard errors of prediction for the equations based on bankfull width ranged from 63.1 percent to 155.5 percent. In most regions, the equations based on basin and climatic characteristics generally had smaller average standard errors of prediction than equations based on active-channel or bankfull width. An exception was the Southeast Plains Region, where all equations based on active-channel width had smaller average standard errors of prediction than equations based on basin and climatic characteristics or bankfull width. Methods for weighting estimates derived from the basin- and climatic-characteristic equations and the channel-width equations also were developed. The weights were based on the cross correlation of residuals from the different methods and the average standard errors of prediction. When all three methods were combined, the average standard errors of prediction ranged from 37.4 percent to 120.2 percent. Weighting of estimates reduced the standard errors of prediction for all T-year flood estimates in four regions, reduced the standard errors of prediction for some T-year flood estimates in two regions, and provided no reduction in average standard error of prediction in two regions. A computer program for solving the regression equations, weighting estimates, and determining reliability of individual estimates was developed and placed on the USGS Montana District World Wide Web page. A new regression method, termed Region of Influence regression, also was tested. Test results indicated that the Region of Influence method was not as reliable as the regional equations based on generalized least squares regression. Two additional methods for estimating flood frequency at ungaged sites located on the same streams as gaged sites also are described. The first method, based on a drainage-area-ratio adjustment, is intended for use on streams where the ungaged site of interest is located near a gaged site. The second method, based on interpolation between gaged sites, is intended for use on streams that have two or more streamflow-gaging stations.
Wiley, Jeffrey B.
2012-01-01
Base flows were compared with published streamflow statistics to assess climate variability and to determine the published statistics that can be substituted for annual and seasonal base flows of unregulated streams in West Virginia. The comparison study was done by the U.S. Geological Survey, in cooperation with the West Virginia Department of Environmental Protection, Division of Water and Waste Management. The seasons were defined as winter (January 1-March 31), spring (April 1-June 30), summer (July 1-September 30), and fall (October 1-December 31). Differences in mean annual base flows for five record sub-periods (1930-42, 1943-62, 1963-69, 1970-79, and 1980-2002) range from -14.9 to 14.6 percent when compared to the values for the period 1930-2002. Differences between mean seasonal base flows and values for the period 1930-2002 are less variable for winter and spring, -11.2 to 11.0 percent, than for summer and fall, -47.0 to 43.6 percent. Mean summer base flows (July-September) and mean monthly base flows for July, August, September, and October are approximately equal, within 7.4 percentage points of mean annual base flow. The mean of each of annual, spring, summer, fall, and winter base flows are approximately equal to the annual 50-percent (standard error of 10.3 percent), 45-percent (error of 14.6 percent), 75-percent (error of 11.8 percent), 55-percent (error of 11.2 percent), and 35-percent duration flows (error of 11.1 percent), respectively. The mean seasonal base flows for spring, summer, fall, and winter are approximately equal to the spring 50- to 55-percent (standard error of 6.8 percent), summer 45- to 50-percent (error of 6.7 percent), fall 45-percent (error of 15.2 percent), and winter 60-percent duration flows (error of 8.5 percent), respectively. Annual and seasonal base flows representative of the period 1930-2002 at unregulated streamflow-gaging stations and ungaged locations in West Virginia can be estimated using previously published values of statistics and procedures.
A method for estimating mean and low flows of streams in national forests of Montana
Parrett, Charles; Hull, J.A.
1985-01-01
Equations were developed for estimating mean annual discharge, 80-percent exceedance discharge, and 95-percent exceedance discharge for streams on national forest lands in Montana. The equations for mean annual discharge used active-channel width, drainage area and mean annual precipitation as independent variables, with active-channel width being most significant. The equations for 80-percent exceedance discharge and 95-percent exceedance discharge used only active-channel width as an independent variable. The standard error or estimate for the best equation for estimating mean annual discharge was 27 percent. The standard errors of estimate for the equations were 67 percent for estimating 80-percent exceedance discharge and 75 percent for estimating 95-percent exceedance discharge. (USGS)
Laboratory errors and patient safety.
Miligy, Dawlat A
2015-01-01
Laboratory data are extensively used in medical practice; consequently, laboratory errors have a tremendous impact on patient safety. Therefore, programs designed to identify and reduce laboratory errors, as well as, setting specific strategies are required to minimize these errors and improve patient safety. The purpose of this paper is to identify part of the commonly encountered laboratory errors throughout our practice in laboratory work, their hazards on patient health care and some measures and recommendations to minimize or to eliminate these errors. Recording the encountered laboratory errors during May 2008 and their statistical evaluation (using simple percent distribution) have been done in the department of laboratory of one of the private hospitals in Egypt. Errors have been classified according to the laboratory phases and according to their implication on patient health. Data obtained out of 1,600 testing procedure revealed that the total number of encountered errors is 14 tests (0.87 percent of total testing procedures). Most of the encountered errors lay in the pre- and post-analytic phases of testing cycle (representing 35.7 and 50 percent, respectively, of total errors). While the number of test errors encountered in the analytic phase represented only 14.3 percent of total errors. About 85.7 percent of total errors were of non-significant implication on patients health being detected before test reports have been submitted to the patients. On the other hand, the number of test errors that have been already submitted to patients and reach the physician represented 14.3 percent of total errors. Only 7.1 percent of the errors could have an impact on patient diagnosis. The findings of this study were concomitant with those published from the USA and other countries. This proves that laboratory problems are universal and need general standardization and bench marking measures. Original being the first data published from Arabic countries that evaluated the encountered laboratory errors and launch the great need for universal standardization and bench marking measures to control the laboratory work.
Performance Evaluation of Five Turbidity Sensors in Three Primary Standards
Snazelle, Teri T.
2015-10-28
Open-File Report 2015-1172 is temporarily unavailable.Five commercially available turbidity sensors were evaluated by the U.S. Geological Survey, Hydrologic Instrumentation Facility (HIF) for accuracy and precision in three types of turbidity standards; formazin, StablCal, and AMCO Clear (AMCO–AEPA). The U.S. Environmental Protection Agency (EPA) recognizes all three turbidity standards as primary standards, meaning they are acceptable for reporting purposes. The Forrest Technology Systems (FTS) DTS-12, the Hach SOLITAX sc, the Xylem EXO turbidity sensor, the Yellow Springs Instrument (YSI) 6136 turbidity sensor, and the Hydrolab Series 5 self-cleaning turbidity sensor were evaluated to determine if turbidity measurements in the three primary standards are comparable to each other, and to ascertain if the primary standards are truly interchangeable. A formazin 4000 nephelometric turbidity unit (NTU) stock was purchased and dilutions of 40, 100, 400, 800, and 1000 NTU were made fresh the day of testing. StablCal and AMCO Clear (for Hach 2100N) standards with corresponding concentrations were also purchased for the evaluation. Sensor performance was not evaluated in turbidity levels less than 40 NTU due to the unavailability of polymer-bead turbidity standards rated for general use. The percent error was calculated as the true (not absolute) difference between the measured turbidity and the standard value, divided by the standard value.The sensors that demonstrated the best overall performance in the evaluation were the Hach SOLITAX and the Hydrolab Series 5 turbidity sensor when the operating range (0.001–4000 NTU for the SOLITAX and 0.1–3000 NTU for the Hydrolab) was considered in addition to sensor accuracy and precision. The average percent error in the three standards was 3.80 percent for the SOLITAX and -4.46 percent for the Hydrolab. The DTS-12 also demonstrated good accuracy with an average percent error of 2.02 percent and a maximum relative standard deviation of 0.51 percent for the operating range, which was limited to 0.01–1600 NTU at the time of this report. Test results indicated an average percent error of 19.81 percent in the three standards for the EXO turbidity sensor and 9.66 percent for the YSI 6136. The significant variability in sensor performance in the three primary standards suggests that although all three types are accepted as primary calibration standards, they are not interchangeable, and sensor results in the three types of standards are not directly comparable.
Improving estimates of streamflow characteristics by using Landsat-1 imagery
Hollyday, Este F.
1976-01-01
Imagery from the first Earth Resources Technology Satellite (renamed Landsat-1) was used to discriminate physical features of drainage basins in an effort to improve equations used to estimate streamflow characteristics at gaged and ungaged sites. Records of 20 gaged basins in the Delmarva Peninsula of Maryland, Delaware, and Virginia were analyzed for 40 statistical streamflow characteristics. Equations relating these characteristics to basin characteristics were obtained by a technique of multiple linear regression. A control group of equations contains basin characteristics derived from maps. An experimental group of equations contains basin characteristics derived from maps and imagery. Characteristics from imagery were forest, riparian (streambank) vegetation, water, and combined agricultural and urban land use. These basin characteristics were isolated photographically by techniques of film-density discrimination. The area of each characteristic in each basin was measured photometrically. Comparison of equations in the control group with corresponding equations in the experimental group reveals that for 12 out of 40 equations the standard error of estimate was reduced by more than 10 percent. As an example, the standard error of estimate of the equation for the 5-year recurrence-interval flood peak was reduced from 46 to 32 percent. Similarly, the standard error of the equation for the mean monthly flow for September was reduced from 32 to 24 percent, the standard error for the 7-day, 2-year recurrence low flow was reduced from 136 to 102 percent, and the standard error for the 3-day, 2-year flood volume was reduced from 30 to 12 percent. It is concluded that data from Landsat imagery can substantially improve the accuracy of estimates of some streamflow characteristics at sites in the Delmarva Peninsula.
Evaluation of lens distortion errors in video-based motion analysis
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Wilmington, Robert; Klute, Glenn K.; Micocci, Angelo
1993-01-01
In an effort to study lens distortion errors, a grid of points of known dimensions was constructed and videotaped using a standard and a wide-angle lens. Recorded images were played back on a VCR and stored on a personal computer. Using these stored images, two experiments were conducted. Errors were calculated as the difference in distance from the known coordinates of the points to the calculated coordinates. The purposes of this project were as follows: (1) to develop the methodology to evaluate errors introduced by lens distortion; (2) to quantify and compare errors introduced by use of both a 'standard' and a wide-angle lens; (3) to investigate techniques to minimize lens-induced errors; and (4) to determine the most effective use of calibration points when using a wide-angle lens with a significant amount of distortion. It was seen that when using a wide-angle lens, errors from lens distortion could be as high as 10 percent of the size of the entire field of view. Even with a standard lens, there was a small amount of lens distortion. It was also found that the choice of calibration points influenced the lens distortion error. By properly selecting the calibration points and avoidance of the outermost regions of a wide-angle lens, the error from lens distortion can be kept below approximately 0.5 percent with a standard lens and 1.5 percent with a wide-angle lens.
Flow interference in a variable porosity trisonic wind tunnel.
NASA Technical Reports Server (NTRS)
Davis, J. W.; Graham, R. F.
1972-01-01
Pressure data from a 20-degree cone-cylinder in a variable porosity wind tunnel for the Mach range 0.2 to 5.0 are compared to an interference free standard in order to determine wall interference effects. Four 20-degree cone-cylinder models representing an approximate range of percent blockage from one to six were compared to curve-fits of the interference free standard at each Mach number and errors determined at each pressure tap location. The average of the absolute values of the percent error over the length of the model was determined and used as the criterion for evaluating model blockage interference effects. The results are presented in the form of the percent error as a function of model blockage and Mach number.
Evaluation of Acoustic Doppler Current Profiler measurements of river discharge
Morlock, S.E.
1996-01-01
The standard deviations of the ADCP measurements ranged from approximately 1 to 6 percent and were generally higher than the measurement errors predicted by error-propagation analysis of ADCP instrument performance. These error-prediction methods assume that the largest component of ADCP discharge measurement error is instrument related. The larger standard deviations indicate that substantial portions of measurement error may be attributable to sources unrelated to ADCP electronics or signal processing and are functions of the field environment.
Parrett, Charles; Omang, R.J.; Hull, J.A.
1983-01-01
Equations for estimating mean annual runoff and peak discharge from measurements of channel geometry were developed for western and northeastern Montana. The study area was divided into two regions for the mean annual runoff analysis, and separate multiple-regression equations were developed for each region. The active-channel width was determined to be the most important independent variable in each region. The standard error of estimate for the estimating equation using active-channel width was 61 percent in the Northeast Region and 38 percent in the West region. The study area was divided into six regions for the peak discharge analysis, and multiple regression equations relating channel geometry and basin characteristics to peak discharges having recurrence intervals of 2, 5, 10, 25, 50 and 100 years were developed for each region. The standard errors of estimate for the regression equations using only channel width as an independent variable ranged from 35 to 105 percent. The standard errors improved in four regions as basin characteristics were added to the estimating equations. (USGS)
Hoos, Anne B.; Patel, Anant R.
1996-01-01
Model-adjustment procedures were applied to the combined data bases of storm-runoff quality for Chattanooga, Knoxville, and Nashville, Tennessee, to improve predictive accuracy for storm-runoff quality for urban watersheds in these three cities and throughout Middle and East Tennessee. Data for 45 storms at 15 different sites (five sites in each city) constitute the data base. Comparison of observed values of storm-runoff load and event-mean concentration to the predicted values from the regional regression models for 10 constituents shows prediction errors, as large as 806,000 percent. Model-adjustment procedures, which combine the regional model predictions with local data, are applied to improve predictive accuracy. Standard error of estimate after model adjustment ranges from 67 to 322 percent. Calibration results may be biased due to sampling error in the Tennessee data base. The relatively large values of standard error of estimate for some of the constituent models, although representing significant reduction (at least 50 percent) in prediction error compared to estimation with unadjusted regional models, may be unacceptable for some applications. The user may wish to collect additional local data for these constituents and repeat the analysis, or calibrate an independent local regression model.
ALT space shuttle barometric altimeter altitude analysis
NASA Technical Reports Server (NTRS)
Killen, R.
1978-01-01
The accuracy was analyzed of the barometric altimeters onboard the space shuttle orbiter. Altitude estimates from the air data systems including the operational instrumentation and the developmental flight instrumentation were obtained for each of the approach and landing test flights. By comparing the barometric altitude estimates to altitudes derived from radar tracking data filtered through a Kalman filter and fully corrected for atmospheric refraction, the errors in the barometric altitudes were shown to be 4 to 5 percent of the Kalman altitudes. By comparing the altitude determined from the true atmosphere derived from weather balloon data to the altitude determined from the U.S. Standard Atmosphere of 1962, it was determined that the assumption of the Standard Atmosphere equations contributes roughly 75 percent of the total error in the baro estimates. After correcting the barometric altitude estimates using an average summer model atmosphere computed for the average latitude of the space shuttle landing sites, the residual error in the altitude estimates was reduced to less than 373 feet. This corresponds to an error of less than 1.5 percent for altitudes above 4000 feet for all flights.
NASA Technical Reports Server (NTRS)
Lienert, Barry R.
1991-01-01
Monte Carlo perturbations of synthetic tensors to evaluate the Hext/Jelinek elliptical confidence regions for anisotropy of magnetic susceptibility (AMS) eigenvectors are used. When the perturbations are 33 percent of the minimum anisotropy, both the shapes and probability densities of the resulting eigenvector distributions agree with the elliptical distributions predicted by the Hext/Jelinek equations. When the perturbation size is increased to 100 percent of the minimum eigenvalue difference, the major axis of the 95 percent confidence ellipse underestimates the observed eigenvector dispersion by about 10 deg. The observed distributions of the principal susceptibilities (eigenvalues) are close to being normal, with standard errors that agree well with the calculated Hext/Jelinek errors. The Hext/Jelinek ellipses are also able to describe the AMS dispersions due to instrumental noise and provide reasonable limits for the AMS dispersions observed in two Hawaiian basaltic dikes. It is concluded that the Hext/Jelinek method provides a satisfactory description of the errors in AMS data and should be a standard part of any AMS data analysis.
Estimation of clear-sky insolation using satellite and ground meteorological data
NASA Technical Reports Server (NTRS)
Staylor, W. F.; Darnell, W. L.; Gupta, S. K.
1983-01-01
Ground based pyranometer measurements were combined with meteorological data from the Tiros N satellite in order to estimate clear-sky insolations at five U.S. sites for five weeks during the spring of 1979. The estimates were used to develop a semi-empirical model of clear-sky insolation for the interpretation of input data from the Tiros Operational Vertical Sounder (TOVS). Using only satellite data, the estimated standard errors in the model were about 2 percent. The introduction of ground based data reduced errors to around 1 percent. It is shown that although the errors in the model were reduced by only 1 percent, TOVS data products are still adequate for estimating clear-sky insolation.
Cost-effectiveness of the stream-gaging program in New Jersey
Schopp, R.D.; Ulery, R.L.
1984-01-01
The results of a study of the cost-effectiveness of the stream-gaging program in New Jersey are documented. This study is part of a 5-year nationwide analysis undertaken by the U.S. Geological Survey to define and document the most cost-effective means of furnishing streamflow information. This report identifies the principal uses of the data and relates those uses to funding sources, applies, at selected stations, alternative less costly methods (that is flow routing, regression analysis) for furnishing the data, and defines a strategy for operating the program which minimizes uncertainty in the streamflow data for specific operating budgets. Uncertainty in streamflow data is primarily a function of the percentage of missing record and the frequency of discharge measurements. In this report, 101 continuous stream gages and 73 crest-stage or stage-only gages are analyzed. A minimum budget of $548,000 is required to operate the present stream-gaging program in New Jersey with an average standard error of 27.6 percent. The maximum budget analyzed was $650,000, which resulted in an average standard error of 17.8 percent. The 1983 budget of $569,000 resulted in a standard error of 24.9 percent under present operating policy. (USGS)
Tree and impervious cover in the United States
David J. Nowak; Eric J. Greenfield
2012-01-01
Using aerial photograph interpretation of circa 2005 imagery, percent tree canopy and impervious surface cover in the conterminous United States are estimated at 34.2% (standard error (SE) = 0.2%) and 2.4% (SE = 0.1%), respectively. Within urban/community areas, percent tree cover (35.1%, SE = 0.4%) is similar to the national value, but percent impervious cover is...
Cost-effectiveness of the stream-gaging program in Missouri
Waite, L.A.
1987-01-01
This report documents the results of an evaluation of the cost effectiveness of the 1986 stream-gaging program in Missouri. Alternative methods of developing streamflow information and cost-effective resource allocation were used to evaluate the Missouri program. Alternative methods were considered statewide, but the cost effective resource allocation study was restricted to the area covered by the Rolla field headquarters. The average standard error of estimate for records of instantaneous discharge was 17 percent; assuming the 1986 budget and operating schedule, it was shown that this overall degree of accuracy could be improved to 16 percent by altering the 1986 schedule of station visitations. A minimum budget of $203,870, with a corresponding average standard error of estimate 17 percent, is required to operate the 1986 program for the Rolla field headquarters; a budget of less than this would not permit proper service and maintenance of the stations or adequate definition of stage-discharge relations. The maximum budget analyzed was $418,870, which resulted in an average standard error of estimate of 14 percent. Improved instrumentation can have a positive effect on streamflow uncertainties by decreasing lost records. An earlier study of data uses found that data uses were sufficient to justify continued operation of all stations. One of the stations investigated, Current River at Doniphan (07068000) was suitable for the application of alternative methods for simulating discharge records. However, the station was continued because of data use requirements. (Author 's abstract)
Cost effectiveness of the stream-gaging program in Ohio
Shindel, H.L.; Bartlett, W.P.
1986-01-01
This report documents the results of the cost effectiveness of the stream-gaging program in Ohio. Data uses and funding sources were identified for 107 continuous stream gages currently being operated by the U.S. Geological Survey in Ohio with a budget of $682,000; this budget includes field work for other projects and excludes stations jointly operated with the Miami Conservancy District. No stream gage were identified as having insufficient reason to continue their operation; nor were any station identified as having uses specifically only for short-term studies. All 107 station should be maintained in the program for the foreseeable future. The average standard error of estimation of stream flow records is 29.2 percent at its present level of funding. A minimum budget of $679,000 is required to operate the 107-gage program; a budget less than this does no permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 31.1 percent The maximum budget analyzed was $1,282,000, which resulted in an average standard error of 11.1 percent. A need for additional gages has been identified by the other agencies that cooperate in the program. It is suggested that these gage be installed as funds can be made available.
Cost-effectiveness of the stream-gaging program in North Carolina
Mason, R.R.; Jackson, N.M.
1985-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in North Carolina. Data uses and funding sources are identified for the 146 gaging stations currently operated in North Carolina with a budget of $777,600 (1984). As a result of the study, eleven stations are nominated for discontinuance and five for conversion from recording to partial-record status. Large parts of North Carolina 's Coastal Plain are identified as having sparse streamflow data. This sparsity should be remedied as funds become available. Efforts should also be directed toward defining the efforts of drainage improvements on local hydrology and streamflow characteristics. The average standard error of streamflow records in North Carolina is 18.6 percent. This level of accuracy could be improved without increasing cost by increasing the frequency of field visits and streamflow measurements at stations with high standard errors and reducing the frequency at stations with low standard errors. A minimum budget of $762,000 is required to operate the 146-gage program. A budget less than this does not permit proper service and maintenance of the gages and recorders. At the minimum budget, and with the optimum allocation of field visits, the average standard error is 17.6 percent.
Regionalization of harmonic-mean streamflows in Kentucky
Martin, Gary R.; Ruhl, Kevin J.
1993-01-01
Harmonic-mean streamflow (Qh), defined as the reciprocal of the arithmetic mean of the reciprocal daily streamflow values, was determined for selected stream sites in Kentucky. Daily mean discharges for the available period of record through the 1989 water year at 230 continuous record streamflow-gaging stations located in and adjacent to Kentucky were used in the analysis. Periods of record affected by regulation were identified and analyzed separately from periods of record unaffected by regulation. Record-extension procedures were applied to short-term stations to reducetime-sampling error and, thus, improve estimates of the long-term Qh. Techniques to estimate the Qh at ungaged stream sites in Kentucky were developed. A regression model relating Qh to total drainage area and streamflow-variability index was presented with example applications. The regression model has a standard error of estimate of 76 percent and a standard error of prediction of 78 percent.
A rocket ozonesonde for geophysical research and satellite intercomparison
NASA Technical Reports Server (NTRS)
Hilsenrath, E.; Coley, R. L.; Kirschner, P. T.; Gammill, B.
1979-01-01
The in-situ rocketsonde for ozone profile measurements developed and flown for geophysical research and satellite comparison is reviewed. The measurement principle involves the chemiluminescence caused by ambient ozone striking a detector and passive pumping as a means of sampling the atmosphere as the sonde descends through the atmosphere on a parachute. The sonde is flown on a meteorological sounding rocket, and flight data are telemetered via the standard meteorological GMD ground receiving system. The payload operation, sensor performance, and calibration procedures simulating flight conditions are described. An error analysis indicated an absolute accuracy of about 12 percent and a precision of about 8 percent. These are combined to give a measurement error of 14 percent.
Smith, S. Jerrod; Lewis, Jason M.; Graves, Grant M.
2015-09-28
Generalized-least-squares multiple-linear regression analysis was used to formulate regression relations between peak-streamflow frequency statistics and basin characteristics. Contributing drainage area was the only basin characteristic determined to be statistically significant for all percentage of annual exceedance probabilities and was the only basin characteristic used in regional regression equations for estimating peak-streamflow frequency statistics on unregulated streams in and near the Oklahoma Panhandle. The regression model pseudo-coefficient of determination, converted to percent, for the Oklahoma Panhandle regional regression equations ranged from about 38 to 63 percent. The standard errors of prediction and the standard model errors for the Oklahoma Panhandle regional regression equations ranged from about 84 to 148 percent and from about 76 to 138 percent, respectively. These errors were comparable to those reported for regional peak-streamflow frequency regression equations for the High Plains areas of Texas and Colorado. The root mean square errors for the Oklahoma Panhandle regional regression equations (ranging from 3,170 to 92,000 cubic feet per second) were less than the root mean square errors for the Oklahoma statewide regression equations (ranging from 18,900 to 412,000 cubic feet per second); therefore, the Oklahoma Panhandle regional regression equations produce more accurate peak-streamflow statistic estimates for the irrigated period of record in the Oklahoma Panhandle than do the Oklahoma statewide regression equations. The regression equations developed in this report are applicable to streams that are not substantially affected by regulation, impoundment, or surface-water withdrawals. These regression equations are intended for use for stream sites with contributing drainage areas less than or equal to about 2,060 square miles, the maximum value for the independent variable used in the regression analysis.
Flood-frequency prediction methods for unregulated streams of Tennessee, 2000
Law, George S.; Tasker, Gary D.
2003-01-01
Up-to-date flood-frequency prediction methods for unregulated, ungaged rivers and streams of Tennessee have been developed. Prediction methods include the regional-regression method and the newer region-of-influence method. The prediction methods were developed using stream-gage records from unregulated streams draining basins having from 1 percent to about 30 percent total impervious area. These methods, however, should not be used in heavily developed or storm-sewered basins with impervious areas greater than 10 percent. The methods can be used to estimate 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence-interval floods of most unregulated rural streams in Tennessee. A computer application was developed that automates the calculation of flood frequency for unregulated, ungaged rivers and streams of Tennessee. Regional-regression equations were derived by using both single-variable and multivariable regional-regression analysis. Contributing drainage area is the explanatory variable used in the single-variable equations. Contributing drainage area, main-channel slope, and a climate factor are the explanatory variables used in the multivariable equations. Deleted-residual standard error for the single-variable equations ranged from 32 to 65 percent. Deleted-residual standard error for the multivariable equations ranged from 31 to 63 percent. These equations are included in the computer application to allow easy comparison of results produced by the different methods. The region-of-influence method calculates multivariable regression equations for each ungaged site and recurrence interval using basin characteristics from 60 similar sites selected from the study area. Explanatory variables that may be used in regression equations computed by the region-of-influence method include contributing drainage area, main-channel slope, a climate factor, and a physiographic-region factor. Deleted-residual standard error for the region-of-influence method tended to be only slightly smaller than those for the regional-regression method and ranged from 27 to 62 percent.
Ahearn, Elizabeth A.
2010-01-01
Multiple linear regression equations for determining flow-duration statistics were developed to estimate select flow exceedances ranging from 25- to 99-percent for six 'bioperiods'-Salmonid Spawning (November), Overwinter (December-February), Habitat Forming (March-April), Clupeid Spawning (May), Resident Spawning (June), and Rearing and Growth (July-October)-in Connecticut. Regression equations also were developed to estimate the 25- and 99-percent flow exceedances without reference to a bioperiod. In total, 32 equations were developed. The predictive equations were based on regression analyses relating flow statistics from streamgages to GIS-determined basin and climatic characteristics for the drainage areas of those streamgages. Thirty-nine streamgages (and an additional 6 short-term streamgages and 28 partial-record sites for the non-bioperiod 99-percent exceedance) in Connecticut and adjacent areas of neighboring States were used in the regression analysis. Weighted least squares regression analysis was used to determine the predictive equations; weights were assigned based on record length. The basin characteristics-drainage area, percentage of area with coarse-grained stratified deposits, percentage of area with wetlands, mean monthly precipitation (November), mean seasonal precipitation (December, January, and February), and mean basin elevation-are used as explanatory variables in the equations. Standard errors of estimate of the 32 equations ranged from 10.7 to 156 percent with medians of 19.2 and 55.4 percent to predict the 25- and 99-percent exceedances, respectively. Regression equations to estimate high and median flows (25- to 75-percent exceedances) are better predictors (smaller variability of the residual values around the regression line) than the equations to estimate low flows (less than 75-percent exceedance). The Habitat Forming (March-April) bioperiod had the smallest standard errors of estimate, ranging from 10.7 to 20.9 percent. In contrast, the Rearing and Growth (July-October) bioperiod had the largest standard errors, ranging from 30.9 to 156 percent. The adjusted coefficient of determination of the equations ranged from 77.5 to 99.4 percent with medians of 98.5 and 90.6 percent to predict the 25- and 99-percent exceedances, respectively. Descriptive information on the streamgages used in the regression, measured basin and climatic characteristics, and estimated flow-duration statistics are provided in this report. Flow-duration statistics and the 32 regression equations for estimating flow-duration statistics in Connecticut are stored on the U.S. Geological Survey World Wide Web application ?StreamStats? (http://water.usgs.gov/osw/streamstats/index.html). The regression equations developed in this report can be used to produce unbiased estimates of select flow exceedances statewide.
Streamflow simulation studies of the Hillsborough, Alafia, and Anclote Rivers, west-central Florida
Turner, J.F.
1979-01-01
A modified version of the Georgia Tech Watershed Model was applied for the purpose of flow simulation in three large river basins of west-central Florida. Calibrations were evaluated by comparing the following synthesized and observed data: annual hydrographs for the 1959, 1960, 1973 and 1974 water years, flood hydrographs (maximum daily discharge and flood volume), and long-term annual flood-peak discharges (1950-72). Annual hydrographs, excluding the 1973 water year, were compared using average absolute error in annual runoff and daily flows and correlation coefficients of monthly and daily flows. Correlations coefficients for simulated and observed maximum daily discharges and flood volumes used for calibrating range from 0.91 to 0.98 and average standard errors of estimate range from 18 to 45 percent. Correlation coefficients for simulated and observed annual flood-peak discharges range from 0.60 to 0.74 and average standard errors of estimate range from 33 to 44 percent. (Woodard-USGS)
Method for estimating low-flow characteristics of ungaged streams in Indiana
Arihood, Leslie D.; Glatfelter, Dale R.
1991-01-01
Equations for estimating the 7-day, 2-year and 7oday, 10-year low flows at sites on ungaged streams are presented. Regression analysis was used to develop equations relating basin characteristics and low-flow characteristics at 82 gaging stations. Significant basin characteristics in the equations are contributing drainage area and flow-duration ratio, which is the 20-percent flow duration divided by the 90-percent flow duration. Flow-duration ratio has been regionalized for Indiana on a plate. Ratios for use in the equations are obtained from the plate. Drainage areas are determined from maps or are obtained from reports. The predictive capability of the method was determined by tests of the equations and of the flow-duration ratios on the plate. The accuracy of the equations alone was tested by estimating the low-flow characteristics at 82 gaging stations where flow-duration ratio is already known. In this case, the standard errors of estimate for 7-day, 2-year and 7-day, 10-year low flows are 19 and 28 percent. When flow-duration ratios for the 82 gaging stations are obtained from the map, the standard errors are 46 and 61 percent. However, when stations having drainage areas of less than 10 square miles are excluded from the test, the standard errors decrease to 38 and 49 percent. Standard errors increase when stations with small basins are included, probably because some of the flow-duration ratios obtained for these small basins are incorrect. Local geology and its effect on the ratio are not adequately reflected on the plate, which shows the regional variation in flow-duration ratio. In all the tests, no bias is apparent areally, with increasing drainage area or with increasing ratio. Guidelines and limitations should be considered when using the method. The method can be applied only at sites in the northern and central physiographic zones of the State. Low-flow characteristics cannot be estimated for regulated streams unless the amount of regulation is known so that the estimated low-flow characteristic can be adjusted. The method is most accurate for sites having drainage areas ranging from 10 to 1,000 square miles and for predictions of 7-day, 10-year low flows ranging from 0.5 to 340 cubic feet per second.
The effects of multiple aerospace environmental stressors on human performance
NASA Technical Reports Server (NTRS)
Popper, S. E.; Repperger, D. W.; Mccloskey, K.; Tripp, L. D.
1992-01-01
An extended Fitt's law paradigm reaction time (RT) task was used to evaluate the effects of acceleration on human performance in the Dynamic Environment Simulator (DES) at Armstrong Laboratory, Wright-Patterson AFB, Ohio. This effort was combined with an evaluation of the standard CSU-13 P anti-gravity suit versus three configurations of a 'retrograde inflation anti-G suit'. Results indicated that RT and error rates increased 17 percent and 14 percent respectively from baseline to the end of the simulated aerial combat maneuver and that the most common error was pressing too few buttons.
Cost effectiveness of stream-gaging program in Michigan
Holtschlag, D.J.
1985-01-01
This report documents the results of a study of the cost effectiveness of the stream-gaging program in Michigan. Data uses and funding sources were identified for the 129 continuous gaging stations being operated in Michigan as of 1984. One gaging station was identified as having insufficient reason to continue its operation. Several stations were identified for reactivation, should funds become available, because of insufficiencies in the data network. Alternative methods of developing streamflow information based on routing and regression analyses were investigated for 10 stations. However, no station records were reproduced with sufficient accuracy to replace conventional gaging practices. A cost-effectiveness analysis of the data-collection procedure for the ice-free season was conducted using a Kalman-filter analysis. To define missing-record characteristics, cross-correlation coefficients and coefficients of variation were computed at stations on the basis of daily mean discharge. Discharge-measurement data were used to describe the gage/discharge rating stability at each station. The results of the cost-effectiveness analysis for a 9-month ice-free season show that the current policy of visiting most stations on a fixed servicing schedule once every 6 weeks results in an average standard error of 12.1 percent for the current $718,100 budget. By adopting a flexible servicing schedule, the average standard error could be reduced to 11.1 percent. Alternatively, the budget could be reduced to $700,200 while maintaining the current level of accuracy. A minimum budget of $680,200 is needed to operate the 129-gaging-station program; a budget less than this would not permit proper service and maintenance of stations. At the minimum budget, the average standard error would be 14.4 percent. A budget of $789,900 (the maximum analyzed) would result in a decrease in the average standard error to 9.07 percent. Owing to continual changes in the composition of the network and the changes in the uncertainties of streamflow accuracy at individual stations, the cost-effectiveness analysis will need to be updated regularly if it is to be used as a management tool. Cost of these updates need to be considered in decisions concerning the feasibility of flexible servicing schedules.
TOMS total ozone data compared with northern latitude Dobson ground stations
NASA Technical Reports Server (NTRS)
Heese, B.; Barthel, K.; Hov, O.
1994-01-01
Ozone measurements from the Total Ozone Mapping Spectrometer on the Nimbus 7 satellite are compared with ground-based measurements from five Dobson stations at northern latitudes to evaluate the accuracy of the TOMS data, particularly in regions north of 50 deg N. The measurements from the individual stations show mean differences from -2.5 percent up to plus 8.3 percent relative to TOMS measurements and two of the ground stations, Oslo and Longyearbyen, show a significant drift of plus 1.2 percent and plus 3.7 percent per year, respectively. It can be shown from nearly simultaneous measurements in two different wavelength double pairs at Oslo that at least 2 percent of the differences result from the use of the CC' wavelength double pair instead of the standard AD wavelength double pair. Since all Norwegian stations used the CC' wavelength double pair exclusively a similar error can be assumed for Tromso and Longyearbyren. A comparison between the tropospheric ozone content in TOMS data and from ECC ozonesonde measurements at Ny-Alesund and Bear Island shows that the amount of tropospheric ozone in the standard profiles used in the TOMS algorithm is too low, which leads to an error of about 2 percent in total ozone. Particularly at high solar zenith angles (greater than 80 deg), Dobson measurements become unreliable. They are up to 20 percent lower than TOMS measurements averaged over solar zenith angles of 88 deg to 89 deg.
June and August median streamflows estimated for ungaged streams in southern Maine
Lombard, Pamela J.
2010-01-01
Methods for estimating June and August median streamflows were developed for ungaged, unregulated streams in southern Maine. The methods apply to streams with drainage areas ranging in size from 0.4 to 74 square miles, with percentage of basin underlain by a sand and gravel aquifer ranging from 0 to 84 percent, and with distance from the centroid of the basin to a Gulf of Maine line paralleling the coast ranging from 14 to 94 miles. Equations were developed with data from 4 long-term continuous-record streamgage stations and 27 partial-record streamgage stations. Estimates of median streamflows at the continuous-record and partial-record stations are presented. A mathematical technique for estimating standard low-flow statistics, such as June and August median streamflows, at partial-record streamgage stations was applied by relating base-flow measurements at these stations to concurrent daily streamflows at nearby long-term (at least 10 years of record) continuous-record streamgage stations (index stations). Weighted least-squares regression analysis (WLS) was used to relate estimates of June and August median streamflows at streamgage stations to basin characteristics at these same stations to develop equations that can be used to estimate June and August median streamflows on ungaged streams. WLS accounts for different periods of record at the gaging stations. Three basin characteristics-drainage area, percentage of basin underlain by a sand and gravel aquifer, and distance from the centroid of the basin to a Gulf of Maine line paralleling the coast-are used in the final regression equation to estimate June and August median streamflows for ungaged streams. The three-variable equation to estimate June median streamflow has an average standard error of prediction from -35 to 54 percent. The three-variable equation to estimate August median streamflow has an average standard error of prediction from -45 to 83 percent. Simpler one-variable equations that use only drainage area to estimate June and August median streamflows were developed for use when less accuracy is acceptable. These equations have average standard errors of prediction from -46 to 87 percent and from -57 to 133 percent, respectively.
Assessing dental caries prevalence in African-American youth and adults.
Seibert, Wilda; Farmer-Dixon, Cherae; Bolden, Theodore E; Stewart, James H
2004-01-01
It has been well documented that dental caries affect millions of children in the USA with the majority experiencing decay by the late teens. This is especially true for low-income minorities. The objective of this descriptive study was to determine dental caries prevalence in a sample of low-income African-American youth and adults. A total of 1034 individuals were examined. They were divided into two age groups: youth, 9-19 years and adults, 20-39 years. Females comprised approximately 65 percent (64.5) of the study group. The DMFT Index was used to determine caries prevalence in this study population. The DMFT findings showed that approximately 73 percent (72.9 percent) of the youth had either decayed, missing or filled teeth. Male youth had slightly higher DMFT mean scores than female youth: male mean = 7.93, standard error = 0.77, female mean = 7.52, standard error = 0.36; however, as females reached adulthood their DMFT scores increased substantially, mean = 15.18, standard error = 0.36. Caries prevalence was much lower in male adults, DMFT, mean = 7.22, standard error of 0.33. The decayed component for female adults mean score was 6.81, a slight increase over adult males, mean = 6.58. Although there were few filled teeth in both age groups, female adults had slightly more filled teeth than male adults, females mean = 2.91 vs. males; however, adult males experienced slightly more missing teeth, mean = 5.62 as compared to adult females, mean = 5.46. n = 2.20. Both female and male adults had an increase in missing teeth. As age increased there was a significant correlation among decayed, missing and filled teeth as tested by Analysis of Variance (ANOVA), p < 0.01. A significant correlation was found between filled teeth by sex, p < .005. We conclude that caries prevalence was higher in female and male youth, but dental caries increased more rapidly in females as they reached adulthood.
Towards a plot size for Canada's national forest inventory
Steen Magnussen; P. Boudewyn; M. Gillis
2000-01-01
A proposed national forest inventory for Canada is to report on the state and trends of resource attributes gathered mainly from aerial photos of sample plots located on a national grid. A pilot project in New Brunswick indicates it takes about 2,800 square 400-ha plots (10 percent inventoried) to achieve a relative standard error of 10 percent or less on 14 out of 17...
Trommer, J.T.; Loper, J.E.; Hammett, K.M.
1996-01-01
Several traditional techniques have been used for estimating stormwater runoff from ungaged watersheds. Applying these techniques to water- sheds in west-central Florida requires that some of the empirical relationships be extrapolated beyond tested ranges. As a result, there is uncertainty as to the accuracy of these estimates. Sixty-six storms occurring in 15 west-central Florida watersheds were initially modeled using the Rational Method, the U.S. Geological Survey Regional Regression Equations, the Natural Resources Conservation Service TR-20 model, the U.S. Army Corps of Engineers Hydrologic Engineering Center-1 model, and the Environmental Protection Agency Storm Water Management Model. The techniques were applied according to the guidelines specified in the user manuals or standard engineering textbooks as though no field data were available and the selection of input parameters was not influenced by observed data. Computed estimates were compared with observed runoff to evaluate the accuracy of the techniques. One watershed was eliminated from further evaluation when it was determined that the area contributing runoff to the stream varies with the amount and intensity of rainfall. Therefore, further evaluation and modification of the input parameters were made for only 62 storms in 14 watersheds. Runoff ranged from 1.4 to 99.3 percent percent of rainfall. The average runoff for all watersheds included in this study was about 36 percent of rainfall. The average runoff for the urban, natural, and mixed land-use watersheds was about 41, 27, and 29 percent, respectively. Initial estimates of peak discharge using the rational method produced average watershed errors that ranged from an underestimation of 50.4 percent to an overestimation of 767 percent. The coefficient of runoff ranged from 0.20 to 0.60. Calibration of the technique produced average errors that ranged from an underestimation of 3.3 percent to an overestimation of 1.5 percent. The average calibrated coefficient of runoff for each watershed ranged from 0.02 to 0.72. The average values of the coefficient of runoff necessary to calibrate the urban, natural, and mixed land-use watersheds were 0.39, 0.16, and 0.08, respectively. The U.S. Geological Survey regional regression equations for determining peak discharge produced errors that ranged from an underestimation of 87.3 percent to an over- estimation of 1,140 percent. The regression equations for determining runoff volume produced errors that ranged from an underestimation of 95.6 percent to an overestimation of 324 percent. Regression equations developed from data used for this study produced errors that ranged between an underestimation of 82.8 percent and an over- estimation of 328 percent for peak discharge, and from an underestimation of 71.2 percent to an overestimation of 241 percent for runoff volume. Use of the equations developed for west-central Florida streams produced average errors for each type of watershed that were lower than errors associated with use of the U.S. Geological Survey equations. Initial estimates of peak discharges and runoff volumes using the Natural Resources Conservation Service TR-20 model, produced average errors of 44.6 and 42.7 percent respectively, for all the watersheds. Curve numbers and times of concentration were adjusted to match estimated and observed peak discharges and runoff volumes. The average change in the curve number for all the watersheds was a decrease of 2.8 percent. 
The average change in the time of concentration was an increase of 59.2 percent. The shape of the input dimensionless unit hydrograph also had to be adjusted to match the shape and peak time of the estimated and observed flood hydrographs. Peak rate factors for the modified input dimensionless unit hydrographs ranged from 162 to 454. The mean errors for peak discharges and runoff volumes were reduced to 18.9 and 19.5 percent, respectively, using the average calibrated input parameters for each watershed.
Code of Federal Regulations, 2010 CFR
2010-01-01
... defined in section 1 of this appendix is as follows: (a) The standard deviation of lateral track errors shall be less than 6.3 NM (11.7 Km). Standard deviation is a statistical measure of data about a mean... standard deviation about the mean encompasses approximately 68 percent of the data and plus or minus 2...
Peak-flow characteristics of Wyoming streams
Miller, Kirk A.
2003-01-01
Peak-flow characteristics for unregulated streams in Wyoming are described in this report. Frequency relations for annual peak flows through water year 2000 at 364 streamflow-gaging stations in and near Wyoming were evaluated and revised or updated as needed. Analyses of historical floods, temporal trends, and generalized skew were included in the evaluation. Physical and climatic basin characteristics were determined for each gaging station using a geographic information system. Gaging stations with similar peak-flow and basin characteristics were grouped into six hydrologic regions. Regional statistical relations between peak-flow and basin characteristics were explored using multiple-regression techniques. Generalized least squares regression equations for estimating magnitudes of annual peak flows with selected recurrence intervals from 1.5 to 500 years were developed for each region. Average standard errors of estimate range from 34 to 131 percent. Average standard errors of prediction range from 35 to 135 percent. Several statistics for evaluating and comparing the errors in these estimates are described. Limitations of the equations are described. Methods for applying the regional equations for various circumstances are listed and examples are given.
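Regional equations of this kind are typically power functions of basin and climatic characteristics, fit in log space. A minimal sketch of how such an equation is evaluated, using made-up coefficients and explanatory variables rather than the published Wyoming values, is:

```python
def regional_peak_flow(drainage_area_mi2, mean_ann_precip_in, a=28.0, b=0.78, c=0.55):
    """Evaluate a generic regional regression of the form Q_T = a * A^b * P^c,
    where A is drainage area (mi^2) and P is mean annual precipitation (in).
    The coefficients are placeholders, not values published for any Wyoming region."""
    return a * drainage_area_mi2**b * mean_ann_precip_in**c

# Hypothetical basin: 120 mi^2 with 22 in of mean annual precipitation.
print(f"Estimated peak flow: {regional_peak_flow(120, 22):.0f} ft^3/s")
```

In practice the coefficients, exponents, and choice of explanatory variables differ by region and by recurrence interval, and each published equation carries its own standard error of prediction.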
Cost effectiveness of the U.S. Geological Survey's stream-gaging program in Illinois
Mades, D.M.; Oberg, K.A.
1984-01-01
Data uses and funding sources were identified for 138 continuous-record discharge-gaging stations currently (1983) operated as part of the stream-gaging program in Illinois. Streamflow data from five of those stations are used only for regional hydrology studies. Most streamflow data are used for defining regional hydrology, defining rainfall-runoff relations, flood forecasting, regulating navigation systems, and water-quality sampling. Based on the evaluations of data use and of alternative methods for determining streamflow in place of stream gaging, no stations in the 1983 stream-gaging program should be deactivated. The current budget (in 1983 dollars) for operating the 138-station program is $768,000 per year. The average standard error of instantaneous discharge for the current practice for visiting the gaging stations is 36.5 percent. Missing stage record accounts for one-third of the 36.5 percent average standard error. (USGS)
Vogel, P; Rüschoff, J; Kümmel, S; Zirngibl, H; Hofstädter, F; Hohenberger, W; Jauch, K W
2000-01-01
We evaluated the incidence and prognostic relevance of microscopic intraperitoneal tumor cell dissemination of colon cancer in comparison with dissemination of gastric cancer as a rationale for additive intraperitoneal therapy. Peritoneal washouts of 90 patients with colon and 111 patients with gastric cancer were investigated prospectively. Sixty patients with benign diseases and 8 patients with histologically proven gross visible peritoneal carcinomatosis served as controls. Intraoperatively, 100 ml of warm NaCl 0.9 percent were instilled and 20 ml were reaspirated. In all patients hematoxylin and eosin staining (conventional cytology) was performed. Additionally, in 36 patients with colon cancer and 47 patients with gastric cancer, immunostaining with the HEA-125 antibody (immunocytology) was prepared. The results of cytology were assessed for an association with TNM category and cancer grade, based on all patients, and with patient survival, among the R0 resected patients. In conventional cytology 35.5 percent (32/90) of patients with colon cancer and 42.3 percent (47/111) of patients with gastric cancer had a positive cytology. In immunocytology 47.2 percent (17/36) of patients with colon cancer and 46.8 percent (22/47) of patients with gastric cancer were positive. In colon cancer, positive conventional cytology was associated with pT and M category (P = 0.044 and P = 0.0002), whereas immunocytology was only associated with M category (P = 0.007). No association was found between nodal status and immunocytology in colon cancer and with the grading. There was a statistically significant correlation between pT and M category and conventional and immunocytology in gastric cancer (P < 0.0015/P = 0.007 and P < 0.001/P = 0.009, respectively). Positive immunocytology was additionally associated with pN category (P = 0.05). In a univariate analysis of R0 resected patients (no residual tumor), positive immunocytology was significantly related to an unfavorable prognosis in patients with gastric cancer only (n = 30). Mean survival time was significantly increased in patients with gastric cancer with negative cytology compared with positive cytology (1,205 (standard error of the mean, 91) vs. 771 (standard error of the mean, 147) days; P = 0.007) but not in patients with colon cancer (1,215 (standard error of the mean, 95) vs. 1,346 (standard error of the mean, 106) days; P = 0.55). Because microscopic peritoneal dissemination influences survival time after R0 resections only in patients with gastric but not with colon cancer, our results may provide a basis for a decision on additive, prophylactic (intraperitoneal) therapy in gastric but not colon cancer.
Nau, Amy Catherine; Pintar, Christine; Fisher, Christopher; Jeong, Jong-Hyeon; Jeong, KwonHo
2014-01-01
We describe an indoor, portable, standardized course that can be used to evaluate obstacle avoidance in persons who have ultralow vision. Six sighted controls and 36 completely blind but otherwise healthy adult male (n=29) and female (n=13) subjects (age range 19-85 years) were enrolled in one of three studies involving testing of the BrainPort sensory substitution device. Subjects were asked to navigate the course prior to, and after, BrainPort training. They completed a total of 837 course runs in two different locations. Means and standard deviations were calculated across control types, courses, lights, and visits. We used a linear mixed effects model to compare different categories in the PPWS (percent preferred walking speed) and error percent data to show that the course iterations were properly designed. The course is relatively inexpensive, simple to administer, and has been shown to be a feasible way to test mobility function. Data analysis demonstrates that, for the outcomes of percent error and percentage preferred walking speed, each of the three courses is different, and within each level, each of the three iterations is equal. This allows for randomization of the courses during administration. Abbreviations: preferred walking speed (PWS); course speed (CS); percentage preferred walking speed (PPWS). PMID:24561717
Computer simulation of storm runoff for three watersheds in Albuquerque, New Mexico
Knutilla, R.L.; Veenhuis, J.E.
1994-01-01
Rainfall-runoff data from three watersheds were selected for calibration and verification of the U.S. Geological Survey's Distributed Routing Rainfall-Runoff Model. The watersheds chosen are residentially developed. The conceptually based model uses an optimization process that adjusts selected parameters to achieve the best fit between measured and simulated runoff volumes and peak discharges. Three of these optimization parameters represent soil-moisture conditions, three represent infiltration, and one accounts for effective impervious area. Each watershed modeled was divided into overland-flow segments and channel segments. The overland-flow segments were further subdivided to reflect pervious and impervious areas. Each overland-flow and channel segment was assigned representative values of area, slope, percentage of imperviousness, and roughness coefficients. Rainfall-runoff data for each watershed were separated into two sets for use in calibration and verification. For model calibration, seven input parameters were optimized to attain a best fit of the data. For model verification, parameter values were set using values from model calibration. The standard error of estimate for calibration of runoff volumes ranged from 19 to 34 percent, and for peak discharge calibration ranged from 27 to 44 percent. The standard error of estimate for verification of runoff volumes ranged from 26 to 31 percent, and for peak discharge verification ranged from 31 to 43 percent.
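The calibration and verification errors quoted above are standard errors of estimate between simulated and observed quantities. One common way to compute such a figure, sketched here with hypothetical values and assuming the usual log-space convention (the abstract does not state the exact formula used), is:

```python
import numpy as np

def standard_error_percent(observed, simulated):
    """Standard error of estimate in percent, computed from residuals in
    natural-log space (one common USGS convention; assumed here)."""
    observed = np.asarray(observed, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    resid = np.log(simulated) - np.log(observed)          # log residuals
    se_log = np.sqrt(np.sum(resid**2) / (len(resid) - 1))
    return 100.0 * np.sqrt(np.exp(se_log**2) - 1.0)       # convert to percent

# Hypothetical observed and simulated peak discharges (ft^3/s).
obs = [120.0, 85.0, 240.0, 60.0, 150.0]
sim = [100.0, 95.0, 290.0, 55.0, 170.0]
print(f"Standard error of estimate: {standard_error_percent(obs, sim):.1f} percent")
```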
Cost-effectiveness of the stream-gaging program in Maine; a prototype for nationwide implementation
Fontaine, Richard A.; Moss, M.E.; Smath, J.A.; Thomas, W.O.
1984-01-01
This report documents the results of a cost-effectiveness study of the stream-gaging program in Maine. Data uses and funding sources were identified for the 51 continuous stream gages currently being operated in Maine with a budget of $211,000. Three stream gages were identified as producing data no longer sufficiently needed to warrant continuing their operation. Operation of these stations should be discontinued. Data collected at three other stations were identified as having uses specific only to short-term studies; it is recommended that these stations be discontinued at the end of the data-collection phases of the studies. The remaining 45 stations should be maintained in the program for the foreseeable future. The current policy for operation of the 45-station program would require a budget of $180,300 per year. The average standard error of estimation of streamflow records is 17.7 percent. It was shown that this overall level of accuracy at the 45 sites could be maintained with a budget of approximately $170,000 if resources were redistributed among the gages. A minimum budget of $155,000 is required to operate the 45-gage program; a smaller budget would not permit proper service and maintenance of the gages and recorders. At the minimum budget, the average standard error is 25.1 percent. The maximum budget analyzed was $350,000, which resulted in an average standard error of 8.7 percent. Large parts of Maine's interior were identified as having sparse streamflow data. It was recommended that this sparsity be remedied as funds become available.
Calculating sediment discharge from a highway construction site in central Pennsylvania
Reed, L.A.; Ward, J.R.; Wetzel, K.L.
1985-01-01
The Pennsylvania Department of Transportation, the Federal Highway Administration, and the U.S. Geological Survey have cooperated in a study to evaluate two methods of predicting sediment yields during highway construction. Sediment yields were calculated using the Universal Soil Loss and the Younkin Sediment Prediction Equations. Results were compared to the actual measured values, and standard errors and coefficients of correlation were calculated. Sediment discharge from the construction area was determined for storms that occurred during construction of Interstate 81 in a 0.38-square mile basin near Harrisburg, Pennsylvania. Precipitation data tabulated included total rainfall, maximum 30-minute rainfall, kinetic energy, and the erosive index of the precipitation. Highway construction data tabulated included the area disturbed by clearing and grubbing, the area in cuts and fills, the average depths of cuts and fills, the area seeded and mulched, and the area paved. Using the Universal Soil Loss Equation, sediment discharge from the construction area was calculated for storms. The standard error of estimate was 0.40 (about 105 percent), and the coefficient of correlation was 0.79. Sediment discharge from the construction area was also calculated using the Younkin Equation. The standard error of estimate of 0.42 (about 110 percent), and the coefficient of correlation of 0.77 are comparable to those from the Universal Soil Loss Equation.
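The report gives each standard error of estimate both in log units (0.40 and 0.42) and as an approximate percentage. A plausible conversion, assuming the common practice of averaging the upward and downward percentage deviations implied by a log10 standard error (the abstract does not state which conversion was used), is sketched below; it roughly reproduces the quoted percentages.

```python
def log_se_to_percent(se_log10):
    """Convert a standard error in log10 units to an equivalent percent error,
    taken as the average of the upward and downward percentage deviations
    (a conversion commonly used in USGS reports; assumed here)."""
    upper = 10**se_log10 - 1.0        # overestimation side
    lower = 1.0 - 10**(-se_log10)     # underestimation side
    return 100.0 * (upper + lower) / 2.0

print(f"{log_se_to_percent(0.40):.0f} percent")  # ~106, consistent with 'about 105 percent'
print(f"{log_se_to_percent(0.42):.0f} percent")  # ~112, consistent with 'about 110 percent'
```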
Influence of Lipid Composition in Amplifying or Ameliorating Toxicant Effects on Phytoplankton.
1992-04-30
Excerpt consists of a text fragment and front-matter entries: "...since they often have a high lipid content and high concentrations of eicosapentaenoic acid (Sicko-Goad et al. 1988; Volkman et al. 1989; Ahlgren et al. ...)"; list-of-figures and list-of-tables entries on percent composition of total saturated and unsaturated fatty acids with respect to sampling period in the light/dark cycle, diatom species, and fatty acid identification and percent composition with standard errors of all Cyclotella meneghiniana.
Rodríguez-Cerrillo, Matilde; Fernández-Diaz, Eddita; Iñurrieta-Romero, Amaia; Poza-Montoro, Ana
2012-01-01
The purpose of this paper is to describe changes and results obtained after implementation of a quality management system (QMS) according to ISO standards in a Hospital in the Home (HIH) Unit. The paper describes changes made and outcomes achieved. The study took place in the HIH Unit, Clinico Hospital, Madrid, Spain, and examined admissions, mean stay, patient satisfaction, adverse events, returns to hospital, non-admitted referrals, complaints, compliance with protocols, equipment failures, and resolution of urgent consultations. In June 2008, the HIH Unit, Clinico Hospital, obtained ISO certification. The main results achieved are as follows. There was an increase in patient satisfaction: in June 2008, assessment of the quality of care provided by staff was scored at 4.7 (on a scale of 1 to 5); in 2010 it was scored at 4.96. The patient satisfaction rate increased from 92 percent to 98.8 percent. No complaints from patients were received. Unscheduled returns to hospital decreased from 7 percent to 3 percent. There were no medical equipment failures. External suppliers' performance improved. Material and medication needed by staff was available when necessary. The number of admissions increased. Compliance with protocols reached 97 percent. Inappropriate referrals decreased by 8 percent. Six medication-related incidents were detected; in two cases the incident was not due to an error. In the other four cases the error could have been detected before reaching the patient. Implementation of an ISO quality management system allows improved quality of care and patient satisfaction in an HIH Unit.
Determination of fiber volume in graphite/epoxy materials using computer image analysis
NASA Technical Reports Server (NTRS)
Viens, Michael J.
1990-01-01
The fiber volume of graphite/epoxy specimens was determined by analyzing optical images of cross sectioned specimens using image analysis software. Test specimens were mounted and polished using standard metallographic techniques and examined at 1000 times magnification. Fiber volume determined using the optical imaging agreed well with values determined using the standard acid digestion technique. The results were found to agree within 5 percent over a fiber volume range of 45 to 70 percent. The error observed is believed to arise from fiber volume variations within the graphite/epoxy panels themselves. The determination of ply orientation using image analysis techniques is also addressed.
NASA Astrophysics Data System (ADS)
Nogawa, Masamichi; Ching, Chong Thong; Ida, Takeyuki; Itakura, Keiko; Takatani, Setsuo
1997-06-01
A new reflectance pulse oximeter sensor for lower arterial oxygen saturation (SaO2) measurement has been designed and evaluated in animals prior to clinical trials. The new sensor incorporates ten light emitting diode chips for each wavelength of 730 and 880 nm, mounted symmetrically at a radial separation distance of 7 mm around a photodiode chip. The separation distance of 7 mm was chosen to maximize the ratio of the pulsatile to the average plethysmographic signal level at each wavelength. The 730 and 880 nm wavelength combination was determined to obtain a linear relationship between the reflectance ratio of the 730 and 880 nm wavelengths and SaO2. In addition to these features of the sensor, the Fast Fourier Transform method was employed to compute the pulsatile and average signal level at each wavelength. The performance of the new reflectance pulse oximeter sensor was evaluated in dogs in comparison to the 665/910 nm sensor. As predicted by the theoretical simulation based on a 3D photon diffusion theory, the 730/880 nm sensor demonstrated an excellent linearity over the SaO2 range from 100 to 30 percent. For the SaO2 range between 100 and 70 percent, the 665/910 and 730/880 nm sensors showed standard errors of around 3.5 percent and 2.1 percent, respectively, in comparison to the blood samples. For the range between 70 and 30 percent, the standard error of the 730/880 nm sensor was only 2.7 percent, while that of the 665/910 nm sensor was 9.5 percent. The 730/880 nm sensor showed improved accuracy for a wide range of SaO2, particularly over the range between 70 and 30 percent. This new reflectance sensor can provide noninvasive measurement of SaO2 accurately over the wide saturation range from 100 to 30 percent.
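The saturation estimate rests on the ratio of pulsatile (AC) to average (DC) signal levels at the two wavelengths, with the AC and DC components taken from the FFT of each plethysmographic signal. A simplified sketch of that computation on synthetic signals follows; the sampling rate, signal shapes, and the final linear calibration are illustrative assumptions, not values from the paper.

```python
import numpy as np

def ac_dc_ratio(signal, fs, pulse_hz):
    """Pulsatile (AC) to average (DC) ratio of a plethysmographic signal,
    taken from its one-sided FFT amplitude spectrum."""
    spec = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    dc = spec[0]                                           # mean (DC) level
    ac = 2.0 * spec[np.argmin(np.abs(freqs - pulse_hz))]   # amplitude at the pulse frequency
    return ac / dc

# Synthetic 4-second records at 100 Hz with a 1.25-Hz (75 beats/min) pulsatile component.
fs, pulse_hz = 100.0, 1.25
t = np.arange(0, 4, 1 / fs)
sig_730 = 1.0 + 0.02 * np.sin(2 * np.pi * pulse_hz * t)    # reflected light at 730 nm
sig_880 = 1.2 + 0.03 * np.sin(2 * np.pi * pulse_hz * t)    # reflected light at 880 nm

R = ac_dc_ratio(sig_730, fs, pulse_hz) / ac_dc_ratio(sig_880, fs, pulse_hz)
sao2 = 110.0 - 40.0 * R    # hypothetical linear calibration, not the paper's
print(f"R = {R:.3f}, estimated SaO2 = {sao2:.1f} percent")
```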
Just, Beth Haenke; Marc, David; Munns, Megan; Sandefer, Ryan
2016-01-01
Patient identification matching problems are a major contributor to data integrity issues within electronic health records. These issues impede the improvement of healthcare quality through health information exchange and care coordination, and contribute to deaths resulting from medical errors. Despite best practices in the area of patient access and medical record management to avoid duplicating patient records, duplicate records continue to be a significant problem in healthcare. This study examined the underlying causes of duplicate records using a multisite data set of 398,939 patient records with confirmed duplicates and analyzed multiple reasons for data discrepancies between those record matches. The field that had the greatest proportion of mismatches (nondefault values) was the middle name, accounting for 58.30 percent of mismatches. The Social Security number was the second most frequent mismatch, occurring in 53.54 percent of the duplicate pairs. The majority of the mismatches in the name fields were the result of misspellings (53.14 percent in first name and 33.62 percent in last name) or swapped last name/first name, first name/middle name, or last name/middle name pairs. The use of more sophisticated technologies is critical to improving patient matching. However, no amount of advanced technology or increased data capture will completely eliminate human errors. Thus, the establishment of policies and procedures (such as standard naming conventions or search routines) for front-end and back-end staff to follow is foundational for the overall data integrity process. Training staff on standard policies and procedures will result in fewer duplicates created on the front end and more accurate duplicate record matching and merging on the back end. Furthermore, monitoring, analyzing trends, and identifying errors that occur are proactive ways to identify data integrity issues. PMID:27134610
Slow progress on meeting hospital safety standards: learning from the Leapfrog Group's efforts.
Moran, John; Scanlon, Dennis
2013-01-01
In response to the Institute of Medicine's To Err Is Human report on the prevalence of medical errors, the Leapfrog Group, an organization that promotes hospital safety and quality, established a voluntary hospital survey assessing compliance with several safety standards. Using data from the period 2002-07, we conducted the first longitudinal assessment of how hospitals in specific cities and states initially selected by Leapfrog progressed on public reporting and adoption of standards requiring the use of computerized drug order entry and hospital intensivists. Overall, little progress was observed. Reporting rates were unchanged over the study period. Adoption of computerized drug order entry increased from 2.94 percent to 8.13 percent, and intensivist staffing increased from 14.74 percent to 21.40 percent. These findings should not be viewed as an indictment of Leapfrog but may reflect various challenges. For example, hospitals faced no serious threats to their market share if purchasers shifted business away from those that either didn't report data or didn't meet the standards. In the absence of mandatory reporting, policy makers might need to act to address these challenges to ensure improvements in quality.
Methods for estimating magnitude and frequency of peak flows for natural streams in Utah
Kenney, Terry A.; Wilkowske, Chris D.; Wright, Shane J.
2007-01-01
Estimates of the magnitude and frequency of peak streamflows are critical for the safe and cost-effective design of hydraulic structures and stream crossings, and for accurate delineation of flood plains. Engineers, planners, resource managers, and scientists need accurate estimates of peak-flow return frequencies for locations on streams with and without streamflow-gaging stations. The 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows were estimated for 344 unregulated U.S. Geological Survey streamflow-gaging stations in Utah and nearby in bordering states. These data, along with 23 basin and climatic characteristics computed for each station, were used to develop regional peak-flow frequency and magnitude regression equations for 7 geohydrologic regions of Utah. These regression equations can be used to estimate the magnitude and frequency of peak flows for natural streams in Utah within the presented range of predictor variables. Uncertainty, presented as the average standard error of prediction, was computed for each developed equation. Equations developed using data from more than 35 gaging stations had standard errors of prediction that ranged from 35 to 108 percent, and errors for equations developed using data from fewer than 35 gaging stations ranged from 50 to 357 percent.
Methods for estimating streamflow at mountain fronts in southern New Mexico
Waltemeyer, S.D.
1994-01-01
The infiltration of streamflow is potential recharge to alluvial-basin aquifers at or near mountain fronts in southern New Mexico. Data for 13 streamflow-gaging stations were used to determine a relation between mean annual streamflow and basin and climatic conditions. Regression analysis was used to develop an equation that can be used to estimate mean annual streamflow on the basis of drainage area and mean annual precipitation. The average standard error of estimate for this equation is 46 percent. Regression analysis also was used to develop an equation to estimate mean annual streamflow on the basis of active-channel width. Measurements of the width of active channels were determined for 6 of the 13 gaging stations. The average standard error of estimate for this relation is 29 percent. Streamflow estimates made using a regression equation based on channel geometry are considered more reliable than estimates made from an equation based on regional relations of basin and climatic conditions. The sample size used to develop these relations was small, however, and the reported standard error of estimate may not represent that of the entire population. Active-channel-width measurements were made at 23 ungaged sites along the Rio Grande upstream from Elephant Butte Reservoir. Data for additional sites would be needed for a more comprehensive assessment of mean annual streamflow in southern New Mexico.
Wood, Molly S.; Fosness, Ryan L.; Skinner, Kenneth D.; Veilleux, Andrea G.
2016-06-27
The U.S. Geological Survey, in cooperation with the Idaho Transportation Department, updated regional regression equations to estimate peak-flow statistics at ungaged sites on Idaho streams using recent streamflow (flow) data and new statistical techniques. Peak-flow statistics with 80-, 67-, 50-, 43-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities (1.25-, 1.50-, 2.00-, 2.33-, 5.00-, 10.0-, 25.0-, 50.0-, 100-, 200-, and 500-year recurrence intervals, respectively) were estimated for 192 streamgages in Idaho and bordering States with at least 10 years of annual peak-flow record through water year 2013. The streamgages were selected from drainage basins with little or no flow diversion or regulation. The peak-flow statistics were estimated by fitting a log-Pearson type III distribution to records of annual peak flows and applying two additional statistical methods: (1) the Expected Moments Algorithm to help describe uncertainty in annual peak flows and to better represent missing and historical record; and (2) the generalized Multiple Grubbs Beck Test to screen out potentially influential low outliers and to better fit the upper end of the peak-flow distribution. Additionally, a new regional skew was estimated for the Pacific Northwest and used to weight at-station skew at most streamgages. The streamgages were grouped into six regions (numbered 1_2, 3, 4, 5, 6_8, and 7, to maintain consistency in region numbering with a previous study), and the estimated peak-flow statistics were related to basin and climatic characteristics to develop regional regression equations using a generalized least squares procedure. Four out of 24 evaluated basin and climatic characteristics were selected for use in the final regional peak-flow regression equations. Overall, the standard error of prediction for the regional peak-flow regression equations ranged from 22 to 132 percent. Among all regions, regression model fit was best for region 4 in west-central Idaho (average standard error of prediction=46.4 percent; pseudo-R2>92 percent) and region 5 in central Idaho (average standard error of prediction=30.3 percent; pseudo-R2>95 percent). Regression model fit was poor for region 7 in southern Idaho (average standard error of prediction=103 percent; pseudo-R2<78 percent) compared to other regions because few streamgages in region 7 met the criteria for inclusion in the study, and the region's semi-arid climate and associated variability in precipitation patterns cause substantial variability in peak flows. A drainage-area ratio-adjustment method, with ratio exponents estimated using generalized least-squares regression, was presented as an alternative to the regional regression equations if peak-flow estimates are desired at an ungaged site that is close to a streamgage selected for inclusion in this study. The alternative drainage-area ratio-adjustment method is appropriate for use when the drainage-area ratio between the ungaged and gaged sites is between 0.5 and 1.5. The updated regional peak-flow regression equations had lower total error (standard error of prediction) than all regression equations presented in a 1982 study and in four of six regions presented in 2002 and 2003 studies in Idaho. A more extensive streamgage screening process used in the current study resulted in fewer streamgages used in the current study than in the 1982, 2002, and 2003 studies.
Fewer streamgages used and the selection of different explanatory variables were likely causes of increased error in some regions compared to previous studies, but overall, regional peak-flow regression model fit was generally improved for Idaho. The revised statistical procedures and increased streamgage screening applied in the current study most likely resulted in a more accurate representation of natural peak-flow conditions. The updated regional peak-flow regression equations will be integrated in the U.S. Geological Survey StreamStats program to allow users to estimate basin and climatic characteristics and peak-flow statistics at ungaged locations of interest. StreamStats estimates peak-flow statistics with quantifiable certainty only when used at sites with basin and climatic characteristics within the range of input variables used to develop the regional regression equations. Both the regional regression equations and StreamStats should be used to estimate peak-flow statistics only in naturally flowing, relatively unregulated streams without substantial local influences to flow, such as large seeps, springs, or other groundwater-surface water interactions that are not widespread or characteristic of the respective region.
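The drainage-area ratio adjustment transfers a peak-flow statistic from a gaged site to a nearby ungaged site on the same stream as Q_u = Q_g (A_u / A_g)^x. A minimal sketch with a placeholder exponent (the report estimates region-specific exponents by generalized least-squares regression) is:

```python
def area_ratio_adjustment(q_gaged, area_gaged, area_ungaged, exponent=0.9):
    """Transfer a peak-flow statistic from a gaged site to a nearby ungaged site
    on the same stream: Q_u = Q_g * (A_u / A_g)^x.  The exponent here is a
    placeholder, not a value published for any Idaho region."""
    ratio = area_ungaged / area_gaged
    if not 0.5 <= ratio <= 1.5:
        raise ValueError("method recommended only for area ratios of 0.5-1.5")
    return q_gaged * ratio**exponent

# Hypothetical example: a 1-percent AEP flow of 5,400 ft^3/s at a 210 mi^2 gage,
# transferred to an ungaged site draining 260 mi^2.
print(f"{area_ratio_adjustment(5400, 210, 260):.0f} ft^3/s")
```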
Hess, Glen W.
2002-01-01
Techniques for estimating monthly streamflow-duration characteristics at ungaged and partial-record sites in central Nevada have been updated. These techniques were developed using streamflow records at six continuous-record sites, basin physical and climatic characteristics, and concurrent streamflow measurements at four partial-record sites. Two methods, the basin-characteristic method and the concurrent-measurement method, were developed to provide estimating techniques for selected streamflow characteristics at ungaged and partial-record sites in central Nevada. In the first method, logarithmic-regression analyses were used to relate monthly mean streamflows (from all months and by month) from continuous-record gaging sites of various percent exceedence levels or monthly mean streamflows (by month) to selected basin physical and climatic variables at ungaged sites. Analyses indicate that the total drainage area and percent of drainage area at altitudes greater than 10,000 feet are the most significant variables. For the equations developed from all months of monthly mean streamflow, the coefficient of determination averaged 0.84 and the standard error of estimate of the relations for the ungaged sites averaged 72 percent. For the equations derived from monthly means by month, the coefficient of determination averaged 0.72 and the standard error of estimate of the relations averaged 78 percent. If standard errors are compared, the relations developed in this study appear generally to be less accurate than those developed in a previous study. However, the new relations are based on additional data and the slight increase in error may be due to the wider range of streamflow for a longer period of record, 1995-2000. In the second method, streamflow measurements at partial-record sites were correlated with concurrent streamflows at nearby gaged sites by the use of linear-regression techniques. Statistical measures of results using the second method typically indicated greater accuracy than for the first method. However, to make estimates for individual months, the concurrent-measurement method requires several years additional streamflow data at more partial-record sites. Thus, exceedence values for individual months are not yet available due to the low number of concurrent-streamflow-measurement data available. Reliability, limitations, and applications of both estimating methods are described herein.
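The concurrent-measurement method amounts to a flow-to-flow regression: discharge measurements at the partial-record site are related, usually in log space, to same-day flows at a nearby continuous-record gage. A sketch with hypothetical measurement pairs (not data from the report) is:

```python
import numpy as np

# Concurrent streamflow measurements (ft^3/s): hypothetical values at a
# partial-record site paired with same-day flows at a nearby gaged site.
q_gaged   = np.array([12.0, 30.0, 55.0, 80.0, 140.0, 220.0])
q_partial = np.array([ 3.1,  6.8, 11.5, 16.0,  26.0,  41.0])

# Fit a straight line in log space, the usual form for flow-to-flow relations.
slope, intercept = np.polyfit(np.log10(q_gaged), np.log10(q_partial), 1)

def estimate_partial(q_at_gage):
    """Estimate flow at the partial-record site from a concurrent gaged flow."""
    return 10 ** (intercept + slope * np.log10(q_at_gage))

print(f"Estimated flow for a gaged flow of 100 ft^3/s: {estimate_partial(100):.1f} ft^3/s")
```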
Verification of unfold error estimates in the unfold operator code
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fehl, D.L.; Biggs, F.
Spectral unfolding is an inverse mathematical operation that attempts to obtain spectral source information from a set of response functions and data measurements. Several unfold algorithms have appeared over the past 30 years; among them is the unfold operator (UFO) code written at Sandia National Laboratories. In addition to an unfolded spectrum, the UFO code also estimates the unfold uncertainty (error) induced by estimated random uncertainties in the data. In UFO the unfold uncertainty is obtained from the error matrix. This built-in estimate has now been compared to error estimates obtained by running the code in a Monte Carlo fashion with prescribed data distributions (Gaussian deviates). In the test problem studied, data were simulated from an arbitrarily chosen blackbody spectrum (10 keV) and a set of overlapping response functions. The data were assumed to have an imprecision of 5 percent (standard deviation). One hundred random data sets were generated. The built-in estimate of unfold uncertainty agreed with the Monte Carlo estimate to within the statistical resolution of this relatively small sample size (95 percent confidence level). A possible 10 percent bias between the two methods was unresolved. The Monte Carlo technique is also useful in underdetermined problems, for which the error matrix method does not apply. UFO has been applied to the diagnosis of low energy x rays emitted by Z-pinch and ion-beam driven hohlraums. Copyright 1997 American Institute of Physics.
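The comparison described is between an analytic, error-matrix propagation of the assumed 5 percent data imprecision and the spread of unfolds of many randomly perturbed data sets. A toy linear analogue of that check (not the UFO algorithm or its test problem) might look like:

```python
import numpy as np

rng = np.random.default_rng(1)

# A toy linear "unfold": recover spectrum x from data d = R @ x by direct solution.
R = np.array([[1.0, 0.4, 0.1],
              [0.3, 1.0, 0.4],
              [0.1, 0.3, 1.0]])          # made-up overlapping response functions
x_true = np.array([2.0, 5.0, 3.0])
d_true = R @ x_true
sigma = 0.05 * d_true                    # 5 percent imprecision, as in the test problem

# Analytic (error-matrix) estimate of the unfold uncertainty.
R_inv = np.linalg.inv(R)
cov_analytic = R_inv @ np.diag(sigma**2) @ R_inv.T

# Monte Carlo estimate: unfold 100 Gaussian-perturbed data sets, as in the paper.
samples = np.array([np.linalg.solve(R, d_true + rng.normal(0, sigma))
                    for _ in range(100)])
cov_mc = np.cov(samples, rowvar=False)

print("analytic std devs:   ", np.sqrt(np.diag(cov_analytic)))
print("Monte Carlo std devs:", np.sqrt(np.diag(cov_mc)))
```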
Claims, errors, and compensation payments in medical malpractice litigation.
Studdert, David M; Mello, Michelle M; Gawande, Atul A; Gandhi, Tejal K; Kachalia, Allen; Yoon, Catherine; Puopolo, Ann Louise; Brennan, Troyen A
2006-05-11
In the current debate over tort reform, critics of the medical malpractice system charge that frivolous litigation--claims that lack evidence of injury, substandard care, or both--is common and costly. Trained physicians reviewed a random sample of 1452 closed malpractice claims from five liability insurers to determine whether a medical injury had occurred and, if so, whether it was due to medical error. We analyzed the prevalence, characteristics, litigation outcomes, and costs of claims that lacked evidence of error. For 3 percent of the claims, there were no verifiable medical injuries, and 37 percent did not involve errors. Most of the claims that were not associated with errors (370 of 515 [72 percent]) or injuries (31 of 37 [84 percent]) did not result in compensation; most that involved injuries due to error did (653 of 889 [73 percent]). Payment of claims not involving errors occurred less frequently than did the converse form of inaccuracy--nonpayment of claims associated with errors. When claims not involving errors were compensated, payments were significantly lower on average than were payments for claims involving errors (313,205 dollars vs. 521,560 dollars, P=0.004). Overall, claims not involving errors accounted for 13 to 16 percent of the system's total monetary costs. For every dollar spent on compensation, 54 cents went to administrative expenses (including those involving lawyers, experts, and courts). Claims involving errors accounted for 78 percent of total administrative costs. Claims that lack evidence of error are not uncommon, but most are denied compensation. The vast majority of expenditures go toward litigation over errors and payment of them. The overhead costs of malpractice litigation are exorbitant. Copyright 2006 Massachusetts Medical Society.
NASA Technical Reports Server (NTRS)
Otterson, D. A.; Seng, G. T.
1985-01-01
A high-performance liquid chromatography (HPLC) method to estimate four aromatic classes in middistillate fuels is presented. Average refractive indices are used in a correlation to obtain the concentrations of each of the aromatic classes from HPLC data. The aromatic class concentrations can be obtained in about 15 min when the concentration of the aromatic group is known. Seven fuels with a wide range of compositions were used to test the method. Relative errors in the concentrations of the two major aromatic classes were not over 10 percent. Absolute errors of the minor classes were all less than 0.3 percent. The data show that errors in group-type analyses using sulfuric acid derived standards are greater for fuels containing high concentrations of polycyclic aromatics. Corrections are based on the change in refractive index of the aromatic fraction which can occur when sulfuric acid and the fuel react. These corrections improved both the precision and the accuracy of the group-type results.
Effects of Rifle Handling, Target Acquisition, and Trigger Control on Simulated Shooting Performance
2014-05-06
qualification task, and covers all of the training requirements listed in the Soldier's Manual of Common Tasks: Warrior Skills Level 1 handbook...allow for more direct and standardized training based on common Soldier errors. If discernible patterns in these core elements of marksmanship were...more than 50 percent of variance in marksmanship performance on a standard EST weapons qualification task for participants whose Snellen acuity
Bankfull characteristics of Ohio streams and their relation to peak streamflows
Sherwood, James M.; Huitger, Carrie A.
2005-01-01
Regional curves, simple-regression equations, and multiple-regression equations were developed to estimate bankfull width, bankfull mean depth, bankfull cross-sectional area, and bankfull discharge of rural, unregulated streams in Ohio. The methods are based on geomorphic, basin, and flood-frequency data collected at 50 study sites on unregulated natural alluvial streams in Ohio, of which 40 sites are near streamflow-gaging stations. The regional curves and simple-regression equations relate the bankfull characteristics to drainage area. The multiple-regression equations relate the bankfull characteristics to drainage area, main-channel slope, main-channel elevation index, median bed-material particle size, bankfull cross-sectional area, and local-channel slope. Average standard errors of prediction for bankfull width equations range from 20.6 to 24.8 percent; for bankfull mean depth, 18.8 to 20.6 percent; for bankfull cross-sectional area, 25.4 to 30.6 percent; and for bankfull discharge, 27.0 to 78.7 percent. The simple-regression (drainage-area only) equations have the highest average standard errors of prediction. The multiple-regression equations in which the explanatory variables included drainage area, main-channel slope, main-channel elevation index, median bed-material particle size, bankfull cross-sectional area, and local-channel slope have the lowest average standard errors of prediction. Field surveys were done at each of the 50 study sites to collect the geomorphic data. Bankfull indicators were identified and evaluated, cross-section and longitudinal profiles were surveyed, and bed- and bank-material were sampled. Field data were analyzed to determine various geomorphic characteristics such as bankfull width, bankfull mean depth, bankfull cross-sectional area, bankfull discharge, streambed slope, and bed- and bank-material particle-size distribution. The various geomorphic characteristics were analyzed by means of a combination of graphical and statistical techniques. The logarithms of the annual peak discharges for the 40 gaged study sites were fit by a Pearson Type III frequency distribution to develop flood-peak discharges associated with recurrence intervals of 2, 5, 10, 25, 50, and 100 years. The peak-frequency data were related to geomorphic, basin, and climatic variables by multiple-regression analysis. Simple-regression equations were developed to estimate 2-, 5-, 10-, 25-, 50-, and 100-year flood-peak discharges of rural, unregulated streams in Ohio from bankfull channel cross-sectional area. The average standard errors of prediction are 31.6, 32.6, 35.9, 41.5, 46.2, and 51.2 percent, respectively. The study and methods developed are intended to improve understanding of the relations between geomorphic, basin, and flood characteristics of streams in Ohio and to aid in the design of hydraulic structures, such as culverts and bridges, where stability of the stream and structure is an important element of the design criteria. The study was done in cooperation with the Ohio Department of Transportation and the U.S. Department of Transportation, Federal Highway Administration.
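Regional curves of this kind are power functions of drainage area fit in log-log space. A minimal sketch of fitting such a curve, using fabricated site data rather than the Ohio measurements, is:

```python
import numpy as np

# Hypothetical bankfull widths (ft) and drainage areas (mi^2) for a few sites.
area  = np.array([5.0, 18.0, 42.0, 95.0, 210.0, 480.0])
width = np.array([14.0, 26.0, 38.0, 55.0, 78.0, 110.0])

# Regional curves are usually fit as power functions, W = a * A^b,
# i.e., a straight line in log-log space.
b, log_a = np.polyfit(np.log10(area), np.log10(width), 1)
a = 10 ** log_a

print(f"W = {a:.1f} * A^{b:.2f}")
print(f"Predicted bankfull width for a 60 mi^2 basin: {a * 60**b:.0f} ft")
```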
Gingerich, Stephen B.
2005-01-01
Flow-duration statistics under natural (undiverted) and diverted flow conditions were estimated for gaged and ungaged sites on 21 streams in northeast Maui, Hawaii. The estimates were made using the optimal combination of continuous-record gaging-station data, low-flow measurements, and values determined from regression equations developed as part of this study. Estimated 50- and 95-percent flow-duration statistics for streams are presented, and the analyses done to develop and evaluate the methods used in estimating the statistics are described. Estimated streamflow statistics are presented for sites where various amounts of streamflow data are available as well as for locations where no data are available. Daily mean flows were used to determine flow-duration statistics for continuous-record stream-gaging stations in the study area following U.S. Geological Survey established standard methods. Duration discharges of 50 and 95 percent were determined from total flow and base flow for each continuous-record station. The index-station method was used to adjust all of the streamflow records to a common, long-term period. The gaging station on West Wailuaiki Stream (16518000) was chosen as the index station because of its record length (1914-2003) and favorable geographic location. Adjustments based on the index-station method resulted in decreases to the 50-percent duration total flow, 50-percent duration base flow, 95-percent duration total flow, and 95-percent duration base flow computed on the basis of short-term records that averaged 7, 3, 4, and 1 percent, respectively. For the drainage basin of each continuous-record gaged site and selected ungaged sites, morphometric, geologic, soil, and rainfall characteristics were quantified using Geographic Information System techniques. Regression equations relating the non-diverted streamflow statistics to basin characteristics of the gaged basins were developed using ordinary-least-squares regression analyses. Rainfall rate, maximum basin elevation, and the elongation ratio of the basin were the basin characteristics used in the final regression equations for 50-percent duration total flow and base flow. Rainfall rate and maximum basin elevation were used in the final regression equations for the 95-percent duration total flow and base flow. The relative errors between observed and estimated flows ranged from 10 to 20 percent for the 50-percent duration total flow and base flow, and from 29 to 56 percent for the 95-percent duration total flow and base flow. The regression equations developed for this study were used to determine the 50-percent duration total flow, 50-percent duration base flow, 95-percent duration total flow, and 95-percent duration base flow at selected ungaged diverted and undiverted sites. Estimated streamflow, prediction intervals, and standard errors were determined for 48 ungaged sites in the study area and for three gaged sites west of the study area. Relative errors were determined for sites where measured values of 95-percent duration discharge of total flow were available. East of Keanae Valley, the 95-percent duration discharge equation generally underestimated flow, and within and west of Keanae Valley, the equation generally overestimated flow. Reductions in 50- and 95-percent flow-duration values in stream reaches affected by diversions throughout the study area average 58 to 60 percent.
Estimating Flow-Duration and Low-Flow Frequency Statistics for Unregulated Streams in Oregon
Risley, John; Stonewall, Adam J.; Haluska, Tana
2008-01-01
Flow statistical datasets, basin-characteristic datasets, and regression equations were developed to provide decision makers with surface-water information needed for activities such as water-quality regulation, water-rights adjudication, biological habitat assessment, infrastructure design, and water-supply planning and management. The flow statistics, which included annual and monthly period of record flow durations (5th, 10th, 25th, 50th, and 95th percent exceedances) and annual and monthly 7-day, 10-year (7Q10) and 7-day, 2-year (7Q2) low flows, were computed at 466 streamflow-gaging stations at sites with unregulated flow conditions throughout Oregon and adjacent areas of neighboring States. Regression equations, created from the flow statistics and basin characteristics of the stations, can be used to estimate flow statistics at ungaged stream sites in Oregon. The study area was divided into 10 regression modeling regions based on ecological, topographic, geologic, hydrologic, and climatic criteria. In total, 910 annual and monthly regression equations were created to predict the 7 flow statistics in the 10 regions. Equations to predict the five flow-duration exceedance percentages and the two low-flow frequency statistics were created with Ordinary Least Squares and Generalized Least Squares regression, respectively. The standard errors of estimate of the equations created to predict the 5th and 95th percent exceedances had medians of 42.4 and 64.4 percent, respectively. The standard errors of prediction of the equations created to predict the 7Q2 and 7Q10 low-flow statistics had medians of 51.7 and 61.2 percent, respectively. Standard errors for regression equations for sites in western Oregon were smaller than those in eastern Oregon partly because of a greater density of available streamflow-gaging stations in western Oregon than eastern Oregon. High-flow regression equations (such as the 5th and 10th percent exceedances) also generally were more accurate than the low-flow regression equations (such as the 95th percent exceedance and 7Q10 low-flow statistic). The regression equations predict unregulated flow conditions in Oregon. Flow estimates need to be adjusted if they are used at ungaged sites that are regulated by reservoirs or affected by water-supply and agricultural withdrawals if actual flow conditions are of interest. The regression equations are installed in the USGS StreamStats Web-based tool (http://water.usgs.gov/osw/streamstats/index.html, accessed July 16, 2008). StreamStats provides users with a set of annual and monthly flow-duration and low-flow frequency estimates for ungaged sites in Oregon in addition to the basin characteristics for the sites. Prediction intervals at the 90-percent confidence level also are automatically computed.
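A flow-duration exceedance value is simply a percentile of the daily-flow record read from the high end: the 5-percent exceedance is the flow equaled or exceeded 5 percent of the time. A short sketch, using a synthetic record in place of gaged data, is:

```python
import numpy as np

def exceedance_flow(daily_flows, percent_exceedance):
    """Flow equaled or exceeded the given percentage of time, from a daily
    record; equivalent to the (100 - p)th percentile of the flows."""
    return np.percentile(daily_flows, 100.0 - percent_exceedance)

# Synthetic stand-in for roughly 10 years of daily streamflow (ft^3/s).
rng = np.random.default_rng(0)
flows = np.exp(rng.normal(3.0, 1.0, size=3650))

for p in (5, 10, 25, 50, 95):
    print(f"Q{p:>2} (exceeded {p}% of the time): {exceedance_flow(flows, p):8.1f} ft^3/s")
```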
Blood collection techniques, heparin and quinidine protein binding.
Kessler, K M; Leech, R C; Spann, J F
1979-02-01
With the use of glass syringes without heparin and all glass equipment, the percent of unbound quinidine was measured by ultrafiltration and a double-extraction assay method after addition of 2 microgram/ml of quinidine sulfate. Compared to the all-glass method, collection of blood using Vacutainers resulted in an erroneous and variable decrease in quinidine binding related to blood to rubber-stopper contact. With glass, the unbound quinidine fraction was (mean +/- standard error) 10 +/- 1% in 10 normal volunteers, 8.5 +/- 1.5% in 10 patients with congestive heart failure, and 11 +/- 2% in 11 patients with chronic renal failure (although in 8 of the latter 11 patients the percent of unbound quinidine was 4 or more standard errors from the mean of the normal group). During cardiac catheterization, patients had markedly elevated unbound quinidine fractions: 24 +/- 2% (p less than 0.001). This abnormality coincided with the addition of heparin in vivo and was less apparent after the addition of up to 10 U/ml of heparin in vitro (120% and 29% increase in unbound quinidine fractions, respectively). Quinidine binding should be measured with all glass or equivalent equipment.
After the Medication Error: Recent Nursing Graduates' Reflections on Adequacy of Education.
Treiber, Linda A; Jones, Jackie H
2018-05-01
The purpose of this study was to better understand individual- and system-level factors surrounding making a medication error from the perspective of recent Bachelor of Science in Nursing graduates. Online survey mixed-methods items included perceptions of adequacy of preparatory nursing education, contributory variables, emotional responses, and treatment by employer following the error. Of the 168 respondents, 55% had made a medication error. Errors resulted from inexperience, rushing, technology, staffing, and patient acuity. Twenty-four percent did not report their errors. Key themes for improving education included more practice in varied clinical areas, intensive pharmacological preparation, practical instruction in functioning within the health care environment, and coping after making medication errors. Errors generally caused emotional distress in the error maker. Overall, perceived treatment after the error reflected supportive environments, where nurses were generally treated with respect, fair treatment, and understanding. Opportunities for nursing education include second victim awareness and reinforcing professional practice standards. [J Nurs Educ. 2018;57(5):275-280.]. Copyright 2018, SLACK Incorporated.
Cost-effectiveness of the stream-gaging program in Kentucky
Ruhl, K.J.
1989-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Kentucky. The total surface-water program includes 97 daily-discharge stations, 12 stage-only stations, and 35 crest-stage stations and is operated on a budget of $950,700. One station used for research lacks an adequate source of funding and should be discontinued when the research ends. Most stations in the network are multiple-use, with 65 stations operated for the purpose of defining hydrologic systems, 48 for project operation, 47 for definition of regional hydrology, and 43 for hydrologic forecasting purposes. Eighteen stations support water-quality monitoring activities, one station is used for planning and design, and one station is used for research. The average standard error of estimation of streamflow records was determined only for stations in the Louisville Subdistrict. Under current operating policy, with a budget of $223,500, the average standard error of estimation is 28.5%. Altering the travel routes and measurement frequency to reduce the amount of lost stage record would allow a slight decrease in standard error to 26.9%. The results indicate that the collection of streamflow records in the Louisville Subdistrict is cost effective in its present mode of operation. In the Louisville Subdistrict, a minimum budget of $214,200 is required to operate the current network at an average standard error of 32.7%. A budget less than this does not permit proper service and maintenance of the gages and recorders. The maximum budget analyzed was $268,200, which would result in an average standard error of 16.9%, indicating that if the budget were increased by 20%, the standard error would be reduced by about 40%. (USGS)
Curran, Christopher A.; Eng, Ken; Konrad, Christopher P.
2012-01-01
Regional low-flow regression models for estimating Q7,10 at ungaged stream sites are developed from the records of daily discharge at 65 continuous gaging stations (including 22 discontinued gaging stations) for the purpose of evaluating explanatory variables. By incorporating the base-flow recession time constant τ as an explanatory variable in the regression model, the root-mean square error for estimating Q7,10 at ungaged sites can be lowered to 72 percent (for known values of τ), which is 42 percent less than if only basin area and mean annual precipitation are used as explanatory variables. If partial-record sites are included in the regression data set, τ must be estimated from pairs of discharge measurements made during continuous periods of declining low flows. Eight measurement pairs are optimal for estimating τ at partial-record sites, and result in a lowering of the root-mean square error by 25 percent. A low-flow survey strategy that includes paired measurements at partial-record sites requires additional effort and planning beyond a standard strategy, but could be used to enhance regional estimates of τ and potentially reduce the error of regional regression models for estimating low-flow characteristics at ungaged sites.
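If base flow recedes approximately exponentially, Q(t) = Q0 exp(-t/tau), then each pair of measurements made dt days apart during a continuous recession yields an estimate tau = dt / ln(Q1/Q2), and a site value can be taken as the average over the pairs. A sketch with hypothetical measurement pairs (the report recommends eight pairs; four are shown here for brevity) is:

```python
import numpy as np

def recession_time_constant(q1, q2, dt_days):
    """Base-flow recession time constant (days) from a pair of discharge
    measurements made dt_days apart during a continuous recession,
    assuming exponential decay Q(t) = Q0 * exp(-t / tau)."""
    return dt_days / np.log(q1 / q2)

# Hypothetical measurement pairs (ft^3/s) and their spacing in days.
pairs = [(12.0, 8.5, 10), (9.0, 6.1, 12), (20.0, 13.0, 14), (7.5, 5.6, 9)]
taus = [recession_time_constant(q1, q2, dt) for q1, q2, dt in pairs]

print(f"tau estimates (days): {[round(t, 1) for t in taus]}")
print(f"site estimate of tau: {np.mean(taus):.1f} days")
```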
Field and laboratory procedures used in a soil chronosequence study
Singer, Michael J.; Janitzky, Peter
1986-01-01
In 1978, the late Denis Marchand initiated a research project entitled "Soil Correlation and Dating at the U.S. Geological Survey" to determine the usefulness of soils in solving geologic problems. Marchand proposed to establish soil chronosequences that could be dated independently of soil development by using radiometric and other numeric dating methods. In addition, by comparing dated chronosequences in different environments, rates of soil development could be studied and compared among varying climates and mineralogical conditions. The project was fundamental in documenting the value of soils in studies of mapping, correlating, and dating late Cenozoic deposits and in studying soil genesis. All published reports by members of the project are included in the bibliography. The project demanded that methods be adapted or developed to ensure comparability over a wide variation in soil types. Emphasis was placed on obtaining professional expertise and on establishing consistent techniques, especially for the field, laboratory, and data-compilation methods. Since 1978, twelve chronosequences have been sampled and analyzed by members of this project, and methods have been established and used consistently for analysis of the samples. The goals of this report are to: document the methods used for the study on soil chronosequences; present the results of tests that were run for precision, accuracy, and effectiveness; and discuss our modifications to standard procedures. Many of the methods presented herein are standard and have been reported elsewhere. However, we assume less prior analytical knowledge in our descriptions; thus, the manual should be easy to follow for the inexperienced analyst. Each chapter presents one or more references to the basic principle, an equipment and reagents list, and the detailed procedure. In some chapters this is followed by additional remarks or example calculations. The flow diagram in figure 1 outlines the step-by-step procedures used to obtain and analyze soil samples for this study. The soils analyzed had a wide range of characteristics (such as clay content, mineralogy, salinity, and acidity). Initially, a major task was to test and select methods that could be applied and interpreted similarly for the various types of soils. Tests were conducted to establish the effectiveness and comparability of analytical techniques, and the data for such tests are included in figures, tables, and discussions. In addition, many replicate analyses of samples have established a "standard error" or "coefficient of variance" which indicates the average reproducibility of each laboratory procedure. These averaged errors are reported as a percentage of a given value. For example, in particle-size determination, a 3 percent error for 10 percent clay content equals 10 ± 0.3 percent clay. The error sources were examined to determine, for example, whether the error in particle-size determination was dependent on clay content. No such biases were found, and data are reported as percent error in the text and in tables of reproducibility.
Experimental comparison of icing cloud instruments
NASA Technical Reports Server (NTRS)
Olsen, W.; Takeuchi, D. M.; Adams, K.
1983-01-01
Icing cloud instruments were tested in the spray cloud of the Icing Research Tunnel (IRT) in order to determine their relative accuracy and their limitations over a broad range of conditions. It was found that the average of the readings from each of the liquid water content (LWC) instruments tested agreed closely with each other and with the IRT calibration, but all have a data scatter (+ or - one standard deviation) of about + or - 20 percent. The effect of this + or - 20 percent uncertainty is probably acceptable in aero-penalty and deicer experiments. Existing laser spectrometers proved to be too inaccurate for LWC measurements. The error due to water runoff was the same for all ice accretion LWC instruments. Any given laser spectrometer proved to be highly repeatable in its indications of volume median drop size (DVM), LWC, and drop size distribution. However, there was significant disagreement between different spectrometers of the same model, even after careful standard calibration and data analysis. The scatter about the mean of the DVM data from five Axial Scattering Spectrometer Probes was + or - 20 percent (+ or - one standard deviation), and the average was 20 percent higher than the old IRT calibration. The + or - 20 percent uncertainty in DVM can cause an unacceptable variation in the drag coefficient of an airfoil with ice; however, the variation in a deicer performance test may be acceptable.
Kenny, Sarah J; Palacios-Derflingher, Luz; Owoeye, Oluwatoyosi B A; Whittaker, Jackie L; Emery, Carolyn A
2018-03-15
Critical appraisal of research investigating risk factors for musculoskeletal injury in dancers suggests that high-quality reliability studies are lacking. The purpose of this study was to determine between-day reliability of pre-participation screening (PPS) components in pre-professional ballet and contemporary dancers. Thirty-eight dancers (35 female, 3 male; median age: 18 years; range: 11 to 30 years) participated. Screening components (Athletic Coping Skills Inventory-28, body mass index, percent total body fat, total bone mineral density, Foot Posture Index-6, hip and ankle range of motion, three lumbopelvic control tasks, unipedal dynamic balance, and the Y-Balance Test) were conducted one week apart. Intra-class correlation coefficients (ICCs; 95% confidence intervals), standard error of measurement, minimal detectable change (MDC), Bland-Altman methods of agreement [95% limits of agreement (LOA)], Cohen's kappa coefficients, standard error, and percent agreements were calculated. Depending on the screening component, ICC estimates ranged from 0.51 to 0.98, kappa coefficients varied between -0.09 and 0.47, and percent agreement spanned 71% to 95%. Wide 95% LOA were demonstrated by the Foot Posture Index-6 (right: -6.06, 7.31), passive hip external rotation (right: -9.89, 16.54), and passive supine turnout (left: -15.36, 17.58). The PPS components examined demonstrated moderate to excellent relative reliability, with mean between-day differences less than the MDC or sufficient percent agreement across all assessments. However, due to wide 95% limits of agreement, the Foot Posture Index-6 and passive hip range of motion are not recommended for screening injury risk in pre-professional dancers.
Data integrity, reliability and fraud in medical research.
Baerlocher, Mark Otto; O'Brien, Jeremy; Newton, Marshall; Gautam, Tina; Noble, Jason
2010-02-01
Data reliability in original research requires collective trust from the academic community. Standards exist to ensure data integrity, but these safeguards are applied non-uniformly so errors or even fraud may still exist in the literature. To examine the prevalence and consequences of data errors, data reliability safeguards and fraudulent data among medical academics. Corresponding authors of every fourth primary research paper published in the Journal of the American Medical Association (2001-2003), Canadian Medical Association Journal (2001-2003), British Medical Journal (1998-2000), and Lancet (1998-2000) were surveyed electronically. Questions focused on each author's personal experience with data reliability, data errors and data interpretation. Sixty-five percent (127/195) of corresponding authors responded. Ninety-four percent of respondents accepted full responsibility for the integrity of the last manuscript on which they were listed as co-author; however, 21% had discovered incorrect data after publication in previous manuscripts they had co-authored. Fraudulent data was discovered by 4% of respondents in their previous work. Four percent also noted 'smudged' data. Eighty-seven percent of respondents used data reliability safeguards in their last published manuscript, typically data review by multiple authors or double data entry. Twenty-one percent were involved in a paper that was submitted despite disagreement about the interpretation of the results, although the disagreeing author commonly withdrew from authorship. Data reliability remains a difficult issue in medical literature. A significant proportion of respondents did not use data reliability safeguards. Research fraud does exist in academia; however, it was not reported to be highly prevalent. Copyright 2009 European Federation of Internal Medicine. Published by Elsevier B.V. All rights reserved.
Large scale Wyoming transportation data: a resource planning tool
O'Donnell, Michael S.; Fancher, Tammy S.; Freeman, Aaron T.; Ziegler, Abra E.; Bowen, Zachary H.; Aldridge, Cameron L.
2014-01-01
The U.S. Geological Survey Fort Collins Science Center created statewide roads data for the Bureau of Land Management Wyoming State Office using 2009 aerial photography from the National Agriculture Imagery Program. The updated roads data resolves known concerns of omission, commission, and inconsistent representation of map scale, attribution, and ground reference dates which were present in the original source data. To ensure a systematic and repeatable approach of capturing roads on the landscape using on-screen digitizing from true color National Agriculture Imagery Program imagery, we developed a photogrammetry key and quality assurance/quality control protocols. Therefore, the updated statewide roads data will support the Bureau of Land Management’s resource management requirements with a standardized map product representing 2009 ground conditions. The updated Geographic Information System roads data set product, represented at 1:4,000 and +/- 10 meters spatial accuracy, contains 425,275 kilometers within eight attribute classes. The quality control of these products indicated a 97.7 percent accuracy of aspatial information and 98.0 percent accuracy of spatial locations. Approximately 48 percent of the updated roads data was corrected for spatial errors of greater than 1 meter relative to the pre-existing road data. Twenty-six percent of the updated roads involved correcting spatial errors of greater than 5 meters and 17 percent of the updated roads involved correcting spatial errors of greater than 9 meters. The Bureau of Land Management, other land managers, and researchers can use these new statewide roads data set products to support important studies and management decisions regarding land use changes, transportation and planning needs, transportation safety, wildlife applications, and other studies.
Error in total ozone measurements arising from aerosol attenuation
NASA Technical Reports Server (NTRS)
Thomas, R. W. L.; Basher, R. E.
1979-01-01
A generalized least squares method for deducing both total ozone and aerosol extinction spectrum parameters from Dobson spectrophotometer measurements was developed. An error analysis applied to this system indicates that there is little advantage to additional measurements once a sufficient number of line pairs have been employed to solve for the selected detail in the attenuation model. It is shown that when there is a predominance of small particles (less than about 0.35 microns in diameter) the total ozone from the standard AD system is too high by about one percent. When larger particles are present the derived total ozone may be an overestimate or an underestimate but serious errors occur only for narrow polydispersions.
Understanding Risk Tolerance and Building an Effective Safety Culture
NASA Technical Reports Server (NTRS)
Loyd, David
2018-01-01
Estimates indicate that 65 to 90 percent of catastrophic mishaps are due to human error, and approximately 75 percent of NASA's mishap causes are human factors-related. As much as we'd like to error-proof our work environment, even the most automated and complex technical endeavors require human interaction... and are vulnerable to human frailty. Industry and government are focusing not only on integrating human factors into hazardous work environments, but also on practical approaches to cultivating a strong Safety Culture that diminishes risk. Industry and government organizations have recognized the value of monitoring leading indicators to identify potential risk vulnerabilities. NASA has adapted this approach to assess risk controls associated with hazardous, critical, and complex facilities. NASA's facility risk assessments integrate commercial loss control, OSHA (Occupational Safety and Health Administration) Process Safety, API (American Petroleum Institute) Performance Indicator Standard, and NASA Operational Readiness Inspection concepts to identify risk control vulnerabilities.
Skin Friction at Very High Reynolds Numbers in the National Transonic Facility
NASA Technical Reports Server (NTRS)
Watson, Ralph D.; Anders, John B.; Hall, Robert M.
2006-01-01
Skin friction coefficients were derived from measurements using standard measurement technologies on an axisymmetric cylinder in the NASA Langley National Transonic Facility (NTF) at Mach numbers from 0.2 to 0.85. The pressure gradient was nominally zero, the wall temperature was nominally adiabatic, and the ratio of boundary layer thickness to model diameter within the measurement region was 0.10 to 0.14, varying with distance along the model. Reynolds numbers based on momentum thicknesses ranged from 37,000 to 605,000. The measurements approximately doubled the range of available data for flat plate skin friction coefficients. Three different techniques were used to measure surface shear. The maximum error of Preston tube measurements was estimated to be 2.5 percent, while that of Clauser derived measurements was estimated to be approximately 5 percent. Direct measurements by skin friction balance proved to be subject to large errors and were not considered reliable.
Accuracy of acoustic velocity metering systems for measurement of low velocity in open channels
Laenen, Antonius; Curtis, R. E.
1989-01-01
Acoustic velocity meter (AVM) accuracy depends on equipment limitations, the accuracy of acoustic-path length and angle determination, and the stability of the relation between mean velocity and acoustic-path velocity. Equipment limitations depend on path length and angle, transducer frequency, timing oscillator frequency, and signal-detection scheme. Typically, the velocity error from this source is about ±1 to ±10 millimeters per second. Error in acoustic-path angle or length will result in a proportional measurement bias. Typically, an angle error of one degree will result in a velocity error of 2 percent, and a path-length error of 1 meter in 100 meters will result in an error of 1 percent. Ray bending (signal refraction) depends on path length and density gradients present in the stream. Any deviation from a straight acoustic path between transducers will change the unique relation between path velocity and mean velocity. These deviations will then introduce error in the mean velocity computation. Typically, for a 200-meter path length, the resultant error is less than 1 percent, but for a 1,000-meter path length, the error can be greater than 10 percent. Recent laboratory and field tests have substantiated assumptions of equipment limitations. Tow-tank tests of an AVM system with a 4.69-meter path length yielded an average standard deviation error of 9.3 millimeters per second, and field tests of an AVM system with a 20.5-meter path length yielded an average standard deviation error of 4 millimeters per second. (USGS)
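The quoted sensitivities follow from the standard travel-time relation for an acoustic path set at an angle to the flow. The sketch below is our own generic illustration (not any particular instrument's processing) and assumes a nominal 45-degree path.

import math

def path_velocity(path_length_m, path_angle_deg, t_downstream_s, t_upstream_s):
    """Velocity along the flow from upstream/downstream acoustic travel times:
    V = L / (2 cos(theta)) * (1/t_down - 1/t_up)."""
    theta = math.radians(path_angle_deg)
    return path_length_m / (2.0 * math.cos(theta)) * (1.0 / t_downstream_s - 1.0 / t_upstream_s)

# Sensitivity to a 1-degree error in the assumed path angle at a nominal 45-degree crossing:
# dV/V = tan(theta) * d(theta), roughly the 2 percent quoted above.
theta = math.radians(45.0)
print(f"angle sensitivity: {math.tan(theta) * math.radians(1.0) * 100:.1f} percent per degree")

# A path-length error enters proportionally, so 1 meter of error on a 100-meter path is 1 percent.
print(f"length sensitivity: {1.0 / 100.0 * 100:.1f} percent")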
Low-flow characteristics of streams in Virginia
Hayes, Donald C.
1991-01-01
Streamflow data were collected and low-flow characteristics computed for 715 gaged sites in Virginia. Annual minimum average 7-consecutive-day flows range from 0 to 2,195 cubic feet per second for a 2-year recurrence interval and from 0 to 1,423 cubic feet per second for a 10-year recurrence interval. Drainage areas range from 0.17 to 7,320 square miles. Existing and discontinued gaged sites are separated into three types: long-term continuous-record sites, short-term continuous-record sites, and partial-record sites. Low-flow characteristics for long-term continuous-record sites are determined from frequency curves of annual minimum average 7-consecutive-day flows. Low-flow characteristics for short-term continuous-record sites are estimated by relating daily mean base-flow discharge values at a short-term site to concurrent daily mean discharge values at nearby long-term continuous-record sites having similar basin characteristics. Low-flow characteristics for partial-record sites are estimated by relating base-flow measurements to daily mean discharge values at long-term continuous-record sites. Information from the continuous-record sites and partial-record sites in Virginia is used to develop two techniques for estimating low-flow characteristics at ungaged sites. A flow-routing method is developed to estimate low-flow values at ungaged sites on gaged streams. Regional regression equations are developed for estimating low-flow values at ungaged sites on ungaged streams. The flow-routing method consists of transferring low-flow characteristics from a gaged site, either upstream or downstream, to a desired ungaged site. A simple drainage-area proration is used to transfer values when there are no major tributaries between the gaged and ungaged sites. Standard errors of estimate for 108 test sites are 19 percent of the mean for estimates of low-flow characteristics having a 2-year recurrence interval and 52 percent of the mean for estimates of low-flow characteristics having a 10-year recurrence interval. A more complex transfer method must be used when major tributaries enter the stream between the gaged and ungaged sites. Twenty-four stream networks are analyzed, and predictions are made for 84 sites. Standard errors of estimate are 15 percent of the mean for estimates of low-flow characteristics having a 2-year recurrence interval and 22 percent of the mean for estimates of low-flow characteristics having a 10-year recurrence interval. Regional regression equations were developed for estimating low-flow values at ungaged sites on ungaged streams. The State was divided into eight regions on the basis of physiography and geographic grouping of the residuals computed in regression analyses. Basin characteristics that were significant in the regression analysis were drainage area, rock type, and strip-mined area. Standard errors of prediction range from 60 to 139 percent for estimates of low-flow characteristics having a 2-year recurrence interval and from 90 to 172 percent for estimates of low-flow characteristics having a 10-year recurrence interval.
Machine Learned Replacement of N-Labels for Basecalled Sequences in DNA Barcoding.
Ma, Eddie Y T; Ratnasingham, Sujeevan; Kremer, Stefan C
2018-01-01
This study presents a machine learning method that increases the number of identified bases in Sanger sequencing. The system post-processes a KB-basecalled chromatogram and selects a recoverable subset of N-labels in the KB-called chromatogram to replace with basecalls (A, C, G, T). An N-label correction is defined given an additional read of the same sequence and a human-finished sequence. Corrections are added to the dataset when an alignment determines that the additional read and the human-finished sequence agree on the identity of the N-label; KB must also assign the replacement a sufficiently high quality value in the additional read. Corrections are only available during system training. In developing the system, nearly 850,000 N-labels were obtained from Barcode of Life Datasystems, the premier database of genetic markers called DNA Barcodes. Increasing the number of correct bases improves reference-sequence reliability, increases sequence identification accuracy, and assures analysis correctness. In keeping with barcoding standards, the system maintains a low error rate, applying corrections only when it estimates a low rate of error. Tested on these data, the automation selects and recovers 79 percent of N-labels from COI (the animal barcode), 80 percent from matK and rbcL (plant barcodes), and 58 percent from non-protein-coding sequences (across eukaryotes).
The US Navy Coastal Surge and Inundation Prediction System (CSIPS): Making Forecasts Easier
2013-02-14
Excerpts from briefing slides present baseline simulation results and wave sensitivity studies, comparing percent error in peak water level (and high-water-mark mean absolute percent error) at LAWMA (Amerada Pass), Freshwater Canal Locks, Calcasieu Pass, and Sabine Pass for different drag-coefficient (CD) formulations.
Oak soil-site relationships in northwestern West Virginia
L.R. Auchmoody; H. Clay Smith
1979-01-01
An oak soil-site productivity equation was developed for the well-drained, upland soils in the northwestern portion of West Virginia adjacent to the Ohio River. The equation uses five easily measured soil and topographic variables and average precipitation to predict site index. It accounts for 69 percent of the variation in oak site index and has a standard error of 4...
Flood Plain Topography Affects Establishment Success of Direct-Seeded Bottomland Oaks
Emile S. Gardiner; John D. Hodges; T. Conner Fristoe
2004-01-01
Five bottomland oak species were direct seeded along a topographical gradient in a flood plain to determine if environmental factors related to relative position in the flood plain influenced seedling establishment and survival. Two years after installation of the plantation, seedling establishment rates ranged from 12±1.6 (mean ± standard error) percent for overcup...
NASA Astrophysics Data System (ADS)
Nichols, Brandon S.; Rajaram, Narasimhan; Tunnell, James W.
2012-05-01
Diffuse optical spectroscopy (DOS) provides a powerful tool for fast and noninvasive disease diagnosis. The ability to leverage DOS to accurately quantify tissue optical parameters hinges on the model used to estimate light-tissue interaction. We describe the accuracy of a lookup table (LUT)-based inverse model for measuring optical properties under different conditions relevant to biological tissue. The LUT is a matrix of reflectance values acquired experimentally from calibration standards of varying scattering and absorption properties. Because it is based on experimental values, the LUT inherently accounts for system response and probe geometry. We tested our approach in tissue phantoms containing multiple absorbers, different sizes of scatterers, and varying oxygen saturation of hemoglobin. The LUT-based model was able to extract scattering and absorption properties under most conditions with errors of less than 5 percent. We demonstrate the validity of the lookup table over a range of source-detector separations from 0.25 to 1.48 mm. Finally, we describe the rapid fabrication of a lookup table using only six calibration standards. This optimized LUT was able to extract scattering and absorption properties with average RMS errors of 2.5 and 4 percent, respectively.
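To picture how a lookup-table inversion works, the sketch below builds a table over a grid of two optical-property parameters and inverts a measured reflectance spectrum by grid search. Everything here is a toy stand-in: the real table is measured from calibration phantoms (so it absorbs system response and probe geometry), and the wavelengths, forward model, and parameter ranges below are our own assumptions.

import numpy as np

wavelengths = np.array([500.0, 600.0, 700.0, 800.0])   # nm (assumed)

def mus_spectrum(amplitude):
    # Power-law reduced-scattering spectrum; the exponent is an assumed value.
    return amplitude * (wavelengths / 630.0) ** -1.2

def mua_spectrum(concentration):
    # Toy absorption band standing in for a chromophore such as hemoglobin.
    return concentration * np.exp(-((wavelengths - 560.0) ** 2) / (2.0 * 40.0 ** 2))

def toy_reflectance(mus, mua):
    # Stand-in forward model; in the paper the table entries are measured, not computed.
    return (mus / (1.0 + mus)) * np.exp(-3.0 * mua)

# Build the lookup table over a grid of the two underlying parameters.
amplitudes = np.linspace(5.0, 30.0, 51)
concentrations = np.linspace(0.1, 5.0, 50)
lut = np.array([[toy_reflectance(mus_spectrum(a), mua_spectrum(c)) for c in concentrations]
                for a in amplitudes])

def invert(measured_spectrum):
    """Grid search: return the parameter pair whose tabulated spectrum best matches the measurement."""
    errors = ((lut - measured_spectrum) ** 2).sum(axis=-1)
    i, j = np.unravel_index(np.argmin(errors), errors.shape)
    return amplitudes[i], concentrations[j]

measured = toy_reflectance(mus_spectrum(18.0), mua_spectrum(1.2))
print(invert(measured))   # approximately (18.0, 1.2)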
DOE Office of Scientific and Technical Information (OSTI.GOV)
Bishop, L.; Hill, W.J.
A method is proposed to estimate the effect of long-term variations in total ozone on the error incurred in determining a trend in total ozone due to man-made effects. When this method is applied to data from Arosa, Switzerland over the years 1932-1980, a component of the standard error of the trend estimate equal to 0.6 percent per decade is obtained. If this estimate of long-term trend variability at Arosa is not too different from global long-term trend variability, then the threshold (±2 standard errors) for detecting an ozone trend in the 1970's that is outside of what could be expected from natural variation alone, and hence be man-made, would range from 1.35% (Reinsel et al, 1981) to 1.8%. The latter value is obtained by combining the Reinsel et al result with the result here, assuming that the error variations that both studies measure are independent and additive. Estimates for long-term trend variation over other time periods are also derived. Simulations that measure the precision of the estimate of long-term variability are reported.
NASA Astrophysics Data System (ADS)
Lohrmann, Carol A.
1990-03-01
Interoperability of commercial Land Mobile Radios (LMR) and the military's tactical LMR is highly desirable if the U.S. government is to respond effectively in a national emergency or in a joint military operation. This ability to talk securely and immediately across agency and military service boundaries is often overlooked. One way to ensure interoperability is to develop and promote Federal communication standards (FS). This thesis surveys one area of the proposed FS 1024 for LMRs; namely, the error detection and correction (EDAC) of the message indicator (MI) bits used for cryptographic synchronization. Several EDAC codes are examined (Hamming, Quadratic Residue, hard decision Golay and soft decision Golay), tested on three FORTRAN programmed channel simulations (INMARSAT, Gaussian and constant burst width), compared and analyzed (based on bit error rates and percent of error-free super-frame runs) so that a best code can be recommended. Out of the four codes under study, the soft decision Golay code (24,12) is evaluated to be the best. This finding is based on the code's ability to detect and correct errors as well as the relative ease of implementation of the algorithm.
Bodkin, James L.; Ballachey, Brenda E.; Esslinger, George G.
2011-01-01
Sea otters in western Prince William Sound (WPWS) and elsewhere in the Gulf of Alaska suffered widespread mortality as a result of oiling following the 1989 T/V Exxon Valdez oil spill. Following the spill, extensive efforts have been directed toward identifying and understanding long-term consequences of the spill and the process of recovery. We conducted annual aerial surveys of sea otter abundance from 1993 to 2009 (except for 2001 and 2006) in WPWS. We observed an increasing trend in population abundance at the scale of WPWS through 2000 at an average annual rate of 4 percent; however, at northern Knight Island, where oiling was heaviest and sea otter mortality highest, no increase in abundance was evident by 2000. We continued to see a significant increase in abundance at the scale of WPWS between 2001 and 2009, with an average annual rate of increase from 1993 to 2009 of 2.6 percent. We estimated the 2009 population size of WPWS to be 3,958 animals (standard error = 653), nearly 2,000 animals more than the first post-spill estimate in 1993. Surveys since 2003 also have identified a significant increasing trend at the heavily oiled site in northern Knight Island, averaging about 25 percent annually and resulting in a 2009 estimated population size of 116 animals (standard error = 19). Although the 2009 estimate for northern Knight Island remains about 30 percent less than the pre-spill estimate of 165 animals, we interpret this trend as strong evidence of a trajectory toward recovery of spill-affected sea otter populations in WPWS.
Gaonkar, Narayan; Vaidya, R G
2016-05-01
A simple method to estimate the density of a biodiesel blend as a simultaneous function of temperature and volume percent of biodiesel is proposed. Employing Kay's mixing rule, we developed a model and investigated theoretically the density of different vegetable oil biodiesel blends as a simultaneous function of temperature and volume percent of biodiesel. A key advantage of the proposed model is that it requires only a single set of density values for the components of the biodiesel blend at any two different temperatures. We notice that the density of the blend decreases linearly with increasing temperature and increases with increasing volume percent of biodiesel. The low values of the standard estimate of error (SEE = 0.0003-0.0022) and absolute average deviation (AAD = 0.03-0.15 %) obtained using the proposed model indicate its predictive capability. The predicted values are in good agreement with recently available experimental data.
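A minimal sketch of the mixing-rule idea follows (our own illustration; the component densities, temperatures, and blend fraction below are placeholders, not values from the paper): each component's density is taken as linear in temperature from two reference measurements, and the blend density follows from volume-fraction weighting.

def linear_density(temp_c, ref1, ref2):
    """Density of one component, linear in temperature, from two (temperature, density) references."""
    (t1, d1), (t2, d2) = ref1, ref2
    return d1 + (d2 - d1) * (temp_c - t1) / (t2 - t1)

def blend_density(temp_c, vol_fraction_biodiesel, biodiesel_refs, diesel_refs):
    """Kay's mixing rule: volume-fraction-weighted sum of the component densities."""
    rho_b = linear_density(temp_c, *biodiesel_refs)
    rho_d = linear_density(temp_c, *diesel_refs)
    return vol_fraction_biodiesel * rho_b + (1.0 - vol_fraction_biodiesel) * rho_d

# Placeholder reference densities in g/cm^3 at 15 and 40 degrees C.
biodiesel_refs = ((15.0, 0.880), (40.0, 0.862))
diesel_refs = ((15.0, 0.835), (40.0, 0.818))
print(blend_density(25.0, 0.20, biodiesel_refs, diesel_refs))   # B20 blend at 25 degrees C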
An optimized network for phosphorus load monitoring for Lake Okeechobee, Florida
Gain, W.S.
1997-01-01
Phosphorus load data were evaluated for Lake Okeechobee, Florida, for water years 1982 through 1991. Standard errors for load estimates were computed from available phosphorus concentration and daily discharge data. Components of error were associated with uncertainty in concentration and discharge data and were calculated for existing conditions and for 6 alternative load-monitoring scenarios for each of 48 distinct inflows. Benefit-cost ratios were computed for each alternative monitoring scenario at each site by dividing estimated reductions in load uncertainty by the 5-year average costs of each scenario in 1992 dollars. Absolute and marginal benefit-cost ratios were compared in an iterative optimization scheme to determine the most cost-effective combination of discharge and concentration monitoring scenarios for the lake. If the current (1992) discharge-monitoring network around the lake is maintained, the water-quality sampling at each inflow site twice each year is continued, and the nature of loading remains the same, the standard error of computed mean-annual load is estimated at about 98 metric tons per year compared to an absolute loading rate (inflows and outflows) of 530 metric tons per year. This produces a relative uncertainty of nearly 20 percent. The standard error in load can be reduced to about 20 metric tons per year (4 percent) by adopting an optimized set of monitoring alternatives at a cost of an additional $200,000 per year. The final optimized network prescribes changes to improve both concentration and discharge monitoring. These changes include the addition of intensive sampling with automatic samplers at 11 sites, the initiation of event-based sampling by observers at another 5 sites, the continuation of periodic sampling 12 times per year at 1 site, the installation of acoustic velocity meters to improve discharge gaging at 9 sites, and the improvement of a discharge rating at 1 site.
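The iterative benefit-cost optimization can be pictured as a greedy upgrade loop: starting from the cheapest monitoring scenario at every inflow, repeatedly apply whichever single upgrade buys the largest reduction in load uncertainty per added dollar. The sketch below is a simplified illustration of that idea with invented numbers, not the report's actual algorithm, scenarios, or costs.

def optimize_network(sites, total_budget):
    """Greedy marginal benefit-cost selection.
    sites maps a site name to a list of (annual_cost, load_standard_error) scenarios,
    ordered from least to most intensive monitoring."""
    chosen = {name: 0 for name in sites}                        # start at the cheapest scenario
    spent = sum(scenarios[0][0] for scenarios in sites.values())
    while True:
        best = None
        for name, scenarios in sites.items():
            i = chosen[name]
            if i + 1 < len(scenarios):
                extra_cost = scenarios[i + 1][0] - scenarios[i][0]
                benefit = scenarios[i][1] - scenarios[i + 1][1]  # reduction in standard error
                if extra_cost > 0 and spent + extra_cost <= total_budget:
                    ratio = benefit / extra_cost
                    if best is None or ratio > best[0]:
                        best = (ratio, name, extra_cost)
        if best is None:
            return chosen, spent
        _, name, extra_cost = best
        chosen[name] += 1
        spent += extra_cost

# Hypothetical example: two inflow sites, each with three monitoring scenarios.
sites = {
    "inflow A": [(5000.0, 40.0), (12000.0, 22.0), (30000.0, 12.0)],
    "inflow B": [(4000.0, 25.0), (9000.0, 18.0), (20000.0, 10.0)],
}
print(optimize_network(sites, total_budget=40000.0))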
Technique for simulating peak-flow hydrographs in Maryland
Dillow, Jonathan J.A.
1998-01-01
The efficient design and management of many bridges, culverts, embankments, and flood-protection structures may require the estimation of time-of-inundation and (or) storage of floodwater relating to such structures. These estimates can be made on the basis of information derived from the peak-flow hydrograph. Average peak-flow hydrographs corresponding to a peak discharge of specific recurrence interval can be simulated for drainage basins having drainage areas less than 500 square miles in Maryland, using a direct technique of known accuracy. The technique uses dimensionless hydrographs in conjunction with estimates of basin lagtime and instantaneous peak flow. Ordinary least-squares regression analysis was used to develop an equation for estimating basin lagtime in Maryland. Drainage area, main channel slope, forest cover, and impervious area were determined to be the significant explanatory variables necessary to estimate average basin lagtime at the 95-percent confidence interval. Qualitative variables included in the equation adequately correct for geographic bias across the State. The average standard error of prediction associated with the equation is approximated as plus or minus (+/-) 37.6 percent. Volume correction factors may be applied to the basin lagtime on the basis of a comparison between actual and estimated hydrograph volumes prior to hydrograph simulation. Three dimensionless hydrographs were developed and tested using data collected during 278 significant rainfall-runoff events at 81 stream-gaging stations distributed throughout Maryland and Delaware. The data represent a range of drainage area sizes and basin conditions. The technique was verified by applying it to the simulation of 20 peak-flow events and comparing actual and simulated hydrograph widths at 50 and 75 percent of the observed peak-flow levels. The events chosen are considered extreme in that the average recurrence interval of the selected peak flows is 130 years. The average standard errors of prediction were +/- 61 and +/- 56 percent at the 50 and 75 percent of peak-flow hydrograph widths, respectively.
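The simulation step itself amounts to rescaling a dimensionless hydrograph by the estimated basin lagtime and instantaneous peak flow. The ordinates and input values below are invented for illustration and are not the report's dimensionless hydrographs or regression results.

def simulate_hydrograph(dimensionless, lagtime_hours, peak_cfs):
    """Scale a dimensionless hydrograph (time/lagtime, discharge/peak) to real units."""
    return [(t_ratio * lagtime_hours, q_ratio * peak_cfs) for t_ratio, q_ratio in dimensionless]

# Invented dimensionless ordinates (time as a fraction of lagtime, flow as a fraction of peak).
dimensionless = [(0.0, 0.0), (0.5, 0.3), (1.0, 1.0), (1.5, 0.7), (2.0, 0.4), (3.0, 0.1)]

# Basin lagtime from the regression equation and peak flow from a flood-frequency estimate
# (both values hypothetical here).
for time_hr, flow_cfs in simulate_hydrograph(dimensionless, lagtime_hours=6.0, peak_cfs=2500.0):
    print(f"{time_hr:5.1f} h  {flow_cfs:7.0f} ft3/s")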
Fossum, Kenneth D.; O'Day, Christie M.; Wilson, Barbara J.; Monical, Jim E.
2001-01-01
Stormwater and streamflow in Maricopa County were monitored to (1) describe the physical, chemical, and toxicity characteristics of stormwater from areas having different land uses, (2) describe the physical, chemical, and toxicity characteristics of streamflow from areas that receive urban stormwater, and (3) estimate constituent loads in stormwater. Urban stormwater and streamflow had similar ranges in most constituent concentrations. The mean concentration of dissolved solids in urban stormwater was lower than in streamflow from the Salt River and Indian Bend Wash. Urban stormwater, however, had a greater chemical oxygen demand and higher concentrations of most nutrients. Mean seasonal loads and mean annual loads of 11 constituents and volumes of runoff were estimated for municipalities in the metropolitan Phoenix area, Arizona, by adjusting regional regression equations of loads. This adjustment procedure uses the original regional regression equation and additional explanatory variables that were not included in the original equation. The adjusted equations had standard errors that ranged from 161 to 196 percent. The large standard errors of the prediction result from the large variability of the constituent concentration data used in the regression analysis. Adjustment procedures produced unsatisfactory results for nine of the regressions: suspended solids, dissolved solids, total phosphorus, dissolved phosphorus, total recoverable cadmium, total recoverable copper, total recoverable lead, total recoverable zinc, and storm runoff. These equations had no consistent direction of bias and no other additional explanatory variables correlated with the observed loads. A stepwise-multiple regression or a three-variable regression (total storm rainfall, drainage area, and impervious area) and local data were used to develop local regression equations for these nine constituents. These equations had standard errors from 15 to 183 percent.
NASA Technical Reports Server (NTRS)
Long, E. R., Jr.
1986-01-01
Effects of specimen preparation on measured values of an acrylic's electromagnetic properties at X-band microwave frequencies, TE sub 1,0 mode, utilizing an automatic network analyzer have been studied. For 1 percent or less error, a gap between the specimen edge and the 0.901-in. wall of the specimen holder was the most significant parameter. The gap had to be less than 0.002 in. The thickness variation and alignment errors in the direction parallel to the 0.901-in. wall were equally second most significant and had to be less than 1 degree. Errors in the measurement of the thickness were third most significant. They had to be less than 3 percent. The following parameters caused errors of 1 percent or less: ratios of specimen-holder thicknesses of more than 15 percent, gaps between the specimen edge and the 0.401-in. wall less than 0.045 in., position errors less than 15 percent, surface roughness, thickness variation in the direction parallel to the 0.401-in. wall less than 35 percent, and specimen alignment in the direction parallel to the 0.401-in. wall less than 5 degrees.
Technique for estimating depth of floods in Tennessee
Gamble, C.R.
1983-01-01
Estimates of flood depths are needed for design of roadways across flood plains and for other types of construction along streams. Equations for estimating flood depths in Tennessee were derived using data for 150 gaging stations. The equations are based on drainage basin size and can be used to estimate depths of the 10-year and 100-year floods for four hydrologic areas. A method also was developed for estimating depth of floods having recurrence intervals between 10 and 100 years. Standard errors range from 22 to 30 percent for the 10-year depth equations and from 23 to 30 percent for the 100-year depth equations. (USGS)
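Regressions of this kind typically take a power-law form in drainage area, and a depth for an intermediate recurrence interval can be interpolated between the 10- and 100-year values. The sketch below is our own illustration: the coefficients are placeholders, and the log-linear interpolation is one plausible scheme, not necessarily the method adopted in the report.

import math

def flood_depth(drainage_area_sqmi, coefficient, exponent):
    """Power-law depth regression, depth = a * A^b (coefficients are hypothetical)."""
    return coefficient * drainage_area_sqmi ** exponent

def interpolate_depth(depth_10yr, depth_100yr, recurrence_years):
    """Log-linear interpolation between the 10- and 100-year depths."""
    fraction = (math.log10(recurrence_years) - 1.0) / (2.0 - 1.0)   # log10(10) = 1, log10(100) = 2
    return depth_10yr + fraction * (depth_100yr - depth_10yr)

d10 = flood_depth(50.0, coefficient=2.0, exponent=0.3)    # hypothetical 10-year depth, feet
d100 = flood_depth(50.0, coefficient=3.1, exponent=0.3)   # hypothetical 100-year depth, feet
print(d10, d100, interpolate_depth(d10, d100, 50.0))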
Evaluation of the cost effectiveness of the 1983 stream-gaging program in Kansas
Medina, K.D.; Geiger, C.O.
1984-01-01
The results of an evaluation of the cost effectiveness of the 1983 stream-gaging program in Kansas are documented. Data uses and funding sources were identified for the 140 complete-record streamflow-gaging stations operated in Kansas during 1983 with a budget of $793,780. As a result of the evaluation of the needs and uses of data from the stream-gaging program, it was found that the 140 gaging stations were needed to meet these data requirements. The average standard error of estimation of streamflow records was 20.8 percent, assuming the 1983 budget and operating schedule of 6-week interval visitations and based on 85 of the 140 stations. It was shown that this overall level of accuracy could be improved to 18.9 percent by altering the 1983 schedule of station visitations. A minimum budget of $760,000, with a corresponding average error of estimation of 24.9 percent, is required to operate the 1983 program. None of the stations investigated were suitable for the application of alternative methods for simulating discharge records. Improved instrumentation can have a very positive impact on streamflow uncertainties by decreasing lost record. (USGS)
Nimbus-7 Total Ozone Mapping Spectrometer (TOMS) Data Products User's Guide
NASA Technical Reports Server (NTRS)
McPeters, Richard D.; Bhartia, P. K.; Krueger, Arlin J.; Herman, Jay R.; Schlesinger, Barry M.; Wellemeyer, Charles G.; Seftor, Colin J.; Jaross, Glen; Taylor, Steven L.; Swissler, Tom;
1996-01-01
Two data products from the Total Ozone Mapping Spectrometer (TOMS) onboard Nimbus-7 have been archived at the Distributed Active Archive Center, in the form of Hierarchical Data Format files. The instrument measures backscattered Earth radiance and incoming solar irradiance; their ratio is used in ozone retrievals. Changes in the instrument sensitivity are monitored by a spectral discrimination technique using measurements of the intrinsically stable wavelength dependence of derived surface reflectivity. The algorithm to retrieve total column ozone compares measured Earth radiances at sets of three wavelengths with radiances calculated for different total ozone values, solar zenith angles, and optical paths. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard deviation random error is 2 percent, and drift is less than 1.0 percent per decade. The Level-2 product contains the measured radiances, the derived total ozone amount, and reflectivity information for each scan position. The Level-3 product contains daily total ozone amount and reflectivity in a 1-degree latitude by 1.25-degree longitude grid. The Level-3 product also is available on CD-ROM. Detailed descriptions of both HDF data files and the CD-ROM product are provided.
Nimbus-7 Total Ozone Mapping Spectrometer (TOMS) data products user's guide
NASA Technical Reports Server (NTRS)
Mcpeters, Richard D.; Krueger, Arlin J.; Bhartia, P. K.; Herman, Jay R.; Oaks, Arnold; Ahmad, Ziuddin; Cebula, Richard P.; Schlesinger, Barry M.; Swissler, Tom; Taylor, Steven L.
1993-01-01
Two tape products from the Total Ozone Mapping Spectrometer (TOMS) aboard the Nimbus-7 have been archived at the National Space Science Data Center. The instrument measures backscattered Earth radiance and incoming solar irradiance; their ratio -- the albedo -- is used in ozone retrievals. In-flight measurements are used to monitor changes in the instrument sensitivity. The algorithm to retrieve total column ozone compares the observed ratios of albedos at pairs of wavelengths with pair ratios calculated for different ozone values, solar zenith angles, and optical paths. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard-deviation random error is 2 percent, and the drift is +/- 1.5 percent over 14.5 years. The High Density TOMS (HDTOMS) tape contains the measured albedos, the derived total ozone amount, reflectivity, and cloud-height information for each scan position. It also contains an index of SO2 contamination for each position. The Gridded TOMS (GRIDTOMS) tape contains daily total ozone and reflectivity in roughly equal area grids (110 km in latitude by about 100-150 km in longitude). Detailed descriptions of the tape structure and record formats are provided.
Region of influence regression for estimating the 50-year flood at ungaged sites
Tasker, Gary D.; Hodge, S.A.; Barks, C.S.
1996-01-01
Five methods of developing regional regression models to estimate flood characteristics at ungaged sites in Arkansas are examined. The methods differ in the manner in which the State is divided into subregions. Each successive method (A to E) is computationally more complex than the previous method. Method A makes no subdivision. Methods B and C define two and four geographic subregions, respectively. Method D uses cluster/discriminant analysis to define subregions on the basis of similarities in watershed characteristics. Method E, the new region of influence method, defines a unique subregion for each ungaged site. Split-sample results indicate that, in terms of root-mean-square error, method E (38 percent error) is best. Methods C and D (42 and 41 percent error) were in a virtual tie for second, and methods B (44 percent error) and A (49 percent error) were fourth and fifth best.
Linhart, S. Mike; Nania, Jon F.; Sanders, Curtis L.; Archfield, Stacey A.
2012-01-01
The U.S. Geological Survey (USGS) maintains approximately 148 real-time streamgages in Iowa for which daily mean streamflow information is available, but daily mean streamflow data commonly are needed at locations where no streamgages are present. Therefore, the USGS conducted a study as part of a larger project in cooperation with the Iowa Department of Natural Resources to develop methods to estimate daily mean streamflow at locations in ungaged watersheds in Iowa by using two regression-based statistical methods. The regression equations for the statistical methods were developed from historical daily mean streamflow and basin characteristics from streamgages within the study area, which includes the entire State of Iowa and adjacent areas within a 50-mile buffer of Iowa in neighboring states. Results of this study can be used with other techniques to determine the best method for application in Iowa and can be used to produce a Web-based geographic information system tool to compute streamflow estimates automatically. The Flow Anywhere statistical method is a variation of the drainage-area-ratio method, which transfers same-day streamflow information from a reference streamgage to another location by using the daily mean streamflow at the reference streamgage and the drainage-area ratio of the two locations. The Flow Anywhere method modifies the drainage-area-ratio method in order to regionalize the equations for Iowa and determine the best reference streamgage from which to transfer same-day streamflow information to an ungaged location. Data used for the Flow Anywhere method were retrieved for 123 continuous-record streamgages located in Iowa and within a 50-mile buffer of Iowa. The final regression equations were computed by using either left-censored regression techniques with a low limit threshold set at 0.1 cubic feet per second (ft3/s) and the daily mean streamflow for the 15th day of every other month, or by using an ordinary-least-squares multiple linear regression method and the daily mean streamflow for the 15th day of every other month. The Flow Duration Curve Transfer method was used to estimate unregulated daily mean streamflow from the physical and climatic characteristics of gaged basins. For the Flow Duration Curve Transfer method, daily mean streamflow quantiles at the ungaged site were estimated with the parameter-based regression model, which results in a continuous daily flow-duration curve (the relation between exceedance probability and streamflow for each day of observed streamflow) at the ungaged site. By the use of a reference streamgage, the Flow Duration Curve Transfer is converted to a time series. Data used in the Flow Duration Curve Transfer method were retrieved for 113 continuous-record streamgages in Iowa and within a 50-mile buffer of Iowa. The final statewide regression equations for Iowa were computed by using a weighted-least-squares multiple linear regression method and were computed for the 0.01-, 0.05-, 0.10-, 0.15-, 0.20-, 0.30-, 0.40-, 0.50-, 0.60-, 0.70-, 0.80-, 0.85-, 0.90-, and 0.95-exceedance probability statistics determined from the daily mean streamflow with a reporting limit set at 0.1 ft3/s. The final statewide regression equation for Iowa computed by using left-censored regression techniques was computed for the 0.99-exceedance probability statistic determined from the daily mean streamflow with a low limit threshold and a reporting limit set at 0.1 ft3/s. 
For the Flow Anywhere method, results of the validation study conducted by using six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 1,016 to 138 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 1,690 to 237 ft3/s. Values of the percent root-mean-square error ranged from 115 percent to 26.2 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 13.0 to 5.3 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.80 to 0.40. Percent-bias values ranged from 25.4 to 4.0 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.35. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.86 to 0.56. For the streamgage with the best agreement between observed and estimated streamflow, higher streamflows appear to be underestimated. For the streamgage with the worst agreement between observed and estimated streamflow, low flows appear to be overestimated whereas higher flows seem to be underestimated. Estimated cumulative streamflows for the period October 1, 2004, to September 30, 2009, are underestimated by -25.8 and -7.4 percent for the closest and poorest comparisons, respectively. For the Flow Duration Curve Transfer method, results of the validation study conducted by using the same six streamgages show that differences between the root-mean-square error and the mean absolute error ranged from 437 to 93.9 ft3/s, with the larger value signifying a greater occurrence of outliers between observed and estimated streamflows. Root-mean-square-error values ranged from 906 to 169 ft3/s. Values of the percent root-mean-square-error ranged from 67.0 to 25.6 percent. The logarithm (base 10) streamflow percent root-mean-square error ranged from 12.5 to 4.4 percent. Root-mean-square-error observations standard-deviation-ratio values ranged from 0.79 to 0.40. Percent-bias values ranged from 22.7 to 0.94 percent. Untransformed streamflow Nash-Sutcliffe efficiency values ranged from 0.84 to 0.38. The logarithm (base 10) streamflow Nash-Sutcliffe efficiency values ranged from 0.89 to 0.48. For the streamgage with the closest agreement between observed and estimated streamflow, there is relatively good agreement between observed and estimated streamflows. For the streamgage with the poorest agreement between observed and estimated streamflow, streamflows appear to be substantially underestimated for much of the time period. Estimated cumulative streamflow for the period October 1, 2004, to September 30, 2009, are underestimated by -9.3 and -22.7 percent for the closest and poorest comparisons, respectively.
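As a pointer to how such estimates and goodness-of-fit statistics are computed, the sketch below shows a generic drainage-area-ratio transfer together with the root-mean-square error, Nash-Sutcliffe efficiency, and percent bias metrics reported above. The flows, areas, and fixed exponent are made up; the Flow Anywhere method itself replaces the simple ratio with regionalized regression coefficients.

import numpy as np

def drainage_area_ratio_estimate(q_reference, area_ungaged, area_reference, exponent=1.0):
    """Transfer same-day streamflow from a reference streamgage to another site by drainage-area ratio."""
    return q_reference * (area_ungaged / area_reference) ** exponent

def rmse(observed, estimated):
    observed, estimated = np.asarray(observed), np.asarray(estimated)
    return float(np.sqrt(np.mean((observed - estimated) ** 2)))

def nash_sutcliffe(observed, estimated):
    observed, estimated = np.asarray(observed), np.asarray(estimated)
    return 1.0 - np.sum((observed - estimated) ** 2) / np.sum((observed - observed.mean()) ** 2)

def percent_bias(observed, estimated):
    observed, estimated = np.asarray(observed), np.asarray(estimated)
    return 100.0 * np.sum(estimated - observed) / np.sum(observed)

# Made-up daily flows (ft3/s) at a reference gage, transferred to a site with half the drainage area.
q_ref = np.array([120.0, 340.0, 95.0, 60.0, 400.0])
q_est = drainage_area_ratio_estimate(q_ref, area_ungaged=250.0, area_reference=500.0)
q_obs = np.array([55.0, 180.0, 50.0, 33.0, 190.0])
print(rmse(q_obs, q_est), nash_sutcliffe(q_obs, q_est), percent_bias(q_obs, q_est))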
Code of Federal Regulations, 2014 CFR
2014-07-01
§ 363.138 Is Treasury liable for the purchase of a zero-percent certificate of indebtedness that is made in error? We are not liable for any deposits of...
Standardising analysis of carbon monoxide rebreathing for application in anti-doping.
Alexander, Anthony C; Garvican, Laura A; Burge, Caroline M; Clark, Sally A; Plowman, James S; Gore, Christopher J
2011-03-01
Determination of total haemoglobin mass (Hbmass) via carbon monoxide (CO) rebreathing depends critically on repeatable measurement of percent carboxyhaemoglobin (%HbCO) in blood with a hemoximeter. The main aim of this study was to determine, for an OSM3 hemoximeter, the number of replicate measures as well as the theoretical change in percent carboxyhaemoglobin required to yield a random error of analysis (Analyser Error) of ≤1%. Before and after inhalation of CO, nine participants provided a total of 576 blood samples that were each analysed five times for percent carboxyhaemoglobin on one of three OSM3 hemoximeters, with approximately one-third of blood samples analysed on each OSM3. The Analyser Error was calculated for the first two (duplicate), first three (triplicate), and first four (quadruplicate) measures on each OSM3, as well as for all five measures (quintuplicate). Two methods of CO rebreathing, a 2-min and a 10-min procedure, were evaluated for Analyser Error. For duplicate analyses of blood, the Analyser Error for the 2-min method was 3.7, 4.0, and 5.0% for the three OSM3s when the percent carboxyhaemoglobin increased by two above resting values. With quintuplicate analyses of blood, the corresponding errors reduced to 0.8, 0.9, and 1.0% for the 2-min method when the percent carboxyhaemoglobin increased by 5.5 above resting values. In summary, to minimise the Analyser Error to approximately ≤1% on an OSM3 hemoximeter, researchers should make ≥5 replicates of percent carboxyhaemoglobin, and the volume of CO administered should be sufficient to increase percent carboxyhaemoglobin by ≥5.5 above baseline levels. Crown Copyright © 2010. Published by Elsevier Ltd. All rights reserved.
Satellite inventory of Minnesota forest resources
NASA Technical Reports Server (NTRS)
Bauer, Marvin E.; Burk, Thomas E.; Ek, Alan R.; Coppin, Pol R.; Lime, Stephen D.; Walsh, Terese A.; Walters, David K.; Befort, William; Heinzen, David F.
1993-01-01
The methods and results of using Landsat Thematic Mapper (TM) data to classify and estimate the acreage of forest covertypes in northeastern Minnesota are described. Portions of six TM scenes covering five counties with a total area of 14,679 square miles were classified into six forest and five nonforest classes. The approach involved the integration of cluster sampling, image processing, and estimation. Using cluster sampling, 343 plots, each 88 acres in size, were photo interpreted and field mapped as a source of reference data for classifier training and calibration of the TM data classifications. Classification accuracies of up to 75 percent were achieved; most misclassification was between similar or related classes. An inverse method of calibration, based on the error rates obtained from the classifications of the cluster plots, was used to adjust the classification class proportions for classification errors. The resulting area estimates for total forest land in the five-county area were within 3 percent of the estimate made independently by the USDA Forest Service. Area estimates for conifer and hardwood forest types were within 0.8 and 6.0 percent respectively, of the Forest Service estimates. A trial of a second method of estimating the same classes as the Forest Service resulted in standard errors of 0.002 to 0.015. A study of the use of multidate TM data for change detection showed that forest canopy depletion, canopy increment, and no change could be identified with greater than 90 percent accuracy. The project results have been the basis for the Minnesota Department of Natural Resources and the Forest Service to define and begin to implement an annual system of forest inventory which utilizes Landsat TM data to detect changes in forest cover.
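The inverse calibration step can be illustrated with a small error-rate (confusion) matrix: the map-derived class proportions are adjusted by solving a linear system built from the classwise error rates estimated on the reference plots. The matrix and proportions below are invented for a three-class example, and the real adjustment also propagates sampling error into the standard errors quoted above.

import numpy as np

# Rows: true class, columns: classified-as. Entries are the probabilities, estimated from the
# photo-interpreted cluster plots, that a pixel of a given true class is assigned to each map class.
error_rates = np.array([
    [0.85, 0.10, 0.05],   # conifer
    [0.12, 0.80, 0.08],   # hardwood
    [0.05, 0.10, 0.85],   # nonforest
])

# Proportions of the map area assigned to each class by the classifier.
classified_proportions = np.array([0.30, 0.45, 0.25])

# Inverse calibration: solve error_rates^T * p_true = p_classified for the adjusted proportions.
true_proportions = np.linalg.solve(error_rates.T, classified_proportions)
print(true_proportions, true_proportions.sum())   # adjusted proportions, summing to about 1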
UNDERSTANDING OR NURSES' REACTIONS TO ERRORS AND USING THIS UNDERSTANDING TO IMPROVE PATIENT SAFETY.
Taifoori, Ladan; Valiee, Sina
2015-09-01
The operating room can be home to many different types of nursing errors due to the invasiveness of OR procedures. Nurses' reactions toward errors can be a key factor in patient safety. This article is based on a study, conducted at Kurdistan University of Medical Sciences in Sanandaj, Iran, in 2014, with the aim of investigating nurses' reactions toward nursing errors and the various contributing and resulting factors. The goal of the study was to determine how OR nurses reacted to nursing errors, with the intent that this information be used to improve patient safety. Research was conducted as a cross-sectional descriptive study. The participants were all nurses employed in the operating rooms of the teaching hospitals of Kurdistan University of Medical Sciences, selected by a consensus method (170 persons). The information was gathered through questionnaires that focused on demographic information, error definition, reasons for error occurrence, and emotional reactions toward the errors. 153 questionnaires were completed and analyzed by SPSS software version 16.0. "Not following sterile technique" (82.4 percent) was the most reported nursing error, "tiredness" (92.8 percent) was the most reported reason for error occurrence, "being upset at having harmed the patient" (85.6 percent) was the most reported emotional reaction after error occurrence, "decision making for a better approach to tasks the next time" (97.7 percent) was the most common goal, and "paying more attention to details" (98 percent) was the most reported planned strategy for future improved outcomes. While healthcare facilities are focused on planning for the prevention and elimination of errors, it was shown that nurses can also benefit from support after error occurrence. Their reactions and coping strategies need guidance and, with both individual and organizational support, can be a factor in improving patient safety.
NASA Technical Reports Server (NTRS)
Bahcall, J. N.; Pinsonneault, M. H.
1992-01-01
We calculate improved standard solar models using the new Livermore (OPAL) opacity tables, an accurate (exportable) nuclear energy generation routine which takes account of recent measurements and analyses, and the recent Anders-Grevesse determination of heavy element abundances. We also evaluate directly the effect of the diffusion of helium with respect to hydrogen on the calculated neutrino fluxes, on the primordial solar helium abundance, and on the depth of the convective zone. Helium diffusion increases the predicted event rates by about 0.8 SNU, or 11 percent of the total rate, in the chlorine solar neutrino experiment, by about 3.5 SNU, or 3 percent, in the gallium solar neutrino experiments, and by about 12 percent in the Kamiokande and SNO solar neutrino experiments. The best standard solar model including helium diffusion and the most accurate nuclear parameters, element abundances, and radiative opacity predicts a value of 8.0 SNU +/- 3.0 SNU for the C1-37 experiment and 132 +21/-17 SNU for the Ga - 71 experiment, where the uncertainties include 3 sigma errors for all measured input parameters.
Computation of backwater and discharge at width constrictions of heavily vegetated flood plains
Schneider, V.R.; Board, J.W.; Colson, B.E.; Lee, F.N.; Druffel, Leroy
1977-01-01
The U.S. Geological Survey cooperated with the Federal Highway Administration and the State Highway Departments of Mississippi, Alabama, and Louisiana to develop a proposed method for computing backwater and discharge at width constrictions of heavily vegetated flood plains. Data were collected at 20 single-opening sites for 31 floods. Flood-plain width varied from 4 to 14 times the bridge-opening width. The recurrence intervals of peak discharge ranged from a 2-year flood to greater than a 100-year flood, with a median interval of 6 years. Measured backwater ranged from 0.39 to 3.16 feet. Backwater computed by the present standard Geological Survey method averaged 29 percent less than the measured, and that computed by the currently used Federal Highway Administration method averaged 47 percent less than the measured. Discharge computed by the Survey method averaged 21 percent more than the measured. Analysis of data showed that the flood-plain widths and the Manning's roughness coefficients are larger than those used to develop the standard methods. A method to more accurately compute backwater and discharge was developed. The difference between the contracted and natural water-surface profiles computed using standard step-backwater procedures is defined as backwater. The energy-loss terms in the step-backwater procedure are computed as the product of the geometric mean of the energy slopes and the flow distance in the reach, which was derived from potential flow theory. The mean error was 1 percent when using the proposed method for computing backwater and 3 percent for computing discharge. (Woodard-USGS)
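A minimal sketch of the energy-loss term described above uses the geometric mean of the friction (energy) slopes at the two ends of a reach. The conveyance-based slope and the example numbers are generic textbook forms, not the report's calibrated procedure.

import math

def friction_slope(discharge_cfs, conveyance):
    """Energy (friction) slope from Manning's equation written as S = (Q / K)^2."""
    return (discharge_cfs / conveyance) ** 2

def reach_energy_loss(discharge_cfs, conveyance_upstream, conveyance_downstream, flow_distance_ft):
    """Loss over the reach: flow distance times the geometric mean of the end-section slopes."""
    s1 = friction_slope(discharge_cfs, conveyance_upstream)
    s2 = friction_slope(discharge_cfs, conveyance_downstream)
    return flow_distance_ft * math.sqrt(s1 * s2)

# Hypothetical numbers: 20,000 ft3/s through a reach 800 ft long.
print(reach_energy_loss(20000.0, conveyance_upstream=2.5e6, conveyance_downstream=1.8e6,
                        flow_distance_ft=800.0))   # loss in feet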
Evaluation of quality of commercial pedometers.
Tudor-Locke, Catrine; Sisson, Susan B; Lee, Sarah M; Craig, Cora L; Plotnikoff, Ronald C; Bauman, Adrian
2006-01-01
The purpose of this study was to: 1) evaluate the quality of promotional pedometers widely distributed through cereal boxes at the time of the 2004 Canada on the Move campaign; and 2) establish a battery of testing protocols to provide direction for future consensus on industry standards for pedometer quality. Fifteen Kellogg's* Special K* Step Counters (K pedometers or K; manufactured for Kellogg Canada by Sasco, Inc.) and 9 Yamax pedometers (Yamax; Yamax Corporation, Tokyo, Japan) were tested with 9 participants accordingly: 1) 20 Step Test; 2) treadmill at 80m x min(-1) (3 miles x hr(-1)) and motor vehicle controlled conditions; and 3) 24-hour free-living conditions against an accelerometer criterion. Fifty-three percent of the K pedometers passed the 20 Step Test compared to 100% of the Yamax. Mean absolute percent error for the K during treadmill walking was 24.2+/-33.9 vs. 3.9+/-6.6% for the Yamax. The K detected 5.7-fold more non-steps compared to the Yamax during the motor vehicle condition. In the free-living condition, mean absolute percent error relative to the ActiGraph was 44.9+/-34.5% for the K vs. 19.5+/-21.2% for the Yamax. K pedometers are unacceptably inaccurate. We suggest that research grade pedometers: 1) be manufactured to a sensitivity threshold of 0.35 Gs; 2) detect +/-1 step error on the 20 Step Test (i.e., within 5%); 3) detect +/-1% error most of the time during treadmill walking at 80m x min(-1) (3 miles x hr(-1)); as well as, 4) detect steps/day within 10% of the ActiGraph at least 60% of the time, or be within 10% of the Yamax under free-living conditions.
Development and validity of an instrumented handbike: initial results of propulsion kinetics.
van Drongelen, Stefan; van den Berg, Jos; Arnet, Ursina; Veeger, Dirkjan H E J; van der Woude, Lucas H V
2011-11-01
To develop an instrumented handbike system to measure the forces applied to the handgrip during handbiking. A 6 degrees of freedom force sensor was built into the handgrip of an attach-unit handbike, together with two optical encoders to measure the orientation of the handgrip and crank in space. Linearity, precision, and percent error were determined for static and dynamic tests. High linearity was demonstrated for both the static and the dynamic condition (r=1.01). Precision was high under the static condition (standard deviation of 0.2N), however the precision decreased with higher loads during the dynamic condition. Percent error values were between 0.3 and 5.1%. This is the first instrumented handbike system that can register 3-dimensional forces. It can be concluded that the instrumented handbike system allows for an accurate force analysis based on forces registered at the handle bars. Copyright © 2011 IPEM. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Smith, James A.
1992-01-01
The inversion of the leaf area index (LAI) canopy parameter from optical spectral reflectance measurements is obtained using a backpropagation artificial neural network trained using input-output pairs generated by a multiple scattering reflectance model. The problem of LAI estimation over sparse canopies (LAI < 1.0) with varying soil reflectance backgrounds is particularly difficult. Standard multiple regression methods applied to canopies within a single homogeneous soil type yield good results but perform unacceptably when applied across soil boundaries, resulting in absolute percentage errors of >1000 percent for low LAI. Minimization methods applied to merit functions constructed from differences between measured reflectances and predicted reflectances using multiple-scattering models are unacceptably sensitive to a good initial guess for the desired parameter. In contrast, the neural network reported generally yields absolute percentage errors of <30 percent when weighting coefficients trained on one soil type were applied to predicted canopy reflectance at a different soil background.
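A toy version of this approach (generate input-output pairs from a forward canopy reflectance model, train a small feed-forward network, then apply it to a measured reflectance) might look like the sketch below. The two-band forward model, its coefficients, and the network size are our own stand-ins, not the multiple-scattering model or architecture used in the study.

import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

def toy_canopy_reflectance(lai, soil_reflectance):
    """Stand-in forward model: red and near-infrared reflectance as functions of LAI
    and soil background (invented coefficients)."""
    red = soil_reflectance * np.exp(-0.7 * lai) + 0.03 * (1.0 - np.exp(-0.7 * lai))
    nir = soil_reflectance * np.exp(-0.9 * lai) + 0.45 * (1.0 - np.exp(-0.9 * lai))
    return np.column_stack([red, nir])

# Training pairs generated across sparse canopies (LAI < 1) and varying soil backgrounds.
lai = rng.uniform(0.0, 1.0, 2000)
soil = rng.uniform(0.05, 0.35, 2000)
network = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=3000, random_state=0)
network.fit(toy_canopy_reflectance(lai, soil), lai)

# Invert a "measured" reflectance for a sparse canopy over a bright soil background.
measured = toy_canopy_reflectance(np.array([0.4]), np.array([0.30]))
print(network.predict(measured))   # should be roughly 0.4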
Omang, R.J.; Parrett, Charles; Hull, J.A.
1983-01-01
Equations using channel-geometry measurements were developed for estimating mean runoff and peak flows of ungaged streams in southeastern Montana. Two separate sets of estimating equations were developed for determining mean annual runoff: one for perennial streams and one for ephemeral and intermittent streams. Data from 29 gaged sites on perennial streams and 21 gaged sites on ephemeral and intermittent streams were used in these analyses. Data from 78 gaged sites were used in the peak-flow analyses. Southeastern Montana was divided into three regions and separate multiple-regression equations for each region were developed that relate channel dimensions to peak discharge having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. Channel-geometry relations were developed using measurements of the active-channel width and bankfull width. Active-channel width and bankfull width were the most significant channel features for estimating mean annual runoff for all types of streams. Use of this method requires that onsite measurements be made of channel width. The standard error of estimate for predicting mean annual runoff ranged from about 38 to 79 percent. The standard error of estimate relating active-channel width or bankfull width to peak flow ranged from about 37 to 115 percent. (USGS)
Flood-frequency characteristics of Wisconsin streams
Walker, John F.; Peppler, Marie C.; Danz, Mari E.; Hubbard, Laura E.
2017-05-22
Flood-frequency characteristics for 360 gaged sites on unregulated rural streams in Wisconsin are presented for percent annual exceedance probabilities ranging from 0.2 to 50 using a statewide skewness map developed for this report. Equations of the relations between flood-frequency and drainage-basin characteristics were developed by multiple-regression analyses. Flood-frequency characteristics for ungaged sites on unregulated, rural streams can be estimated by use of the equations presented in this report. The State was divided into eight areas of similar physiographic characteristics. The most significant basin characteristics are drainage area, soil saturated hydraulic conductivity, main-channel slope, and several land-use variables. The standard error of prediction for the equation for the 1-percent annual exceedance probability flood ranges from 56 to 70 percent for Wisconsin streams; these values are larger than results presented in previous reports. The increase in the standard error of prediction is likely due to increased variability of the annual-peak discharges, resulting in increased variability in the magnitude of flood peaks at higher frequencies. For each of the unregulated rural streamflow-gaging stations, a weighted estimate based on the at-site log Pearson type III analysis and the multiple regression results was determined. The weighted estimate generally has a lower uncertainty than either the log Pearson type III or multiple regression estimates. For regulated streams, a graphical method for estimating flood-frequency characteristics was developed from the relations of discharge and drainage area for selected annual exceedance probabilities. Graphs for the major regulated streams in Wisconsin are presented in the report.
Wu, S.-S.; Wang, L.; Qiu, X.
2008-01-01
This article presents a deterministic model for sub-block-level population estimation based on the total building volumes derived from geographic information system (GIS) building data and three census block-level housing statistics. To assess the model, we generated artificial blocks by aggregating census block areas and calculating the respective housing statistics. We then applied the model to estimate populations for sub-artificial-block areas and assessed the estimates with census populations of the areas. Our analyses indicate that the average percent error of population estimation for sub-artificial-block areas is comparable to those for sub-census-block areas of the same size relative to associated blocks. The smaller the sub-block-level areas, the higher the population estimation errors. For example, the average percent error for residential areas is approximately 0.11 percent for 100 percent block areas and 35 percent for 5 percent block areas.
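The volume-weighting rule at the core of the model can be sketched in a few lines: a block's census population is apportioned to a sub-block area in proportion to the building volume that falls inside it. The function and numbers below are illustrative assumptions; the published model also incorporates block-level housing statistics, which are omitted here.

def estimate_subblock_population(block_population, block_building_volume, subblock_building_volume):
    # Apportion a block's population by the share of building volume in the sub-block area.
    if block_building_volume <= 0:
        return 0.0
    return block_population * (subblock_building_volume / block_building_volume)

def percent_error(estimate, reference):
    # Percent error against a reference (e.g., census) count, as used to assess the model.
    return 100.0 * abs(estimate - reference) / reference

# Example: a block of 420 people whose buildings total 150,000 cubic meters,
# 30,000 of which lie inside the sub-block area of interest.
est = estimate_subblock_population(420, 150_000, 30_000)
print(est, percent_error(est, 90))   # 84.0, and the error versus a hypothetical census count of 90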
Barth, Nancy A.; Veilleux, Andrea G.
2012-01-01
The U.S. Geological Survey (USGS) is currently updating at-site flood frequency estimates for USGS streamflow-gaging stations in the desert region of California. The at-site flood-frequency analysis is complicated by short record lengths (less than 20 years is common) and numerous zero flows/low outliers at many sites. Estimates of the three parameters (mean, standard deviation, and skew) required for fitting the log Pearson Type 3 (LP3) distribution are likely to be highly unreliable based on the limited and heavily censored at-site data. In a generalization of the recommendations in Bulletin 17B, a regional analysis was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the LP3 distribution. A regional skew value of zero from a previously published report was used with a new estimated mean squared error (MSE) of 0.20. A weighted least squares (WLS) regression method was used to develop both a regional standard deviation and a mean model based on annual peak-discharge data for 33 USGS stations throughout California’s desert region. At-site standard deviation and mean values were determined by using an expected moments algorithm (EMA) method for fitting the LP3 distribution to the logarithms of annual peak-discharge data. Additionally, a multiple Grubbs-Beck (MGB) test, a generalization of the test recommended in Bulletin 17B, was used for detecting multiple potentially influential low outliers in a flood series. The WLS regression found that no basin characteristics could explain the variability of standard deviation. Consequently, a constant regional standard deviation model was selected, resulting in a log-space value of 0.91 with a MSE of 0.03 log units. Yet drainage area was found to be statistically significant at explaining the site-to-site variability in mean. The linear WLS regional mean model based on drainage area had a pseudo-R2 of 51 percent and a MSE of 0.32 log units. The regional parameter estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent.
Physical Activity and Change in Mammographic Density
Conroy, Shannon M.; Butler, Lesley M.; Harvey, Danielle; Gold, Ellen B.; Sternfeld, Barbara; Oestreicher, Nina; Greendale, Gail A.; Habel, Laurel A.
2010-01-01
One potential mechanism by which physical activity may protect against breast cancer is by decreasing mammographic density. Percent mammographic density, the proportion of dense breast tissue area to total breast area, declines with age and is a strong risk factor for breast cancer. The authors hypothesized that women who were more physically active would have a greater decline in percent mammographic density with age, compared with less physically active women. The authors tested this hypothesis using longitudinal data (1996–2004) from 722 participants in the Study of Women's Health Across the Nation (SWAN), a multiethnic cohort of women who were pre- and early perimenopausal at baseline, with multivariable, repeated-measures linear regression analyses. During an average of 5.6 years, the mean annual decline in percent mammographic density was 1.1% (standard deviation = 0.1). A 1-unit increase in total physical activity score was associated with a weaker annual decline in percent mammographic density by 0.09% (standard error = 0.03; P = 0.01). Physical activity was inversely associated with the change in nondense breast area (P < 0.01) and not associated with the change in dense breast area (P = 0.17). Study results do not support the hypothesis that physical activity reduces breast cancer through a mechanism that includes reduced mammographic density. PMID:20354074
Cost effectiveness of the stream-gaging program in northeastern California
Hoffard, S.H.; Pearce, V.F.; Tasker, Gary D.; Doyle, W.H.
1984-01-01
This report documents the results of a study of the cost effectiveness of the stream-gaging program in northeastern California. Data uses and funding sources were identified for the 127 continuous stream gages currently being operated in the study area. One stream gage was found to have insufficient data use to warrant cooperative Federal funding. Flow-routing and multiple-regression models were used to simulate flows at selected gaging stations. The models may be sufficiently accurate to replace two of the stations. The average standard error of estimate of streamflow records is 12.9 percent. This overall standard error could be reduced to 12.0 percent by using computer-recommended service routes and visit frequencies. (USGS)
Piston manometer as an absolute standard for vacuum-gage calibration in the range 2 to 500 millitorr
NASA Technical Reports Server (NTRS)
Warshawsky, I.
1972-01-01
A thin disk is suspended, with very small annular clearance, in a cylindrical opening in the base plate of a calibration chamber. A continuous flow of calibration gas passes through the chamber and annular opening to a downstream high vacuum pump. The ratio of pressures on the two faces of the disk is very large, so that the upstream pressure is substantially equal to the net force on the disk divided by the disk area. This force is measured with a dynamometer that is calibrated in place with dead weights. A probable error of + or - (0.2 millitorr plus 0.2 percent) is attainable when downstream pressure is known to 10 percent.
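The pressure calculation the abstract describes reduces to force divided by area, with the quoted probable error applied on top. A small sketch, with illustrative numbers (the disk area and force below are assumptions, not values from the report):

def upstream_pressure_mtorr(force_newtons, disk_area_m2):
    # Upstream pressure is essentially the net force on the disk divided by the disk area,
    # converted here from pascals to millitorr (1 torr = 133.322 Pa).
    pressure_pa = force_newtons / disk_area_m2
    return pressure_pa * 1000.0 / 133.322

def probable_error_mtorr(pressure_mtorr):
    # Probable error quoted in the abstract: + or - (0.2 millitorr plus 0.2 percent).
    return 0.2 + 0.002 * pressure_mtorr

# Example: a 6.7e-2 N net force on a 5.0e-3 m^2 disk.
p = upstream_pressure_mtorr(6.7e-2, 5.0e-3)
print(f"{p:.1f} mtorr +/- {probable_error_mtorr(p):.2f} mtorr")   # ~100.5 mtorr +/- 0.40 mtorr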
48 CFR 52.241-6 - Service Provisions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... such errors. However, any meter which registers not more than __ percent slow or fast shall be deemed... the Government if the percentage of errors is found to be not more than __ percent slow or fast. (3...
Determination of uranium in natural waters
Barker, Franklin Butt; Johnson, J.O.; Edwards, K.W.; Robinson, B.P.
1965-01-01
A method is described for the determination of very low concentrations of uranium in water. The method is based on the fluorescence of uranium in a pad prepared by fusion of the dried solids from the water sample with a flux of 10 percent NaF, 45.5 percent Na2CO3, and 45.5 percent K2CO3. This flux permits use of a low fusion temperature and yields pads which are easily removed from the platinum fusion dishes for fluorescence measurements. Uranium concentrations of less than 1 microgram per liter can be determined on a sample of 10 milliliters or less. The sensitivity and accuracy of the method are dependent primarily on the purity of reagents used, the stability and linearity of the fluorimeter, and the concentration of quenching elements in the water residue. A purification step is recommended when the fluorescence is quenched by more than 30 percent. Equations are given for the calculation of standard deviations of analyses by this method. Graphs of error functions and representative data are also included.
Prevalence of DSM-IV major depression among U.S. military personnel: Meta-analysis and simulation
Gadermann, Anne M.; Engel, COL Charles C.; Naifeh, James A.; Nock, Matthew K.; Petukhova, Maria; Santiago, LCDR Patcho N.; Wu, Benjamin; Zaslavsky, Alan M.; Kessler, Ronald C.
2014-01-01
A meta-analysis of 25 epidemiological studies estimated the prevalence of recent DSM-IV major depression among U.S. military personnel. Best estimates of recent prevalence (standard error) were 12.0 percent (1.2) among currently deployed, 13.1 percent (1.8) among previously deployed and 5.7 percent (1.2) among never deployed. Consistent correlates of prevalence were being female, enlisted, young (ages 17 to 25), unmarried and having less than a college education. Simulation of data from a national general population survey was used to estimate expected lifetime prevalence of major depression among respondents with the socio-demographic profile and none of the enlistment exclusions of Army personnel. In this simulated sample, 16.2 percent (3.1) of respondents had lifetime major depression and 69.7 percent (8.5) of first onsets occurred before expected age of enlistment. Numerous methodological problems limit the results of the meta-analysis and simulation. The paper closes with a discussion of recommendations for correcting these problems in future surveillance and operational stress studies. PMID:22953441
2013-01-01
Background Cardiovascular magnetic resonance (CMR) T1 mapping indices, such as T1 time and partition coefficient (λ), have shown potential to assess diffuse myocardial fibrosis. The purpose of this study was to investigate how scanner and field strength variation affect the accuracy and precision/reproducibility of T1 mapping indices. Methods CMR studies were performed on two 1.5T and three 3T scanners. Eight phantoms were made to mimic the T1/T2 of pre- and post-contrast myocardium and blood at 1.5T and 3T. T1 mapping using MOLLI was performed with simulated heart rate of 40-100 bpm. Inversion recovery spin echo (IR-SE) was the reference standard for T1 determination. Accuracy was defined as the percent error between MOLLI and IR-SE, and scan/re-scan reproducibility was defined as the relative percent mean difference between repeat MOLLI scans. Partition coefficient was estimated by ΔR1myocardium phantom/ΔR1blood phantom. Generalized linear mixed model was used to compare the accuracy and precision/reproducibility of T1 and λ across field strength, scanners, and protocols. Results Field strength significantly affected MOLLI T1 accuracy (6.3% error for 1.5T vs. 10.8% error for 3T, p<0.001) but not λ accuracy (8.8% error for 1.5T vs. 8.0% error for 3T, p=0.11). Partition coefficients of MOLLI were not different between two 1.5T scanners (47.2% vs. 47.9%, p=0.13), and showed only slight variation across three 3T scanners (49.2% vs. 49.8% vs. 49.9%, p=0.016). Partition coefficient also had significantly lower percent error for precision (better scan/re-scan reproducibility) than measurement of individual T1 values (3.6% for λ vs. 4.3%-4.8% for T1 values, approximately, for pre/post blood and myocardium values). Conclusion Based on phantom studies, T1 errors using MOLLI ranged from 6-14% across various MR scanners while errors for partition coefficient were less (6-10%). Compared with absolute T1 times, partition coefficient showed less variability across platforms and field strengths as well as higher precision. PMID:23890156
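A minimal sketch of the two quantities compared in the study: percent error of a MOLLI T1 measurement against the IR-SE reference, and the partition coefficient lambda computed as delta-R1 of myocardium over delta-R1 of blood, with R1 = 1/T1. The numeric values below are illustrative, not data from the phantoms.

def percent_error(measured, reference):
    # Accuracy as defined in the abstract: percent error of MOLLI relative to IR-SE.
    return 100.0 * abs(measured - reference) / reference

def partition_coefficient(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post):
    # lambda = deltaR1(myocardium) / deltaR1(blood), with R1 = 1/T1 (T1 in ms).
    dr1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre
    dr1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre
    return dr1_myo / dr1_blood

print(percent_error(measured=1050.0, reference=980.0))        # ~7.1 percent
print(partition_coefficient(1000.0, 520.0, 1550.0, 320.0))    # ~0.37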
Program documentation: Surface heating rate of thin skin models (THNSKN)
NASA Technical Reports Server (NTRS)
Mcbryde, J. D.
1975-01-01
Program THNSKN computes the mean heating rate at a maximum of 100 locations on the surface of thin skin transient heating rate models. Output is printed in tabular form and consists of time history tabulation of temperatures, average temperatures, heat loss without conduction correction, mean heating rate, least squares heating rate, and the percent standard error of the least squares heating rates. The input tape used is produced by the program EHTS03.
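A sketch of the least-squares heating-rate step, under the usual thin-skin calorimetry assumption q = rho * c_p * thickness * dT/dt, with dT/dt taken as the least-squares slope of the temperature history. The material properties and data below are illustrative, and the conduction-loss correction handled by the program is omitted.

import numpy as np

def least_squares_heating_rate(time_s, temp_k, rho=8030.0, c_p=500.0, thickness_m=7.6e-4):
    # Slope of a straight-line (least squares) fit to the temperature history, in K/s.
    slope, _ = np.polyfit(time_s, temp_k, 1)
    # Thin-skin heating rate in W/m^2.
    return rho * c_p * thickness_m * slope

# Illustrative data: temperatures sampled over 0.5 s with a little scatter.
t = np.linspace(0.0, 0.5, 11)
T = 300.0 + 40.0 * t + np.random.default_rng(1).normal(0.0, 0.2, t.size)
print(least_squares_heating_rate(t, T))   # roughly 1.2e5 W/m^2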
An emulator for minimizing computer resources for finite element analysis
NASA Technical Reports Server (NTRS)
Melosh, R.; Utku, S.; Islam, M.; Salama, M.
1984-01-01
A computer code, SCOPE, has been developed for predicting the computer resources required for a given analysis code, computer hardware, and structural problem. The cost of running the code is a small fraction (about 3 percent) of the cost of performing the actual analysis. However, its accuracy in predicting the CPU and I/O resources depends intrinsically on the accuracy of calibration data that must be developed once for the computer hardware and the finite element analysis code of interest. Testing of the SCOPE code on the AMDAHL 470 V/8 computer and the ELAS finite element analysis program indicated small I/O errors (3.2 percent), larger CPU errors (17.8 percent), and negligible total errors (1.5 percent).
A two-dimensional, finite-difference model of the high plains aquifer in southern South Dakota
Kolm, K.E.; Case, H. L.
1983-01-01
The High Plains aquifer is the principal source of water for irrigation, industry, municipalities, and domestic use in south-central South Dakota. The aquifer, composed of upper sandstone units of the Arikaree Formation, and the overlying Ogallala and Sand Hills Formations, was simulated using a two-dimensional, finite-difference computer model. The maximum difference between simulated and measured potentiometric heads was less than 60 feet (1- to 4-percent error). Two-thirds of the simulated potentiometric heads were within 26 feet of the measured values (3-percent error). The estimated saturated thickness, computed from simulated potentiometric heads, was within 25-percent error of the known saturated thickness for 95 percent of the study area. (USGS)
Pittman, Jeremy Joshua; Arnall, Daryl Brian; Interrante, Sindy M.; Moffet, Corey A.; Butler, Twain J.
2015-01-01
Non-destructive biomass estimation of vegetation has been performed via remote sensing as well as physical measurements. An effective method for estimating biomass must have accuracy comparable to the accepted standard of destructive removal. Estimation or measurement of height is commonly employed to create a relationship between height and mass. This study examined several types of ground-based mobile sensing strategies for forage biomass estimation. Forage production experiments consisting of alfalfa (Medicago sativa L.), bermudagrass [Cynodon dactylon (L.) Pers.], and wheat (Triticum aestivum L.) were employed to examine sensor biomass estimation (laser, ultrasonic, and spectral) as compared to physical measurements (plate meter and meter stick) and the traditional harvest method (clipping). Predictive models were constructed via partial least squares regression and modeled estimates were compared to the physically measured biomass. Mean estimates, separated by least significant difference, were examined to evaluate differences between the physical measurements and sensor estimates for canopy height and biomass. Differences between methods were minimal (average percent error of 11.2% between predicted values and machine- and quadrat-harvested biomass values of 1.64 and 4.91 t·ha−1, respectively), except at the lowest measured biomass (average percent error of 89% for machine- and quadrat-harvested biomass < 0.79 t·ha−1) and the greatest measured biomass (average percent error of 18% for machine- and quadrat-harvested biomass > 6.4 t·ha−1). These data suggest that using mobile sensor-based biomass estimation models could be an effective alternative to the traditional clipping method for rapid, accurate in-field biomass estimation. PMID:25635415
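A minimal sketch of the model-building step described above: partial least squares regression relating sensor-derived features to clipped biomass, then predicting biomass for a new plot. The synthetic features (ultrasonic height, laser height, a spectral index) and their relationship to biomass are assumptions made only to keep the example self-contained.

import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n = 120

# Synthetic stand-ins for sensor readings on n plots.
height_us = rng.uniform(5.0, 60.0, n)                                    # ultrasonic height, cm
height_laser = height_us + rng.normal(0.0, 2.0, n)                       # laser height, cm
ndvi = np.clip(0.2 + 0.01 * height_us + rng.normal(0.0, 0.05, n), 0, 1)  # spectral index
X = np.column_stack([height_us, height_laser, ndvi])

# Synthetic "clipped" biomass (t/ha) loosely driven by height and the spectral index.
y = 0.08 * height_us + 2.0 * ndvi + rng.normal(0.0, 0.3, n)

pls = PLSRegression(n_components=2)
pls.fit(X, y)

new_plot = np.array([[35.0, 36.5, 0.55]])
print(pls.predict(new_plot))   # predicted biomass for an unclipped plot, t/ha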
Interpreting SBUV Smoothing Errors: an Example Using the Quasi-biennial Oscillation
NASA Technical Reports Server (NTRS)
Kramarova, N. A.; Bhartia, Pawan K.; Frith, S. M.; McPeters, R. D.; Stolarski, R. S.
2013-01-01
The Solar Backscattered Ultraviolet (SBUV) observing system consists of a series of instruments that have been measuring both total ozone and the ozone profile since 1970. SBUV measures the profile in the upper stratosphere with a resolution that is adequate to resolve most of the important features of that region. In the lower stratosphere the limited vertical resolution of the SBUV system means that there are components of the profile variability that SBUV cannot measure. The smoothing error, as defined in the optimal estimation retrieval method, describes the components of the profile variability that the SBUV observing system cannot measure. In this paper we provide a simple visual interpretation of the SBUV smoothing error by comparing SBUV ozone anomalies in the lower tropical stratosphere associated with the quasi-biennial oscillation (QBO) to anomalies obtained from the Aura Microwave Limb Sounder (MLS). We describe a methodology for estimating the SBUV smoothing error for monthly zonal mean (mzm) profiles. We construct covariance matrices that describe the statistics of the inter-annual ozone variability using a 6 yr record of Aura MLS and ozonesonde data. We find that the smoothing error is of the order of 1 percent between 10 and 1 hPa, increasing up to 15-20 percent in the troposphere and up to 5 percent in the mesosphere. The smoothing error for total ozone columns is small, mostly less than 0.5 percent. We demonstrate that by merging the partial ozone columns from several layers in the lower stratosphere/troposphere into one thick layer, we can minimize the smoothing error. We recommend using the following layer combinations to reduce the smoothing error to about 1 percent: surface to 25 hPa (16 hPa) outside (inside) of the narrow equatorial zone 20 S-20 N.
Validation of bioelectrical impedance analysis to hydrostatic weighing in male body builders.
Volpe, Stella Lucia; Melanson, Edward L; Kline, Gregory
2010-03-01
The purpose of this study was to compare bioelectrical impedance analysis (BIA) to hydrostatic weighing (HW) in male weight lifters and body builders. Twenty-two male body builders and weight lifters, 23 +/- 3 years of age (mean +/- SD), were studied to determine the efficacy of BIA to HW in this population. Subjects were measured on two separate occasions, 6 weeks apart, for test-retest reliability purposes. Participants recorded 3-day dietary intakes and average work-out times and regimens between the two testing periods. Subjects were, on average, 75 +/- 8 kg of body weight and 175 +/- 7 cm tall. Validation results were as follows: constant error for HW-BIA = 0.128 +/- 3.7%, r for HW versus BIA = -0.294. Standard error of the estimate for BIA = 2.32% and the total error for BIA = 3.6%. Percent body fat was 7.8 +/- 1% from BIA and 8.5 +/- 2% from HW (P > 0.05). Subjects consumed 3,217 +/- 1,027 kcals; 1,848 +/- 768 kcals from carbohydrates; 604 +/- 300 kcals from protein; and 783 +/- 369 kcals from fat. Although work-outs differed among one another, within subject training did not vary. These results suggest that measurement of percent body fat in male body builders and weight trainers is equally as accurate using BIA or HW.
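The validation statistics reported above (constant error, correlation, standard error of estimate, and total error) can be computed directly from paired percent-fat values. A small sketch with illustrative numbers rather than the study's data:

import numpy as np

def validation_stats(criterion, predicted):
    # Constant error, Pearson r, standard error of estimate, and total error,
    # as commonly reported when validating BIA against hydrostatic weighing.
    criterion = np.asarray(criterion, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    ce = np.mean(criterion - predicted)                        # constant error
    r = np.corrcoef(criterion, predicted)[0, 1]                # correlation
    see = np.std(criterion, ddof=1) * np.sqrt(1.0 - r ** 2)    # standard error of estimate
    te = np.sqrt(np.mean((predicted - criterion) ** 2))        # total error
    return ce, r, see, te

hw = [8.1, 9.4, 7.2, 10.0, 8.8]    # hydrostatic weighing percent fat (illustrative)
bia = [7.5, 9.0, 8.0, 9.1, 8.3]    # BIA percent fat (illustrative)
print(validation_stats(hw, bia))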
SU-F-T-310: Does a Head-Mounted Ionization Chamber Detect IMRT Errors?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wegener, S; Herzog, B; Sauer, O
2016-06-15
Purpose: The conventional plan verification strategy is delivering a plan to a QA-phantom before the first treatment. Monitoring each fraction of the patient treatment in real-time would improve patient safety. We evaluated how well a new detector, the IQM (iRT Systems, Germany), is capable of detecting errors we induced into IMRT plans of three different treatment regions. Results were compared to an established phantom. Methods: Clinical plans of a brain, prostate and head-and-neck patient were modified in the Pinnacle planning system, such that they resulted in either several percent lower prescribed doses to the target volume or several percent higher doses to relevant organs at risk. Unaltered plans were measured on three days, modified plans once, each with the IQM at an Elekta Synergy with an Agility MLC. All plans were also measured with the ArcCHECK with the cavity plug and a PTW semiflex 31010 ionization chamber inserted. Measurements were evaluated with SNC patient software. Results: Repeated IQM measurements of the original plans were reproducible, such that a 1% deviation from the mean as warning and 3% as action level as suggested by the manufacturer seemed reasonable. The IQM detected most of the simulated errors including wrong energy, a faulty leaf, wrong trial exported and a 2 mm shift of one leaf bank. Detection limits were reached for two plans - a 2 mm field position error and a leaf bank offset combined with an MU change. ArcCHECK evaluation according to our current standards also left some errors undetected. Ionization chamber evaluation alone would leave most errors undetected. Conclusion: The IQM detected most errors and performed as well as currently established phantoms, with the advantage that it can be used throughout the whole treatment. A drawback is that it does not indicate the source of the error.
Estimating the magnitude of peak flows for streams in Kentucky for selected recurrence intervals
Hodgkins, Glenn A.; Martin, Gary R.
2003-01-01
This report gives estimates of, and presents techniques for estimating, the magnitude of peak flows for streams in Kentucky for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years. A flowchart in this report guides the user to the appropriate estimates and (or) estimating techniques for a site on a specific stream. Estimates of peak flows are given for 222 U.S. Geological Survey streamflow-gaging stations in Kentucky. In the development of the peak-flow estimates at gaging stations, a new generalized skew coefficient was calculated for the State. This single statewide value of 0.011 (with a standard error of prediction of 0.520) is more appropriate for Kentucky than the national skew isoline map in Bulletin 17B of the Interagency Advisory Committee on Water Data. Regression equations are presented for estimating the peak flows on ungaged, unregulated streams in rural drainage basins. The equations were developed by use of generalized-least-squares regression procedures at 187 U.S. Geological Survey gaging stations in Kentucky and 51 stations in surrounding States. Kentucky was divided into seven flood regions. Total drainage area is used in the final regression equations as the sole explanatory variable, except in Regions 1 and 4 where main-channel slope also was used. The smallest average standard errors of prediction were in Region 3 (from -13.1 to +15.0 percent) and the largest average standard errors of prediction were in Region 5 (from -37.6 to +60.3 percent). One section of this report describes techniques for estimating peak flows for ungaged sites on gaged, unregulated streams in rural drainage basins. Another section references two previous U.S. Geological Survey reports for peak-flow estimates on ungaged, unregulated, urban streams. Estimating peak flows at ungaged sites on regulated streams is beyond the scope of this report, because peak flows on regulated streams are dependent upon variable human activities.
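Regional regression equations of the kind described are typically log-linear in the basin characteristics, so applying one at an ungaged site amounts to evaluating a power function of drainage area (and, in the two regions noted, main-channel slope). The sketch below uses placeholder coefficients, not the published Kentucky values.

import math

def peak_flow_estimate(drainage_area_mi2, b0, b1, slope_ft_per_mi=None, b2=0.0):
    # Log-linear regional regression: log10(Q) = b0 + b1*log10(DA) [+ b2*log10(slope)], Q in ft^3/s.
    log_q = b0 + b1 * math.log10(drainage_area_mi2)
    if slope_ft_per_mi is not None:
        log_q += b2 * math.log10(slope_ft_per_mi)
    return 10.0 ** log_q

# Illustrative 100-year peak for a 25 mi^2 basin, with made-up coefficients.
print(peak_flow_estimate(25.0, b0=2.6, b1=0.75))   # ~4,500 ft^3/s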
Eash, David A.; Barnes, Kimberlee K.; O'Shea, Padraic S.
2016-09-19
A statewide study was conducted to develop regression equations for estimating three selected spring and three selected fall low-flow frequency statistics for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include spring (April through June) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and fall (October through December) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years. Estimates of the three selected spring statistics are provided for 241 U.S. Geological Survey continuous-record streamgages, and estimates of the three selected fall statistics are provided for 238 of these streamgages, using data through June 2014. Because only 9 years of fall streamflow record were available, three streamgages included in the development of the spring regression equations were not included in the development of the fall regression equations. Because of regulation, diversion, or urbanization, 30 of the 241 streamgages were not included in the development of the regression equations. The study area includes Iowa and adjacent areas within 50 miles of the Iowa border. Because trend analyses indicated statistically significant positive trends when considering the period of record for most of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. Geographic information system software was used to measure 63 selected basin characteristics for each of the 211 streamgages used to develop the regional regression equations. The study area was divided into three low-flow regions that were defined in a previous study for the development of regional regression equations. Because several streamgages included in the development of regional regression equations have estimates of zero flow calculated from observed streamflow for selected spring and fall low-flow frequency statistics, the final equations for the three low-flow regions were developed using two types of regression analyses—left-censored and generalized-least-squares regression analyses. A total of 211 streamgages were included in the development of nine spring regression equations—three equations for each of the three low-flow regions. A total of 208 streamgages were included in the development of nine fall regression equations—three equations for each of the three low-flow regions. A censoring threshold was used to develop 15 left-censored regression equations to estimate the three fall low-flow frequency statistics for each of the three low-flow regions and to estimate the three spring low-flow frequency statistics for the southern and northwest regions. For the northeast region, generalized-least-squares regression was used to develop three equations to estimate the three spring low-flow frequency statistics. For the northeast region, average standard errors of prediction range from 32.4 to 48.4 percent for the spring equations and average standard errors of estimate range from 56.4 to 73.8 percent for the fall equations. For the northwest region, average standard errors of estimate range from 58.9 to 62.1 percent for the spring equations and from 83.2 to 109.4 percent for the fall equations.
For the southern region, average standard errors of estimate range from 43.2 to 64.0 percent for the spring equations and from 78.1 to 78.7 percent for the fall equations. The regression equations are applicable only to stream sites in Iowa with low flows not substantially affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. The regression equations will be implemented within the U.S. Geological Survey StreamStats Web-based geographic information system application. StreamStats allows users to click on any ungaged stream site and compute estimates of the six selected spring and fall low-flow statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged site are provided. StreamStats also allows users to click on any Iowa streamgage to obtain computed estimates for the six selected spring and fall low-flow statistics.
NASA Technical Reports Server (NTRS)
Chitsomboon, Tawit
1994-01-01
Wall functions, as used in the typical high Reynolds number k-epsilon turbulence model, can be implemented in various ways. A least disruptive method (to the flow solver) is to directly solve for the flow variables at the grid point next to the wall while prescribing the values of k and epsilon. For the centrally-differenced finite-difference scheme employing artificial viscosity (AV) as a stabilizing mechanism, this methodology proved to be totally useless. This is because the AV gives rise to a large error at the wall due to too steep a velocity gradient resulting from the use of a coarse grid as required by the wall function methodology. This error can be eliminated simply by extrapolating velocities at the wall, instead of using the physical values of the no-slip velocities (i.e., zero). The applicability of the technique used in this paper is demonstrated by solving a flow over a flat plate and comparing the results with those of experiments. It was also observed that AV gives rise to a velocity overshoot (about 1 percent) near the edge of the boundary layer. This small velocity error, however, can yield as much as 10 percent error in the momentum thickness. A method which integrates the boundary layer up to only the edge of the boundary layer (instead of infinity) was proposed and demonstrated to give better results than the standard method.
Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data
Gebert, Warren A.; Walker, John F.; Kennedy, James L.
2011-01-01
Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
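The recharge approximation described above is simply average annual base flow divided by drainage area, expressed as a depth of water. A short sketch of the unit conversion (the flow and area are illustrative values):

def recharge_inches_per_year(base_flow_cfs, drainage_area_mi2):
    # Recharge approximated as average annual base flow spread over the drainage area.
    seconds_per_year = 365.25 * 24 * 3600
    area_ft2 = drainage_area_mi2 * 5280.0 ** 2
    depth_ft = base_flow_cfs * seconds_per_year / area_ft2
    return depth_ft * 12.0   # feet to inches

# Example: 30 ft^3/s of average annual base flow from a 50 mi^2 basin.
print(recharge_inches_per_year(30.0, 50.0))   # about 8 inches per year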
Dente, Christopher J; Ashley, Dennis W; Dunne, James R; Henderson, Vernon; Ferdinand, Colville; Renz, Barry; Massoud, Romeo; Adamski, John; Hawke, Thomas; Gravlee, Mark; Cascone, John; Paynter, Steven; Medeiros, Regina; Atkins, Elizabeth; Nicholas, Jeffrey M
2016-03-01
Led by the American College of Surgeons Trauma Quality Improvement Program, performance improvement efforts have expanded to regional and national levels. The American College of Surgeons Trauma Quality Improvement Program recommends 5 audit filters to identify records with erroneous data, and the Georgia Committee on Trauma instituted standardized audit filter analysis in all Level I and II trauma centers in the state. Audit filter reports were performed from July 2013 to September 2014. Records were reviewed to determine whether there was erroneous data abstraction. Percent yield was defined as number of errors divided by number of charts captured. Twelve centers submitted complete datasets. During 15 months, 21,115 patient records were subjected to analysis. Audit filter captured 2,901 (14%) records and review yielded 549 (2.5%) records with erroneous data. Audit filter 1 had the highest number of records identified and audit filter 3 had the highest percent yield. Individual center error rates ranged from 0.4% to 5.2%. When comparing quarters 1 and 2 with quarters 4 and 5, there were 7 of 12 centers with substantial decreases in error rates. The most common missed complications were pneumonia, urinary tract infection, and acute renal failure. The most common missed comorbidities were hypertension, diabetes, and substance abuse. In Georgia, the prevalence of erroneous data in trauma registries varies among centers, leading to heterogeneity in data quality, and suggests that targeted educational opportunities exist at the institutional level. Standardized audit filter assessment improved data quality in the majority of participating centers. Copyright © 2016 American College of Surgeons. Published by Elsevier Inc. All rights reserved.
Uncertainty Analysis of Instrument Calibration and Application
NASA Technical Reports Server (NTRS)
Tripp, John S.; Tcheng, Ping
1999-01-01
Experimental aerodynamic researchers require estimated precision and bias uncertainties of measured physical quantities, typically at 95 percent confidence levels. Uncertainties of final computed aerodynamic parameters are obtained by propagation of individual measurement uncertainties through the defining functional expressions. In this paper, rigorous mathematical techniques are extended to determine precision and bias uncertainties of any instrument-sensor system. Through this analysis, instrument uncertainties determined through calibration are now expressed as functions of the corresponding measurement for linear and nonlinear univariate and multivariate processes. Treatment of correlated measurement precision error is developed. During laboratory calibration, calibration standard uncertainties are assumed to be an order of magnitude less than those of the instrument being calibrated. Often calibration standards do not satisfy this assumption. This paper applies rigorous statistical methods for inclusion of calibration standard uncertainty and covariance due to the order of their application. The effects of mathematical modeling error on calibration bias uncertainty are quantified. The effects of experimental design on uncertainty are analyzed. The importance of replication is emphasized, and techniques for estimating both bias and precision uncertainties using replication are developed. Statistical tests for stationarity of calibration parameters over time are obtained.
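A minimal numerical sketch of the propagation step mentioned above: independent measurement uncertainties are carried through a defining functional expression by first-order (root-sum-square) combination of the partial derivatives. The function, values, and uncertainties below are illustrative assumptions, not the calibration models treated in the paper.

import math

def propagate(f, values, uncertainties, rel_step=1e-6):
    # First-order propagation of independent uncertainties through f(x1, ..., xn),
    # using central-difference estimates of the partial derivatives.
    total = 0.0
    for i, (x, u) in enumerate(zip(values, uncertainties)):
        h = rel_step * max(abs(x), 1.0)
        plus = list(values); plus[i] = x + h
        minus = list(values); minus[i] = x - h
        dfdx = (f(*plus) - f(*minus)) / (2.0 * h)
        total += (dfdx * u) ** 2
    return math.sqrt(total)

# Example: dynamic pressure q = 0.5 * rho * V^2 from measured density and velocity.
q = lambda rho, v: 0.5 * rho * v ** 2
print(propagate(q, values=[1.20, 30.0], uncertainties=[0.01, 0.2]))   # about 8.5 Pa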
Measurement-based analysis of error latency. [in computer operating system
NASA Technical Reports Server (NTRS)
Chillarege, Ram; Iyer, Ravishankar K.
1987-01-01
This paper demonstrates a practical methodology for the study of error latency under a real workload. The method is illustrated with sampled data on the physical memory activity, gathered by hardware instrumentation on a VAX 11/780 during the normal workload cycle of the installation. These data are used to simulate fault occurrence and to reconstruct the error discovery process in the system. The technique provides a means to study the system under different workloads and for multiple days. An approach to determine the percentage of undiscovered errors is also developed and a verification of the entire methodology is performed. This study finds that the mean error latency, in the memory containing the operating system, varies by a factor of 10 to 1 (in hours) between the low and high workloads. It is found that of all errors occurring within a day, 70 percent are detected in the same day, 82 percent within the following day, and 91 percent within the third day. The increase in failure rate due to latency is not so much a function of remaining errors but is dependent on whether or not there is a latent error.
Ground-Water Quality of the Northern High Plains Aquifer, 1997, 2002-04
Stanton, Jennifer S.; Qi, Sharon L.
2007-01-01
An assessment of ground-water quality in the northern High Plains aquifer was completed during 1997 and 2002-04. Ground-water samples were collected at 192 low-capacity, primarily domestic wells in four major hydrogeologic units of the northern High Plains aquifer-Ogallala Formation, Eastern Nebraska, Sand Hills, and Platte River Valley. Each well was sampled once, and water samples were analyzed for physical properties and concentrations of nitrogen and phosphorus compounds, pesticides and pesticide degradates, dissolved solids, major ions, trace elements, dissolved organic carbon (DOC), radon, and volatile organic compounds (VOCs). Tritium and microbiology were analyzed at selected sites. The results of this assessment were used to determine the current water-quality conditions in this subregion of the High Plains aquifer and to relate ground-water quality to natural and human factors affecting water quality. Water-quality analyses indicated that water samples rarely exceeded established U.S. Environmental Protection Agency public drinking-water standards for those constituents sampled; 13 of the constituents measured or analyzed exceeded their respective standards in at least one sample. The constituents that most often failed to meet drinking-water standards were dissolved solids (13 percent of samples exceeded the U.S. Environmental Protection Agency Secondary Drinking-Water Regulation) and arsenic (8 percent of samples exceeded the U.S. Environmental Protection Agency Maximum Contaminant Level). Nitrate, uranium, iron, and manganese concentrations were larger than drinking-water standards in 6 percent of the samples. Ground-water chemistry varied among hydrogeologic units. Wells sampled in the Platte River Valley and Eastern Nebraska units exceeded water-quality standards more often than the Ogallala Formation and Sand Hills units. Thirty-one percent of the samples collected in the Platte River Valley unit had nitrate concentrations greater than the standard, 22 percent exceeded the manganese standard, 19 percent exceeded the sulfate standard, 26 percent exceeded the uranium standard, and 38 percent exceeded the dissolved-solids standard. In addition, 78 percent of samples had at least one detectable pesticide and 22 percent of samples had at least one detectable VOC. In the Eastern Nebraska unit, 30 percent of the samples collected had dissolved-solids concentrations larger than the standard, 23 percent exceeded the iron standard, 13 percent exceeded the manganese standard, 10 percent exceeded the arsenic standard, 7 percent exceeded the sulfate standard, 7 percent exceeded the uranium standard, and 7 percent exceeded the selenium standard. No samples exceeded the nitrate standard. Thirty percent of samples had at least one detectable pesticide compound and 10 percent of samples had at least one detectable VOC. In contrast, the Sand Hills and Ogallala Formation units had fewer detections of anthropogenic compounds and drinking-water exceedances. In the Sand Hills unit, 15 percent of the samples exceeded the arsenic standard, 4 percent exceeded the nitrate standard, 4 percent exceeded the uranium standard, 4 percent exceeded the iron standard, and 4 percent exceeded the dissolved-solids standard. Fifteen percent of samples had at least one pesticide compound detected and 4 percent had at least one VOC detected. 
In the Ogallala Formation unit, 6 percent of water samples exceeded the arsenic standard, 4 percent exceeded the dissolved-solids standard, 3 percent exceeded the nitrate standard, 2 percent exceeded the manganese standard, 1 percent exceeded the iron standard, 1 percent exceeded the sulfate standard, and 1 percent exceeded the uranium standard. Eight percent of samples collected in the Ogallala Formation unit had at least one pesticide detected and 6 percent had at least one VOC detected. Differences in ground-water chemistry among the hydrogeologic units were attributed to variable depth to water, depth of the well screen below the water table, reduction-oxidation conditions, ground-water residence time, interactions with surface water, composition of aquifer sediments, extent of cropland, extent of irrigated land, and fertilizer application rates.
Technique for temperature compensation of eddy-current proximity probes
NASA Technical Reports Server (NTRS)
Masters, Robert M.
1989-01-01
Eddy-current proximity probes are used in turbomachinery evaluation testing and operation to measure distances, primarily vibration, deflection, or displacement of shafts, bearings and seals. Measurements of steady-state conditions made with standard eddy-current proximity probes are susceptible to error caused by temperature variations during normal operation of the component under investigation. Errors resulting from temperature effects for the specific probes used in this study were approximately 1.016 x 10^-3 mm/deg C over the temperature range of -252 to 100 C. This report examines temperature-caused changes in the eddy-current proximity probe measurement system, establishes their origin, and discusses what may be done to minimize their effect on the output signal. In addition, recommendations are made for the installation and operation of the electronic components associated with an eddy-current proximity probe. Several techniques are described that provide active on-line error compensation for over 95 percent of the temperature effects.
Tarone, Aaron M; Foran, David R
2008-07-01
Forensic entomologists use blow fly development to estimate a postmortem interval. Although accurate, fly age estimates can be imprecise for older developmental stages and no standard means of assigning confidence intervals exists. Presented here is a method for modeling growth of the forensically important blow fly Lucilia sericata, using generalized additive models (GAMs). Eighteen GAMs were created to predict the extent of juvenile fly development, encompassing developmental stage, length, weight, strain, and temperature data, collected from 2559 individuals. All measures were informative, explaining up to 92.6% of the deviance in the data, though strain and temperature exerted negligible influences. Predictions made with an independent data set allowed for a subsequent examination of error. Estimates using length and developmental stage were within 5% of true development percent during the feeding portion of the larval life cycle, while predictions for postfeeding third instars were less precise, but within expected error.
Propagation of Radiosonde Pressure Sensor Errors to Ozonesonde Measurements
NASA Technical Reports Server (NTRS)
Stauffer, R. M.; Morris, G.A.; Thompson, A. M.; Joseph, E.; Coetzee, G. J. R.; Nalli, N. R.
2014-01-01
Several previous studies highlight pressure (or equivalently, pressure altitude) discrepancies between the radiosonde pressure sensor and that derived from a GPS flown with the radiosonde. The offsets vary during the ascent both in absolute and percent pressure differences. To investigate this problem further, a total of 731 radiosonde-ozonesonde launches from the Southern Hemisphere subtropics to Northern mid-latitudes are considered, with launches between 2005 and 2013 from both longer-term and campaign-based intensive stations. Five series of radiosondes from two manufacturers (International Met Systems: iMet, iMet-P, iMet-S, and Vaisala: RS80-15N and RS92-SGP) are analyzed to determine the magnitude of the pressure offset. Additionally, electrochemical concentration cell (ECC) ozonesondes from three manufacturers (Science Pump Corporation; SPC and ENSCI-Droplet Measurement Technologies; DMT) are analyzed to quantify the effects these offsets have on the calculation of ECC ozone (O3) mixing ratio profiles (O3MR) from the ozonesonde-measured partial pressure. Approximately half of all offsets are 0.6 hPa in the free troposphere, with nearly a third 1.0 hPa at 26 km, where the 1.0 hPa error represents 5 percent of the total atmospheric pressure. Pressure offsets have negligible effects on O3MR below 20 km (96 percent of launches lie within 5 percent O3MR error at 20 km). Ozone mixing ratio errors above 10 hPa (30 km) can approach greater than 10 percent (25 percent of launches that reach 30 km exceed this threshold). These errors cause disagreement between the integrated ozonesonde-only column O3 from the GPS and radiosonde pressure profile by an average of +6.5 DU. Comparisons of total column O3 between the GPS and radiosonde pressure profiles yield average differences of +1.1 DU when the O3 is integrated to burst with addition of the McPeters and Labow (2012) above-burst O3 column climatology. Total column differences are reduced to an average of -0.5 DU when the O3 profile is integrated to 10 hPa with subsequent addition of the O3 climatology above 10 hPa. The RS92 radiosondes are superior in performance compared to other radiosondes, with average 26 km errors of -0.12 hPa or +0.61 percent O3MR error. iMet-P radiosondes had average 26 km errors of -1.95 hPa or +8.75 percent O3MR error. Based on our analysis, we suggest that ozonesondes always be coupled with a GPS-enabled radiosonde and that pressure-dependent variables, such as O3MR, be recalculated and reprocessed using the GPS-measured altitude, especially when 26 km pressure offsets exceed 1.0 hPa (5 percent).
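The coupling between a pressure offset and an ozone mixing ratio error follows from the definition of the mixing ratio: the ECC ozonesonde measures an O3 partial pressure, and the mixing ratio is that partial pressure divided by the ambient pressure, so a fractional pressure error produces roughly the same fractional O3MR error of opposite sign. A short sketch with illustrative values (not data from the study):

def ozone_mixing_ratio_ppmv(o3_partial_pressure_mpa, ambient_pressure_hpa):
    # O3 mixing ratio in ppmv from the ECC partial pressure (mPa) and ambient pressure (hPa).
    return 10.0 * o3_partial_pressure_mpa / ambient_pressure_hpa

# Near 26 km: 12 mPa of O3 at a true pressure of 20 hPa, with the sensor reading 1 hPa high.
true_mr = ozone_mixing_ratio_ppmv(12.0, 20.0)
biased_mr = ozone_mixing_ratio_ppmv(12.0, 21.0)
print(true_mr, biased_mr, 100.0 * (biased_mr - true_mr) / true_mr)   # about -4.8 percent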
Acoustic holography as a metrological tool for characterizing medical ultrasound sources and fields
Sapozhnikov, Oleg A.; Tsysar, Sergey A.; Khokhlova, Vera A.; Kreider, Wayne
2015-01-01
Acoustic holography is a powerful technique for characterizing ultrasound sources and the fields they radiate, with the ability to quantify source vibrations and reduce the number of required measurements. These capabilities are increasingly appealing for meeting measurement standards in medical ultrasound; however, associated uncertainties have not been investigated systematically. Here errors associated with holographic representations of a linear, continuous-wave ultrasound field are studied. To facilitate the analysis, error metrics are defined explicitly, and a detailed description of a holography formulation based on the Rayleigh integral is provided. Errors are evaluated both for simulations of a typical therapeutic ultrasound source and for physical experiments with three different ultrasound sources. Simulated experiments explore sampling errors introduced by the use of a finite number of measurements, geometric uncertainties in the actual positions of acquired measurements, and uncertainties in the properties of the propagation medium. Results demonstrate the theoretical feasibility of keeping errors less than about 1%. Typical errors in physical experiments were somewhat larger, on the order of a few percent; comparison with simulations provides specific guidelines for improving the experimental implementation to reduce these errors. Overall, results suggest that holography can be implemented successfully as a metrological tool with small, quantifiable errors. PMID:26428789
Error Detection in Mechanized Classification Systems
ERIC Educational Resources Information Center
Hoyle, W. G.
1976-01-01
When documentary material is indexed by a mechanized classification system, and the results judged by trained professionals, the number of documents in disagreement, after suitable adjustment, defines the error rate of the system. In a test case disagreement was 22 percent and, of this 22 percent, the computer correctly identified two-thirds of…
Radiometric properties of the NS001 Thematic Mapper Simulator aircraft multispectral scanner
NASA Technical Reports Server (NTRS)
Markham, Brian L.; Ahmad, Suraiya P.
1990-01-01
Laboratory tests of the NS001 TM are described, emphasizing absolute calibration to determine the radiometry of the simulator's reflective channels. In-flight calibration of the data is accomplished with the NS001 internal integrating-sphere source because instabilities in the source can limit the absolute calibration. The data from 1987-89 indicate uncertainties of up to 25 percent with an apparent average uncertainty of about 15 percent. Also identified are dark current drift and sensitivity changes along the scan line, random noise, and nonlinearity, which contribute errors of 1-2 percent. Uncertainties similar to hysteresis are also noted, especially in the 2.08-2.35-micron range, which can reduce sensitivity and cause errors. The NS001 TM Simulator demonstrates a polarization sensitivity that can generate errors of up to about 10 percent depending on the wavelength.
Thrust Stand Characterization of the NASA Evolutionary Xenon Thruster (NEXT)
NASA Technical Reports Server (NTRS)
Diamant, Kevin D.; Pollard, James E.; Crofton, Mark W.; Patterson, Michael J.; Soulas, George C.
2010-01-01
Direct thrust measurements have been made on the NASA Evolutionary Xenon Thruster (NEXT) ion engine using a standard pendulum-style thrust stand constructed specifically for this application. Values have been obtained for the full 40-level throttle table, as well as for a few off-nominal operating conditions. Measurements differ from the nominal NASA throttle table 10 (TT10) values by 3.1 percent at most, while at 30 throttle levels (TLs) the difference is less than 2.0 percent. When measurements are compared to TT10 values that have been corrected using ion beam current density and charge state data obtained at The Aerospace Corporation, they differ by 1.2 percent at most, and by 1.0 percent or less at 37 TLs. Thrust correction factors calculated from direct thrust measurements and from The Aerospace Corporation's plume data agree to within measurement error for all but one TL. Thrust due to cold flow and "discharge only" operation has been measured, and analytical expressions are presented which accurately predict thrust based on thermal thrust generation mechanisms.
August median streamflow on ungaged streams in Eastern Coastal Maine
Lombard, Pamela J.
2004-01-01
Methods for estimating August median streamflow were developed for ungaged, unregulated streams in eastern coastal Maine. The methods apply to streams with drainage areas ranging in size from 0.04 to 73.2 square miles and fraction of basin underlain by a sand and gravel aquifer ranging from 0 to 71 percent. The equations were developed with data from three long-term (greater than or equal to 10 years of record) continuous-record streamflow-gaging stations, 23 partial-record streamflow-gaging stations, and 5 short-term (less than 10 years of record) continuous-record streamflow-gaging stations. A mathematical technique for estimating a standard low-flow statistic, August median streamflow, at partial-record streamflow-gaging stations and short-term continuous-record streamflow-gaging stations was applied by relating base-flow measurements at these stations to concurrent daily streamflows at nearby long-term continuous-record streamflow-gaging stations (index stations). Generalized least-squares regression analysis (GLS) was used to relate estimates of August median streamflow at streamflow-gaging stations to basin characteristics at these same stations to develop equations that can be applied to estimate August median streamflow on ungaged streams. GLS accounts for different periods of record at the gaging stations and the cross correlation of concurrent streamflows among gaging stations. Thirty-one stations were used for the final regression equations. Two basin characteristics, drainage area and fraction of basin underlain by a sand and gravel aquifer, are used in the calculated regression equation to estimate August median streamflow for ungaged streams. The equation has an average standard error of prediction from -27 to 38 percent. A one-variable equation uses only drainage area to estimate August median streamflow when less accuracy is acceptable. This equation has an average standard error of prediction from -30 to 43 percent. Model error is larger than sampling error for both equations, indicating that additional or improved estimates of basin characteristics could be important to improved estimates of low-flow statistics. Weighted estimates of August median streamflow at partial-record or continuous-record gaging stations range from 0.003 to 31.0 cubic feet per second or from 0.1 to 0.6 cubic feet per second per square mile. Estimates of August median streamflow on ungaged streams in eastern coastal Maine, within the range of acceptable explanatory variables, range from 0.003 to 45 cubic feet per second or 0.1 to 0.6 cubic feet per second per square mile. Estimates of August median streamflow per square mile of drainage area generally increase as drainage area and fraction of basin underlain by a sand and gravel aquifer increase.
Extragalactic counterparts to Einstein slew survey sources
NASA Technical Reports Server (NTRS)
Schachter, Jonathan F.; Elvis, Martin; Plummer, David; Remillard, Ron
1992-01-01
The Einstein slew survey consists of 819 bright X-ray sources, of which 636 (or 78 percent) are identified with counterparts in standard catalogs. The importance of bright X-ray surveys is stressed, and the slew survey is compared to the ROSAT all-sky survey. Statistical techniques for minimizing confusion in arcminute error circles in digitized data are discussed. The 238 slew survey active galactic nuclei, clusters, and BL Lacertae objects identified to date and their implications for logN-logS and source evolution studies are described.
ADEOS Total Ozone Mapping Spectrometer (TOMS) Data Products User's Guide
NASA Technical Reports Server (NTRS)
Krueger, A.; Bhartia, P. K.; McPeters, R.; Herman, J.; Wellemeyer, C.; Jaross, G.; Seftor, C.; Torres, O.; Labow, G.; Byerly, W.;
1998-01-01
Two data products from the Total Ozone Mapping Spectrometer (ADEOS/TOMS) have been archived at the Distributed Active Archive Center, in the form of Hierarchical Data Format files. The ADEOS/TOMS began taking measurements on September 11, 1996, and ended on June 29, 1997. The instrument measured backscattered Earth radiance and incoming solar irradiance; their ratio was used in ozone retrievals. Changes in the reflectivity of the solar diffuser used for the irradiance measurement were monitored using a carousel of three diffusers, each exposed to the degrading effects of solar irradiation at different rates. The algorithm to retrieve total column ozone compares measured Earth radiances at sets of three wavelengths with radiances calculated for different total ozone values, solar zenith angles, and optical paths. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard deviation random error is 2 percent, and the drift is less than 0.5 percent over the 9-month data record. The Level 2 product contains the measured radiances, the derived total ozone amount, and reflectivity information for each scan position. The Level 3 product contains daily total ozone and reflectivity in a 1-degree latitude by 1.25 degrees longitude grid. The Level 3 files containing estimates of UVB at the Earth surface and tropospheric aerosol information will also be available. Detailed descriptions of both HDF data files and the CD-ROM product are provided.
Earth Probe Total Ozone Mapping Spectrometer (TOMS) Data Product User's Guide
NASA Technical Reports Server (NTRS)
McPeters, R.; Bhartia, P. K.; Krueger, A.; Herman, J.; Wellemeyer, C.; Seftor, C.; Jaross, G.; Torres, O.; Moy, L.; Labow, G.;
1998-01-01
Two data products from the Earth Probe Total Ozone Mapping Spectrometer (EP/TOMS) have been archived at the Distributed Active Archive Center, in the form of Hierarchical Data Format files. The EP/TOMS began taking measurements on July 15, 1996. The instrument measures backscattered Earth radiance and incoming solar irradiance; their ratio is used in ozone retrievals. Changes in the reflectivity of the solar diffuser used for the irradiance measurement are monitored using a carousel of three diffusers, each exposed to the degrading effects of solar irradiation at different rates. The algorithm to retrieve total column ozone compares measured Earth radiances at sets of three wavelengths with radiances calculated for different total ozone values. The initial error in the absolute scale for TOMS total ozone is 3 percent, the one standard deviation random error is 2 percent, and the drift is less than 0.5 percent over the first year of data. The Level-2 product contains the measured radiances, the derived total ozone amount, and reflectivity information for each scan position. The Level-3 product contains daily total ozone and reflectivity in a 1-degree latitude by 1.25 degrees longitude grid. Level-3 files containing estimates of UVB at the Earth surface and tropospheric aerosol information are also available. Detailed descriptions of both HDF data files and the CD-ROM product are provided.
Macrae, Toby; Tyler, Ann A
2014-10-01
The authors compared preschool children with co-occurring speech sound disorder (SSD) and language impairment (LI) to children with SSD only in their numbers and types of speech sound errors. In this post hoc quasi-experimental study, independent samples t tests were used to compare the groups in the standard score from different tests of articulation/phonology, percent consonants correct, and the number of omission, substitution, distortion, typical, and atypical error patterns used in the production of different wordlists that had similar levels of phonetic and structural complexity. In comparison with children with SSD only, children with SSD and LI used similar numbers but different types of errors, including more omission patterns (p < .001, d = 1.55) and fewer distortion patterns (p = .022, d = 1.03). There were no significant differences in substitution, typical, and atypical error pattern use. Frequent omission error pattern use may reflect a more compromised linguistic system characterized by absent phonological representations for target sounds (see Shriberg et al., 2005). Research is required to examine the diagnostic potential of early frequent omission error pattern use in predicting later diagnoses of co-occurring SSD and LI and/or reading problems.
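A brief sketch of the group comparison described above: an independent samples t test together with Cohen's d, computed on hypothetical counts of omission error patterns rather than the study's data.

```python
# Independent samples t test and Cohen's d (pooled SD) on hypothetical counts.
import numpy as np
from scipy import stats

ssd_li = np.array([12, 15, 9, 14, 11, 13, 16, 10])    # hypothetical SSD + LI group
ssd_only = np.array([5, 7, 4, 6, 8, 5, 7, 6])          # hypothetical SSD-only group

t, p = stats.ttest_ind(ssd_li, ssd_only)

# Cohen's d using the pooled standard deviation of the two groups.
n1, n2 = len(ssd_li), len(ssd_only)
pooled_sd = np.sqrt(((n1 - 1) * ssd_li.var(ddof=1) +
                     (n2 - 1) * ssd_only.var(ddof=1)) / (n1 + n2 - 2))
d = (ssd_li.mean() - ssd_only.mean()) / pooled_sd
print(f"t = {t:.2f}, p = {p:.4f}, d = {d:.2f}")
```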
Multicenter Assessment of Gram Stain Error Rates.
Samuel, Linoj P; Balada-Llasat, Joan-Miquel; Harrington, Amanda; Cavagnolo, Robert
2016-06-01
Gram stains remain the cornerstone of diagnostic testing in the microbiology laboratory for the guidance of empirical treatment prior to availability of culture results. Incorrectly interpreted Gram stains may adversely impact patient care, and yet there are no comprehensive studies that have evaluated the reliability of the technique and there are no established standards for performance. In this study, clinical microbiology laboratories at four major tertiary medical care centers evaluated Gram stain error rates across all nonblood specimen types by using standardized criteria. The study focused on several factors that primarily contribute to errors in the process, including poor specimen quality, smear preparation, and interpretation of the smears. The number of specimens during the evaluation period ranged from 976 to 1,864 specimens per site, and there were a total of 6,115 specimens. Gram stain results were discrepant from culture for 5% of all specimens. Fifty-eight percent of discrepant results were specimens with no organisms reported on Gram stain but significant growth on culture, while 42% of discrepant results had reported organisms on Gram stain that were not recovered in culture. Upon review of available slides, 24% (63/263) of discrepant results were due to reader error, which varied significantly based on site (9% to 45%). The Gram stain error rate also varied between sites, ranging from 0.4% to 2.7%. The data demonstrate a significant variability between laboratories in Gram stain performance and affirm the need for ongoing quality assessment by laboratories. Standardized monitoring of Gram stains is an essential quality control tool for laboratories and is necessary for the establishment of a quality benchmark across laboratories. Copyright © 2016, American Society for Microbiology. All Rights Reserved.
Matter power spectrum and the challenge of percent accuracy
DOE Office of Scientific and Technical Information (OSTI.GOV)
Schneider, Aurel; Teyssier, Romain; Potter, Doug
2016-04-01
Future galaxy surveys require one percent precision in the theoretical knowledge of the power spectrum over a large range including very nonlinear scales. While this level of accuracy is easily obtained in the linear regime with perturbation theory, it represents a serious challenge for small scales where numerical simulations are required. In this paper we quantify the precision of present-day N-body methods, identifying main potential error sources from the set-up of initial conditions to the measurement of the final power spectrum. We directly compare three widely used N-body codes, Ramses, Pkdgrav3, and Gadget3, which represent three main discretisation techniques: the particle-mesh method, the tree method, and a hybrid combination of the two. For standard run parameters, the codes agree to within one percent at k ≤ 1 h Mpc⁻¹ and to within three percent at k ≤ 10 h Mpc⁻¹. We also consider the bispectrum and show that the reduced bispectra agree at the sub-percent level for k ≤ 2 h Mpc⁻¹. In a second step, we quantify potential errors due to initial conditions, box size, and resolution using an extended suite of simulations performed with our fastest code Pkdgrav3. We demonstrate that the simulation box size should not be smaller than L = 0.5 h⁻¹ Gpc to avoid systematic finite-volume effects (while much larger boxes are required to beat down the statistical sample variance). Furthermore, a maximum particle mass of Mp = 10⁹ h⁻¹ M⊙ is required to conservatively obtain one percent precision of the matter power spectrum. As a consequence, numerical simulations covering large survey volumes of upcoming missions such as DES, LSST, and Euclid will need more than a trillion particles to reproduce clustering properties at the targeted accuracy.
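The code-agreement statements above reduce to a fractional-difference comparison between measured power spectra. The sketch below illustrates that check with placeholder spectra; P1 and P2 are synthetic, not output of the codes named above.

```python
# Fractional difference between two matter power spectra, checked against a
# one percent tolerance for k <= 1 h/Mpc (placeholder spectra).
import numpy as np

k = np.logspace(-2, 1, 50)                          # wavenumber in h/Mpc
P1 = 1e4 * k / (1.0 + (k / 0.1) ** 3)               # placeholder P(k) from "code 1"
P2 = P1 * (1.0 + 0.005 * np.sin(5 * np.log10(k)))   # "code 2", differing at ~0.5 percent

frac_diff = P2 / P1 - 1.0
mask = k <= 1.0
print("max |P2/P1 - 1| for k <= 1 h/Mpc:", np.abs(frac_diff[mask]).max())
assert np.all(np.abs(frac_diff[mask]) < 0.01), "exceeds one percent tolerance"
```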
Failure analysis and modeling of a multicomputer system. M.S. Thesis
NASA Technical Reports Server (NTRS)
Subramani, Sujatha Srinivasan
1990-01-01
This thesis describes the results of an extensive measurement-based analysis of real error data collected from a 7-machine DEC VaxCluster multicomputer system. In addition to evaluating basic system error and failure characteristics, we develop reward models to analyze the impact of failures and errors on the system. The results show that, although 98 percent of errors in the shared resources recover, they result in 48 percent of all system failures. The analysis of rewards shows that the expected reward rate for the VaxCluster decreases to 0.5 in 100 days for a 3-out-of-7 model, which is well over 100 times that for a 7-out-of-7 model. A comparison of the reward rates for a range of k-out-of-n models indicates that the maximum increase in reward rate (0.25) occurs in going from the 6-out-of-7 model to the 5-out-of-7 model. The analysis also shows that software errors have the lowest reward (0.2 vs. 0.91 for network errors). The large loss in reward rate for software errors is due to the fact that a large proportion (94 percent) of software errors lead to failure. In comparison, the high reward rate for network errors is due to fast recovery from a majority of these errors (median recovery duration is 0 seconds).
Grant, R. Stephen; Skavroneck, Steven
1980-01-01
The top five ranking predictive equations were as follows: Tsivoglou-Neal with 18 percent mean error, Negulescu-Rojanski with 21 percent, Padden-Gloyna with 23 percent, Thackston-Krenkel with 29 percent, and Bansal with 32 percent. (USGS).
NASA Technical Reports Server (NTRS)
Lockwood, G. W.; Tueg, H.; White, N. M.
1992-01-01
By imaging sunlight diffracted by 20- and 30-micron diameter pinholes onto the entrance aperture of a photoelectric grating scanner, the solar spectral irradiance was determined relative to the spectrophotometric standard star Vega, observed at night with the same instrument. Solar irradiances are tabulated at 4 A increments from 3295 A to 8500 A. Over most of the visible spectrum, the internal error of measurement is less than 2 percent. This calibration is compared with earlier irradiance measurements by Neckel and Labs (1984) and by Arvesen et al. (1969) and with the high-resolution solar atlas by Kurucz et al. The three calibrations agree well in visible light but differ by as much as 10 percent in the ultraviolet.
Parrett, Charles; Johnson, D.R.; Hull, J.A.
1989-01-01
Estimates of streamflow characteristics (monthly mean flow that is exceeded 90, 80, 50, and 20 percent of the time for all years of record and mean monthly flow) were made and are presented in tabular form for 312 sites in the Missouri River basin in Montana. Short-term gaged records were extended to the base period of water years 1937-86, and were used to estimate monthly streamflow characteristics at 100 sites. Data from 47 gaged sites were used in regression analysis relating the streamflow characteristics to basin characteristics and to active-channel width. The basin-characteristics equations, with standard errors of 35% to 97%, were used to estimate streamflow characteristics at 179 ungaged sites. The channel-width equations, with standard errors of 36% to 103%, were used to estimate characteristics at 138 ungaged sites. Streamflow measurements were correlated with concurrent streamflows at nearby gaged sites to estimate streamflow characteristics at 139 ungaged sites. In a test using 20 pairs of gages, the standard errors ranged from 31% to 111%. At 139 ungaged sites, the estimates from two or more of the methods were weighted and combined in accordance with the variance of individual methods. When estimates from three methods were combined, the standard errors ranged from 24% to 63%. A drainage-area-ratio adjustment method was used to estimate monthly streamflow characteristics at seven ungaged sites. The reliability of the drainage-area-ratio adjustment method was estimated to be about equal to that of the basin-characteristics method. The estimates were checked for reliability. Estimates of monthly streamflow characteristics from gaged records were considered to be most reliable, and estimates at sites with actual flow record from 1937-86 were considered to be completely reliable (zero error). Weighted-average estimates were considered to be the most reliable estimates made at ungaged sites. (USGS)
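The weighted combination of estimates "in accordance with the variance of individual methods" is, in essence, inverse-variance weighting. A minimal sketch follows; the three estimates and their variances are hypothetical, and the exact weighting scheme used in the report may differ in detail.

```python
# Inverse-variance weighting of estimates from several methods (hypothetical values).
import numpy as np

estimates = np.array([120.0, 150.0, 100.0])     # cfs from three methods (hypothetical)
variances = np.array([900.0, 2500.0, 1600.0])   # variance of each estimate (hypothetical)

weights = 1.0 / variances
combined = np.sum(weights * estimates) / np.sum(weights)
combined_variance = 1.0 / np.sum(weights)        # variance of the combined estimate

print(f"weighted estimate: {combined:.1f} cfs")
print(f"variance of weighted estimate: {combined_variance:.1f}")
```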
78 FR 77399 - Basic Health Program: Proposed Federal Funding Methodology for Program Year 2015
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-23
... American Indians and Alaska Natives F. Example Application of the BHP Funding Methodology III. Collection... effectively 138 percent due to the application of a required 5 percent income disregard in determining the... correct errors in applying the methodology (such as mathematical errors). Under section 1331(d)(3)(ii) of...
Evaluating Equating Results: Percent Relative Error for Chained Kernel Equating
ERIC Educational Resources Information Center
Jiang, Yanlin; von Davier, Alina A.; Chen, Haiwen
2012-01-01
This article presents a method for evaluating equating results. Within the kernel equating framework, the percent relative error (PRE) for chained equipercentile equating was computed under the nonequivalent groups with anchor test (NEAT) design. The method was applied to two data sets to obtain the PRE, which can be used to measure equating…
NASA Technical Reports Server (NTRS)
Otterson, D. A.; Seng, G. T.
1984-01-01
A new high-performance liquid chromatographic (HPLC) method for group-type analysis of middistillate fuels is described. It uses a refractive index detector and standards that are prepared by reacting a portion of the fuel sample with sulfuric acid. A complete analysis of a middistillate fuel for saturates and aromatics (including the preparation of the standard) requires about 15 min if standards for several fuels are prepared simultaneously. From model fuel studies, the method was found to be accurate to within 0.4 vol% saturates or aromatics, and provides a precision of ±0.4 vol%. Olefin determinations require an additional 15 min of analysis time. However, this determination is needed only for those fuels displaying a significant olefin response at 200 nm (obtained routinely during the saturates/aromatics analysis procedure). The olefin determination uses the responses of the olefins and the corresponding saturates, as well as the average value of their refractive index sensitivity ratios (1.1). Studies indicated that, although the relative error in the olefin result could reach 10 percent by using this average sensitivity ratio, it was 5 percent for the fuels used in this study. Olefin concentrations as low as 0.1 vol% have been determined using this method.
NASA Technical Reports Server (NTRS)
Hollyday, E. F. (Principal Investigator)
1975-01-01
The author has identified the following significant results. Streamflow characteristics in the Delmarva Peninsula derived from the records of daily discharge of 20 gaged basins are representative of the full range in flow conditions and include all of those commonly used for design or planning purposes. They include annual flood peaks with recurrence intervals of 2, 5, 10, 25, and 50 years, mean annual discharge, standard deviation of the mean annual discharge, mean monthly discharges, standard deviation of the mean monthly discharges, low-flow characteristics, flood volume characteristics, and the discharge equalled or exceeded 50 percent of the time. Streamflow and basin characteristics were related by a technique of multiple regression using a digital computer. A control group of equations was computed using basin characteristics derived from maps and climatological records. An experimental group of equations was computed using basin characteristics derived from LANDSAT imagery as well as from maps and climatological records. Based on a reduction in standard error of estimate equal to or greater than 10 percent, the equations for 12 streamflow characteristics were substantially improved by adding to the analyses basin characteristics derived from LANDSAT imagery.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Morley, Steven
The PyForecastTools package provides Python routines for calculating metrics for model validation, forecast verification and model comparison. For continuous predictands the package provides functions for calculating bias (mean error, mean percentage error, median log accuracy, symmetric signed bias), and for calculating accuracy (mean squared error, mean absolute error, mean absolute scaled error, normalized RMSE, median symmetric accuracy). Convenience routines to calculate the component parts (e.g. forecast error, scaled error) of each metric are also provided. To compare models the package provides: generic skill score; percent better. Robust measures of scale including median absolute deviation, robust standard deviation, robust coefficient of variation and the Sn estimator are all provided by the package. Finally, the package implements Python classes for NxN contingency tables. In the case of a multi-class prediction, accuracy and skill metrics such as proportion correct and the Heidke and Peirce skill scores are provided as object methods. The special case of a 2x2 contingency table inherits from the NxN class and provides many additional metrics for binary classification: probability of detection, probability of false detection, false alarm ratio, threat score, equitable threat score, bias. Confidence intervals for many of these quantities can be calculated using either the Wald method or Agresti-Coull intervals.
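For illustration, here are plain-numpy versions of two of the accuracy metrics named above (median symmetric accuracy and mean absolute scaled error), written from their standard definitions; these are standalone sketches, not calls into the PyForecastTools API, and the sample arrays are hypothetical.

```python
# Standalone sketches of two forecast-verification metrics (not the package API).
import numpy as np

def median_symmetric_accuracy(observed, predicted):
    """Median symmetric accuracy in percent, based on log accuracy ratios."""
    log_ratio = np.log(np.asarray(predicted, dtype=float) /
                       np.asarray(observed, dtype=float))
    return 100.0 * (np.exp(np.median(np.abs(log_ratio))) - 1.0)

def mean_absolute_scaled_error(observed, predicted):
    """MASE: forecast MAE scaled by the MAE of a one-step persistence forecast."""
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    mae = np.mean(np.abs(predicted - observed))
    naive_mae = np.mean(np.abs(np.diff(observed)))   # in-sample persistence error
    return mae / naive_mae

obs = np.array([3.0, 4.5, 5.0, 6.2, 7.1])
pred = np.array([2.8, 5.0, 4.6, 6.5, 7.5])
print(median_symmetric_accuracy(obs, pred), mean_absolute_scaled_error(obs, pred))
```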
Classifying multispectral data by neural networks
NASA Technical Reports Server (NTRS)
Telfer, Brian A.; Szu, Harold H.; Kiang, Richard K.
1993-01-01
Several energy functions for synthesizing neural networks are tested on 2-D synthetic data and on Landsat-4 Thematic Mapper data. These new energy functions, designed specifically for minimizing misclassification error, in some cases yield significant improvements in classification accuracy over the standard least mean squares energy function. In addition to operating on networks with one output unit per class, a new energy function is tested for binary encoded outputs, which result in smaller network sizes. The Thematic Mapper data (four bands were used) are classified on a single-pixel basis, to provide a starting benchmark against which further improvements will be measured. Improvements are underway to make use of both subpixel and superpixel (i.e., contextual or neighborhood) information in the processing. For single-pixel classification, the best neural network result is 78.7 percent, compared with 71.7 percent for a classical nearest neighbor classifier. The 78.7 percent result also improves on several earlier neural network results on this data.
Ries, Kernell G.; Eng, Ken
2010-01-01
The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. This method can be used with as few as one pair of streamflow measurements made on a single streamflow recession at the low-flow partial-record station, although additional pairs of measurements will increase the accuracy of the estimates. Errors associated with the two correlation methods generally were lower than the errors associated with the flow-ratio method, but the advantages of the flow-ratio method are that it can produce reasonably accurate estimates from streamflow measurements much faster and at lower cost than estimates obtained using the correlation methods. The two index-streamgage selection methods were (1) selection based on the highest correlation coefficient between the low-flow partial-record station and the index streamgages, and (2) selection based on Euclidean distance, where the Euclidean distance was computed as a function of geographic proximity and the basin characteristics: drainage area, percentage of forested area, percentage of impervious area, and the base-flow recession time constant, t. Method 1 generally selected index streamgages that were significantly closer to the low-flow partial-record stations than method 2. The errors associated with the estimated streamflow statistics generally were lower for method 1 than for method 2, but the differences were not statistically significant. The flow-ratio method for estimating streamflow statistics at low-flow partial-record stations was shown to be independent from the two correlation-based estimation methods. 
As a result, final estimates were determined for eight low-flow partial-record stations by weighting estimates from the flow-ratio method with estimates from one of the two correlation methods according to the respective variances of the estimates. Average standard errors of estimate for the final estimates ranged from 7.0 to 90.0 percent, with an average value of 26.5 percent. Average standard errors of estimate for the weighted estimates were, on average, 4.3 percent less than the best average standard errors of estimate.
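The flow-ratio method described above scales a statistic from the index streamgage by the average ratio of measured flows at the partial-record station to concurrent flows at the index streamgage. A minimal sketch follows; all numbers are hypothetical.

```python
# Flow-ratio transfer of a low-flow statistic from an index streamgage to a
# partial-record station (hypothetical measurements).
import numpy as np

partial_record_measurements = np.array([2.1, 1.6, 0.9])   # cfs at partial-record station
concurrent_index_flows = np.array([10.5, 8.2, 4.3])        # cfs at index streamgage

mean_ratio = np.mean(partial_record_measurements / concurrent_index_flows)

index_7q10 = 3.4   # hypothetical 7-day, 10-year low flow at the index streamgage (cfs)
estimated_7q10 = mean_ratio * index_7q10
print(f"flow-ratio estimate of 7Q10 at the partial-record station: {estimated_7q10:.2f} cfs")
```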
NASA Technical Reports Server (NTRS)
Chang, Alfred T. C.; Chiu, Long S.; Wilheit, Thomas T.
1993-01-01
Global averages and random errors associated with the monthly oceanic rain rates derived from the Special Sensor Microwave/Imager (SSM/I) data using the technique developed by Wilheit et al. (1991) are computed. Accounting for the beam-filling bias, a global annual average rain rate of 1.26 m is computed. The error estimation scheme is based on the existence of independent (morning and afternoon) estimates of the monthly mean. Calculations show overall random errors of about 50-60 percent for each 5 deg x 5 deg box. The results are insensitive to different sampling strategies (odd and even days of the month). Comparison of the SSM/I estimates with raingage data collected at the Pacific atoll stations showed a low bias of about 8 percent, a correlation of 0.7, and an rms difference of 55 percent.
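The idea behind the error scheme is that two independent, unbiased estimates of the same monthly mean differ by an amount whose variance is twice the variance of a single estimate. A sketch of that calculation, with hypothetical grid-box values, is shown below; it illustrates the principle rather than reproducing the paper's exact procedure.

```python
# Random-error estimate from paired independent (AM/PM) monthly means
# for a set of grid boxes (hypothetical values, mm/day).
import numpy as np

am = np.array([3.2, 1.1, 4.8, 0.6, 2.9])   # morning-orbit monthly means
pm = np.array([2.7, 1.4, 4.1, 0.9, 3.3])   # afternoon-orbit monthly means

# Var(am - pm) = 2 * sigma^2 for independent estimates with equal variance.
single_estimate_sigma = np.std(am - pm, ddof=1) / np.sqrt(2.0)

combined_mean = 0.5 * (am + pm)                       # averaged monthly estimate
combined_sigma = single_estimate_sigma / np.sqrt(2.0) # error of the average
relative_error = combined_sigma / combined_mean.mean()
print(f"random error of the combined monthly mean: {100 * relative_error:.0f} percent")
```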
NASA Astrophysics Data System (ADS)
González-Jorge, Higinio; Riveiro, Belén; Varela, María; Arias, Pedro
2012-07-01
A low-cost image orthorectification tool based on the utilization of compact cameras and scale bars is developed to obtain the main geometric parameters of masonry bridges for inventory and routine inspection purposes. The technique is validated in three different bridges by comparison with laser scanning data. The surveying process is very delicate and must balance working distance and angle. Three different cameras are used in the study to establish the relationship between the error and the camera model. Results show that the error does not depend on the length of the bridge element, the type of bridge, or the type of element. Error values for all the cameras are below 4 percent (95 percent of the data). A compact Canon camera, the model with the best technical specifications, shows an error level ranging from 0.5 to 1.5 percent.
Refractive errors in medical students in Singapore.
Woo, W W; Lim, K A; Yang, H; Lim, X Y; Liew, F; Lee, Y S; Saw, S M
2004-10-01
Refractive errors are becoming more of a problem in many societies, with prevalence rates of myopia in many Asian urban countries reaching epidemic proportions. This study aims to determine the prevalence rates of various refractive errors in Singapore medical students. 157 second year medical students (aged 19-23 years) in Singapore were examined. Refractive error measurements were determined using a stand-alone autorefractor. Additional demographical data was obtained via questionnaires filled in by the students. The prevalence rate of myopia in Singapore medical students was 89.8 percent (spherical equivalent (SE) of at least -0.50 D). Hyperopia was present in 1.3 percent (SE more than +0.50 D) of the participants and the overall astigmatism prevalence rate was 82.2 percent (cylinder of at least 0.50 D). Prevalence rates of myopia and astigmatism in second year Singapore medical students are among the highest in the world.
DAT/SERT Selectivity of Flexible GBR 12909 Analogs Modeled Using 3D-QSAR Methods
Gilbert, Kathleen M.; Boos, Terrence L.; Dersch, Christina M.; Greiner, Elisabeth; Jacobson, Arthur E.; Lewis, David; Matecka, Dorota; Prisinzano, Thomas E.; Zhang, Ying; Rothman, Richard B.; Rice, Kenner C.; Venanzi, Carol A.
2007-01-01
The dopamine reuptake inhibitor GBR 12909 (1-{2-[bis(4-fluorophenyl)methoxy]ethyl}-4-(3-phenylpropyl)piperazine, 1) and its analogs have been developed as tools to test the hypothesis that selective dopamine transporter (DAT) inhibitors will be useful therapeutics for cocaine addiction. This 3D-QSAR study focuses on the effect of substitutions in the phenylpropyl region of 1. CoMFA and CoMSIA techniques were used to determine a predictive and stable model for the DAT/serotonin transporter (SERT) selectivity (represented by pKi (DAT/SERT)) of a set of flexible analogs of 1, most of which have eight rotatable bonds. In the absence of a rigid analog to use as a 3D-QSAR template, six conformational families of analogs were constructed from six pairs of piperazine and piperidine template conformers identified by hierarchical clustering as representative molecular conformations. Three models stable to y-value scrambling were identified after a comprehensive CoMFA and CoMSIA survey with Region Focusing. Test set correlation validation led to an acceptable model, with q2 = 0.508, standard error of prediction = 0.601, two components, r2 = 0.685, standard error of estimate = 0.481, F value = 39, percent steric contribution = 65, and percent electrostatic contribution = 35. A CoMFA contour map identified areas of the molecule that affect pKi (DAT/SERT). This work outlines a protocol for deriving a stable and predictive model of the biological activity of a set of very flexible molecules. PMID:17127069
Estimation of Magnitude and Frequency of Floods for Streams on the Island of Oahu, Hawaii
Wong, Michael F.
1994-01-01
This report describes techniques for estimating the magnitude and frequency of floods for the island of Oahu. The log-Pearson Type III distribution and methodology recommended by the Interagency Committee on Water Data was used to determine the magnitude and frequency of floods at 79 gaging stations that had 11 to 72 years of record. Multiple regression analysis was used to construct regression equations to transfer the magnitude and frequency information from gaged sites to ungaged sites. Oahu was divided into three hydrologic regions to define relations between peak discharge and drainage-basin and climatic characteristics. Regression equations are provided to estimate the 2-, 5-, 10-, 25-, 50-, and 100-year peak discharges at ungaged sites. Significant basin and climatic characteristics included in the regression equations are drainage area, median annual rainfall, and the 2-year, 24-hour rainfall intensity. Drainage areas for sites used in this study ranged from 0.03 to 45.7 square miles. Standard error of prediction for the regression equations ranged from 34 to 62 percent. Peak-discharge data collected through water year 1988, geographic information system (GIS) technology, and generalized least-squares regression were used in the analyses. The use of GIS seems to be a more flexible and consistent means of defining and calculating basin and climatic characteristics than using manual methods. Standard errors of estimate for the regression equations in this report are an average of 8 percent less than those published in previous studies.
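As a rough illustration of the log-Pearson Type III step referenced above, the sketch below fits the distribution to a hypothetical annual peak series by the method of moments on the log-transformed peaks, in the spirit of (but without the outlier and regional-skew adjustments of) the Bulletin 17B procedure.

```python
# Log-Pearson Type III flood-frequency sketch (hypothetical annual peaks, cfs).
import numpy as np
from scipy import stats

annual_peaks_cfs = np.array([820, 1450, 640, 2300, 980, 1770, 540, 3100,
                             1220, 890, 1600, 700, 2050, 1150, 1340])

logq = np.log10(annual_peaks_cfs)
mean, std = logq.mean(), logq.std(ddof=1)
skew = stats.skew(logq, bias=False)

for T in (2, 10, 100):
    p_nonexceed = 1.0 - 1.0 / T
    K = stats.pearson3.ppf(p_nonexceed, skew)   # frequency factor for this skew
    qT = 10 ** (mean + K * std)
    print(f"{T}-year peak discharge: {qT:.0f} cfs")
```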
SEL's Software Process-Improvement Program
NASA Technical Reports Server (NTRS)
Basili, Victor; Zelkowitz, Marvin; McGarry, Frank; Page, Jerry; Waligora, Sharon; Pajerski, Rose
1995-01-01
The goals and operations of the Software Engineering Laboratory (SEL) are reviewed. For nearly 20 years the SEL has worked to understand, assess, and improve software and the development process within the production environment of the Flight Dynamics Division (FDD) of NASA's Goddard Space Flight Center. The SEL was established in 1976 with the goals of reducing: (1) the defect rate of delivered software, (2) the cost of software to support flight projects, and (3) the average time to produce mission-support software. After studying over 125 projects of FDD, the results have guided the standards, management practices, technologies, and the training within the division. The results of the studies have been a 75 percent reduction in defects, a 50 percent reduction in cost, and a 25 percent reduction in development time. Over time the goals of SEL have been clarified. The goals are now stated as: (1) Understand baseline processes and product characteristics, (2) Assess improvements that have been incorporated into the development projects, (3) Package and infuse improvements into the standard SEL process. The SEL improvement goal is to demonstrate continual improvement of the software process by carrying out analysis, measurement, and feedback to projects within the FDD environment. The SEL supports the understanding of the process by study of several processes including the effort distribution and error detection rates. The SEL assesses and refines the processes. Once the assessment and refinement of a process is completed, the SEL packages the process by capturing the process in standards, tools, and training.
Castrillon, Juliana; Huston, Wilhelmina; Bengtson Nash, Susan
2017-07-01
The ability to accurately evaluate the energetic health of wildlife is of critical importance, particularly under conditions of environmental change. Despite the relevance of this issue, currently there are no reliable, standardized, nonlethal measures to assess the energetic reserves of large, free-roaming marine mammals such as baleen whales. This study investigated the potential of adipocyte area analysis and further, a standardized adipocyte index (AI), to yield reliable information regarding humpback whale (Megaptera novaeangliae) adiposity. Adipocyte area and AI, as ascertained by image analysis, showed a direct correlation with each other but only a weak correlation with the commonly used, but error prone, blubber lipid-percent measure. The relative power of the three respective measures was further evaluated by comparing humpback whale cohorts at different stages of migration and fasting. Adipocyte area, AI, and blubber lipid-percent were assessed by binary logistic regression revealing that adipocyte area had the greatest probability to predict the migration cohort with a high level of redundancy attributed to the AI given their strong linear relationship (r = -.784). When only AI and lipid-percent were assessed, the performance of both predictor variables was significant but the power of AI far exceeded lipid-percent. The sensitivity of adipocyte metrics and the rapid, nonlethal, and inexpensive nature of the methodology and AI calculation validate the inclusion of the AI in long-term monitoring of humpback whale population health, and further raises its potential for broader wildlife applications.
NASA Technical Reports Server (NTRS)
Clinton, N. J. (Principal Investigator)
1980-01-01
Labeling errors made in the large area crop inventory experiment transition year estimates by Earth Observation Division image analysts are identified and quantified. The analysis was made from a subset of blind sites in six U.S. Great Plains states (Oklahoma, Kansas, Montana, Minnesota, North and South Dakota). Overall, the image interpretation was well done, resulting in a total omission error rate of 24 percent and a commission error rate of 4 percent. The largest amount of error was caused by factors beyond the control of the analysts, who were following the interpretation procedures. Odd signatures, the largest group of error causes, occurred mostly in areas of moisture abnormality. Multicrop labeling was tabulated, showing the distribution of labeling for all crops.
Analysis of the U.S. geological survey streamgaging network
Scott, A.G.
1987-01-01
This paper summarizes the results from the first 3 years of a 5-year cost-effectiveness study of the U.S. Geological Survey streamgaging network. The objective of the study is to define and document the most cost-effective means of furnishing streamflow information. In the first step of this study, data uses were identified for 3,493 continuous-record stations currently being operated in 32 States. In the second step, evaluation of alternative methods of providing streamflow information, flow-routing models, and regression models were developed for estimating daily flows at 251 stations of the 3,493 stations analyzed. In the third step of the analysis, relationships were developed between the accuracy of the streamflow records and the operating budget. The weighted standard error for all stations, with current operating procedures, was 19.9 percent. By altering field activities, as determined by the analyses, this could be reduced to 17.8 percent. The existing streamgaging networks in four Districts were further analyzed to determine the impacts that satellite telemetry would have on the cost effectiveness. Satellite telemetry was not found to be cost effective on the basis of hydrologic data collection alone, given present cost of equipment and operation.
Multiple regression technique for Pth degree polynomials with and without linear cross products
NASA Technical Reports Server (NTRS)
Davis, J. W.
1973-01-01
A multiple regression technique was developed by which the nonlinear behavior of specified independent variables can be related to a given dependent variable. The polynomial expression can be of Pth degree and can incorporate N independent variables. Two cases are treated such that mathematical models can be studied both with and without linear cross products. The resulting surface fits can be used to summarize trends for a given phenomenon and provide a mathematical relationship for subsequent analysis. To implement this technique, separate computer programs were developed for the case without linear cross products and for the case incorporating such cross products, which evaluate the various constants in the model regression equation. In addition, the significance of the estimated regression equation is considered, and the standard deviation, the F statistic, the maximum absolute percent error, and the average of the absolute values of the percent error are evaluated. The computer programs and their manner of utilization are described. Sample problems are included to illustrate the use and capability of the technique, which show the output formats and typical plots comparing computer results to each set of input data.
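A minimal sketch of the two model forms, for two independent variables and degree P = 2: one fit without cross products and one that adds the linear cross-product term x1*x2. The synthetic data and the error summary are illustrative only; the original programs were of course not written in Python.

```python
# Polynomial regression with and without the linear cross product (synthetic data).
import numpy as np

rng = np.random.default_rng(0)
x1 = rng.uniform(-1, 1, 60)
x2 = rng.uniform(-1, 1, 60)
y = 5.0 + 1.5 * x1 - 0.8 * x2 + 0.6 * x1**2 + 0.4 * x1 * x2 + rng.normal(0, 0.05, 60)

# Design matrix without cross products: 1, x1, x2, x1^2, x2^2
X_plain = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2])
# Design matrix with the linear cross product added: ..., x1*x2
X_cross = np.column_stack([X_plain, x1 * x2])

for name, X in (("no cross products", X_plain), ("with cross products", X_cross)):
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ coef
    max_abs_pct_err = 100 * np.max(np.abs(resid / y))
    print(f"{name}: max absolute percent error = {max_abs_pct_err:.2f}")
```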
Statistical analysis of AFE GN&C aeropass performance
NASA Technical Reports Server (NTRS)
Chang, Ho-Pen; French, Raymond A.
1990-01-01
Performance of the guidance, navigation, and control (GN&C) system used on the Aeroassist Flight Experiment (AFE) spacecraft has been studied with Monte Carlo techniques. The performance of the AFE GN&C is investigated with a 6-DOF numerical dynamic model which includes a Global Reference Atmospheric Model (GRAM) and a gravitational model with oblateness corrections. The study considers all the uncertainties due to the environment and the system itself. In the AFE's aeropass phase, perturbations on the system performance are caused by an error space which has over 20 dimensions of the correlated/uncorrelated error sources. The goal of this study is to determine, in a statistical sense, how much flight path angle error can be tolerated at entry interface (EI) and still have acceptable delta-V capability at exit to position the AFE spacecraft for recovery. Assuming there is fuel available to produce 380 ft/sec of delta-V at atmospheric exit, a 3-sigma standard deviation in flight path angle error of 0.04 degrees at EI would result in a 98-percent probability of mission success.
MISSE 2 PEACE Polymers Experiment Atomic Oxygen Erosion Yield Error Analysis
NASA Technical Reports Server (NTRS)
McCarthy, Catherine E.; Banks, Bruce A.; de Groh, Kim K.
2010-01-01
Atomic oxygen erosion of polymers in low Earth orbit (LEO) poses a serious threat to spacecraft performance and durability. To address this, 40 different polymer samples and a sample of pyrolytic graphite, collectively called the PEACE (Polymer Erosion and Contamination Experiment) Polymers, were exposed to the LEO space environment on the exterior of the International Space Station (ISS) for nearly 4 years as part of the Materials International Space Station Experiment 1 & 2 (MISSE 1 & 2). The purpose of the PEACE Polymers experiment was to obtain accurate mass loss measurements in space to combine with ground measurements in order to accurately calculate the atomic oxygen erosion yields of a wide variety of polymeric materials exposed to the LEO space environment for a long period of time. Error calculations were performed in order to determine the accuracy of the mass measurements and therefore of the erosion yield values. The standard deviation, or error, of each factor was incorporated into the fractional uncertainty of the erosion yield for each of three different situations, depending on the post-flight weighing procedure. The resulting error calculations showed the erosion yield values to be very accurate, with an average error of 3.30 percent.
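The fractional-uncertainty calculation described above amounts to combining the independent error sources in quadrature. The sketch below assumes an erosion yield of the form Ey = dM / (A * rho * F), mass loss over exposed area, density, and atomic oxygen fluence, which is the standard definition; the numerical values and uncertainties are hypothetical, not the MISSE results.

```python
# Quadrature error propagation for an atomic oxygen erosion yield (hypothetical values).
import math

dM, sigma_dM = 8.0e-3, 4.0e-5    # mass loss and its uncertainty (g)
A, sigma_A = 5.07, 0.01          # exposed area (cm^2)
rho, sigma_rho = 1.43, 0.01      # polymer density (g/cm^3)
F, sigma_F = 8.4e21, 2.5e20      # atomic oxygen fluence (atoms/cm^2)

Ey = dM / (A * rho * F)          # erosion yield (cm^3/atom)

frac_uncertainty = math.sqrt((sigma_dM / dM) ** 2 + (sigma_A / A) ** 2 +
                             (sigma_rho / rho) ** 2 + (sigma_F / F) ** 2)
print(f"Ey = {Ey:.3e} cm^3/atom, fractional error = {100 * frac_uncertainty:.2f} percent")
```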
Analysis of counting errors in the phase/Doppler particle analyzer
NASA Technical Reports Server (NTRS)
Oldenburg, John R.
1987-01-01
NASA is investigating the application of the Phase Doppler measurement technique to provide improved drop sizing and liquid water content measurements in icing research. The magnitude of counting errors was analyzed because these errors contribute to inaccurate liquid water content measurements. The Phase Doppler Particle Analyzer counting errors due to data transfer losses and coincidence losses were analyzed for data input rates from 10 samples/sec to 70,000 samples/sec. Coincidence losses were calculated by determining the Poisson probability of having more than one event occurring during the droplet signal time. The magnitude of the coincidence loss can be determined, and for less than a 15 percent loss, corrections can be made. The data transfer losses were estimated for representative data transfer rates. With direct memory access enabled, data transfer losses are less than 5 percent for input rates below 2000 samples/sec. With direct memory access disabled, losses exceeded 20 percent at a rate of 50 samples/sec, preventing accurate number density or mass flux measurements. The data transfer losses of a new signal processor were analyzed and found to be less than 1 percent for rates under 65,000 samples/sec.
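The coincidence-loss calculation above reduces to the Poisson probability of more than one event within a single droplet signal time. A short sketch follows; the signal duration is a hypothetical value chosen for illustration.

```python
# Poisson coincidence probability as a function of sample rate (hypothetical signal time).
import math

signal_time_s = 10e-6   # droplet signal duration (hypothetical, 10 microseconds)

for rate_per_s in (10, 1_000, 10_000, 50_000, 70_000):
    mu = rate_per_s * signal_time_s                      # expected events per signal time
    p_more_than_one = 1.0 - math.exp(-mu) * (1.0 + mu)   # P(N > 1) for Poisson(mu)
    print(f"{rate_per_s:>6} samples/s -> coincidence probability {p_more_than_one:.2e}")
```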
The efficacy of a novel mobile phone application for goldmann ptosis visual field interpretation.
Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P
2014-01-01
To evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. Experimental study in which the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent errors of the mobile phone application and of the oculoplastic surgeons' estimates were calculated relative to computer-software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 repeated times. The mobile phone application was associated with a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation. The average mean percent error of the oculoplastic surgeons' visual estimates was 19.75% (95% CI, 14.39%-25.11%). Oculoplastic surgeons, on average, underestimated the defect in all 10 Goldmann charts. There was high interobserver variance among oculoplastic surgeons. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%). The average time to process 1 chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was highly accurate, precise, and time-efficient in calculating the percent superior visual field defect using Goldmann charts. Oculoplastic surgeon visual interpretations were highly inaccurate, highly variable, and usually underestimated the field vision loss.
Methods for estimating selected low-flow frequency statistics for unregulated streams in Kentucky
Martin, Gary R.; Arihood, Leslie D.
2010-01-01
This report provides estimates of, and presents methods for estimating, selected low-flow frequency statistics for unregulated streams in Kentucky including the 30-day mean low flows for recurrence intervals of 2 and 5 years (30Q2 and 30Q5) and the 7-day mean low flows for recurrence intervals of 5, 10, and 20 years (7Q2, 7Q10, and 7Q20). Estimates of these statistics are provided for 121 U.S. Geological Survey streamflow-gaging stations with data through the 2006 climate year, which is the 12-month period ending March 31 of each year. Data were screened to identify the periods of homogeneous, unregulated flows for use in the analyses. Logistic-regression equations are presented for estimating the annual probability of the selected low-flow frequency statistics being equal to zero. Weighted-least-squares regression equations were developed for estimating the magnitude of the nonzero 30Q2, 30Q5, 7Q2, 7Q10, and 7Q20 low flows. Three low-flow regions were defined for estimating the 7-day low-flow frequency statistics. The explicit explanatory variables in the regression equations include total drainage area and the mapped streamflow-variability index measured from a revised statewide coverage of this characteristic. The percentage of the station low-flow statistics correctly classified as zero or nonzero by use of the logistic-regression equations ranged from 87.5 to 93.8 percent. The average standard errors of prediction of the weighted-least-squares regression equations ranged from 108 to 226 percent. The 30Q2 regression equations have the smallest standard errors of prediction, and the 7Q20 regression equations have the largest standard errors of prediction. The regression equations are applicable only to stream sites with low flows unaffected by regulation from reservoirs and local diversions of flow and to drainage basins in specified ranges of basin characteristics. Caution is advised when applying the equations for basins with characteristics near the applicable limits and for basins with karst drainage features.
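The logistic-regression step described above, estimating the probability that a low-flow statistic is zero from basin characteristics, can be sketched as follows. The basin values and the two predictors used here (drainage area and a streamflow-variability index) are hypothetical stand-ins, not the report's fitted equations.

```python
# Logistic regression for the probability of a zero low-flow statistic (hypothetical data).
import numpy as np
from sklearn.linear_model import LogisticRegression

drainage_area_mi2 = np.array([1.2, 3.5, 10.0, 25.0, 60.0, 150.0, 4.0, 80.0])
variability_index = np.array([0.90, 0.75, 0.60, 0.45, 0.35, 0.30, 0.85, 0.40])
is_zero_flow = np.array([1, 1, 0, 0, 0, 0, 1, 0])   # 1 = statistic equals zero

X = np.column_stack([np.log10(drainage_area_mi2), variability_index])
model = LogisticRegression().fit(X, is_zero_flow)

new_basin = np.array([[np.log10(8.0), 0.70]])
print("probability of zero 7Q10:", model.predict_proba(new_basin)[0, 1])
```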
Accelerator test of the coded aperture mask technique for gamma-ray astronomy
NASA Technical Reports Server (NTRS)
Jenkins, T. L.; Frye, G. M., Jr.; Owens, A.; Carter, J. N.; Ramsden, D.
1982-01-01
A prototype gamma-ray telescope employing the coded aperture mask technique has been constructed and its response to a point source of 20 MeV gamma-rays has been measured. The point spread function is approximately a Gaussian with a standard deviation of 12 arc minutes. This resolution is consistent with the cell size of the mask used and the spatial resolution of the detector. In the context of the present experiment, the error radius of the source position (90 percent confidence level) is 6.1 arc minutes.
Identification of driver errors : overview and recommendations
DOT National Transportation Integrated Search
2002-08-01
Driver error is cited as a contributing factor in most automobile crashes, and although estimates vary by source, driver error is cited as the principal cause of from 45 to 75 percent of crashes. However, the specific errors that lead to crashes, and...
NASA Technical Reports Server (NTRS)
Kwon, Jin H.; Lee, Ja H.
1989-01-01
The far-field beam pattern and the power-collection efficiency are calculated for a multistage laser-diode-array amplifier consisting of about 200,000 5-W laser diode arrays with random distributions of phase and orientation errors and random diode failures. From the numerical calculation it is found that the far-field beam pattern is little affected by random failures of up to 20 percent of the laser diodes, with reference to an 80 percent receiving efficiency in the central spot. Random phase differences among laser diodes due to probable manufacturing errors can be tolerated up to about 0.2 times the wavelength. The maximum allowable orientation error is about 20 percent of the diffraction angle of a single laser diode aperture (about 1 cm). The preliminary results indicate that the amplifier could be used for space beam-power transmission with an efficiency of about 80 percent for a moderate-size (3-m-diameter) receiver placed at a distance of less than 50,000 km.
Jiménez, Monik C; Sanders, Anne E; Mauriello, Sally M; Kaste, Linda M; Beck, James D
2014-08-01
Hispanics and Latinos are an ethnically heterogeneous population with distinct oral health risk profiles. Few study investigators have examined potential variation in the burden of periodontitis according to Hispanic or Latino background. The authors used a multicenter longitudinal population-based cohort study to examine the periodontal health status at screening (2008-2011) of 14,006 Hispanic and Latino adults, aged 18 to 74 years, from four U.S. communities who self-identified as Cuban, Dominican, Mexican, Puerto Rican, Central American or South American. The authors present weighted, age-standardized prevalence estimates and corrected standard errors of probing depth (PD), attachment loss (AL) and periodontitis classified according to the case definition established by the Centers for Disease Control and Prevention and the American Academy of Periodontology (CDC-AAP). The authors used a Wald χ² test to compare prevalence estimates across Hispanic or Latino background, age and sex. Fifty-one percent of all participants exhibited total periodontitis (mild, moderate or severe) per the CDC-AAP classification. Cubans and Central Americans exhibited the highest prevalence of moderate periodontitis (39.9 percent and 37.2 percent, respectively). Across all ages, Mexicans had the highest prevalence of PD across severity thresholds. Among those aged 18 through 44 years, Dominicans consistently had the lowest prevalence of AL at all severity thresholds. Measures of periodontitis varied significantly by age, sex and Hispanic or Latino background among the four sampled Hispanic Community Health Study/Study of Latinos communities. Further analyses are needed to account for lifestyle, behavioral, demographic and social factors, including those related to acculturation. Aggregating Hispanics and Latinos or using estimates from Mexicans may lead to substantial underestimation or overestimation of the burden of disease, thus leading to errors in the estimation of needed clinical and public health resources. This information will be useful in informing decisions from public health planning to patient-centered risk assessment.
Finding Blackbody Temperature and Emissivity on a Sub-Pixel Scale
NASA Astrophysics Data System (ADS)
Bernstein, D. J.; Bausell, J.; Grigsby, S.; Kudela, R. M.
2015-12-01
Surface temperature and emissivity provide important insight into the ecosystem being remotely sensed. Dozier (1981) proposed an algorithm to solve for percent coverage and temperatures of two different surface types (e.g. sea surface, cloud cover, etc.) within a given pixel, with a constant value for emissivity assumed. Here we build on Dozier (1981) by proposing an algorithm that solves for both temperature and emissivity of a water body within a satellite pixel by assuming known percent coverage of surface types within the pixel. Our algorithm generates thermal infrared (TIR) and emissivity end-member spectra for the two surface types. Our algorithm then superposes these end-member spectra on emissivity and TIR spectra emitted from four pixels with varying percent coverage of different surface types. The algorithm was tested preliminarily (48 iterations) using simulated pixels containing more than one surface type, with temperature and emissivity percent errors ranging from 0 to 1.071% and 2.516 to 15.311%, respectively [1]. We then tested the algorithm using an image from MASTER collected as part of the NASA Student Airborne Research Program (NASA SARP). Here the temperature of water was calculated to be within 0.22 K of in situ data. The algorithm calculated emissivity of water with an accuracy of 0.13 to 1.53% error for Salton Sea pixels from MASTER, also collected as part of NASA SARP. This method could improve retrievals for the HyspIRI sensor. [1] Percent error for emissivity was generated by averaging percent error across all selected band widths.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tan, Sirui, E-mail: siruitan@hotmail.com; Huang, Lianjie, E-mail: ljh@lanl.gov
For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial but not temporal dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil to achieve similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent to achieve similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
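As a simplified illustration of the dispersion analysis referred to above, the sketch below evaluates the numerical phase-velocity error of the standard second-order finite-difference scheme for the 1-D scalar wave equation, using the relation sin^2(w*dt/2) = (c*dt/dx)^2 * sin^2(k*dx/2). It is a one-dimensional, standard-stencil example only, not the paper's optimized multidimensional scheme, and the grid parameters are illustrative.

```python
# Numerical dispersion of the standard 2nd-order FD scheme for the 1-D wave equation.
import numpy as np

c = 2000.0    # wave speed (m/s)
dx = 10.0     # grid spacing (m)
dt = 0.001    # time step (s); Courant number c*dt/dx = 0.2
courant = c * dt / dx

k = np.linspace(0.01, np.pi / dx, 200)            # wavenumbers up to the grid Nyquist
omega = (2.0 / dt) * np.arcsin(courant * np.sin(k * dx / 2.0))
phase_velocity = omega / k
relative_error = phase_velocity / c - 1.0

print(f"max |relative phase-velocity error|: {100 * np.abs(relative_error).max():.2f} percent")
```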
Using an Integrative Approach To Teach Hebrew Grammar in an Elementary Immersion Class.
ERIC Educational Resources Information Center
Eckstein, Peter
The 12-week program described here was designed to improve a Hebrew language immersion class' ability to correctly use the simple past and present tenses. The target group was a sixth-grade class that achieved a 65.68 percent error-free rate on a pre-test; the project's objective was to achieve 90 percent error-free tests, using student…
Residual volume on land and when immersed in water: effect on percent body fat.
Demura, Shinichi; Yamaji, Shunsuke; Kitabayashi, Tamotsu
2006-08-01
There is a large residual volume (RV) error when assessing percent body fat by means of hydrostatic weighing. Residual volume has generally been measured before hydrostatic weighing. However, an individual's maximal exhalations on land and in the water may not be identical. The aims of this study were to compare residual volumes and vital capacities on land and when immersed to the neck in water, and to examine the influence of the measurement error on percent body fat. The participants were 20 healthy Japanese males and 20 healthy Japanese females. To assess the influence of the RV error on percent body fat in both conditions and to evaluate the cross-validity of the prediction equation, another 20 males and 20 females were measured using hydrostatic weighing. Residual volume was measured on land and in the water using a nitrogen wash-out technique based on an open-circuit approach. In water, residual volume was measured with the participant sitting on a chair while the whole body, except the head, was submerged. The trial-to-trial reliabilities of residual volume in both conditions were very good (intraclass correlation coefficient > 0.98). Although the residual volumes measured under the two conditions did not agree completely, they showed a high correlation (males: 0.880; females: 0.853; P < 0.05). The limits of agreement for residual volumes in both conditions using Bland-Altman plots were -0.430 to 0.508 litres. This range was larger than the trial-to-trial error of residual volume on land (-0.260 to 0.304 litres). Moreover, the relationship between percent body fat values computed using residual volume measured in the two conditions was very good for both sexes (males: r = 0.902; females: r = 0.869, P < 0.0001), and the errors were approximately -6 to 4% (limits of agreement for percent body fat: -3.4 to 2.2% for males; -6.3 to 4.4% for females). We conclude that if these errors are of no importance, residual volume measured on land can be used when assessing body composition.
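The practical consequence of an RV error can be illustrated with the standard hydrostatic-weighing body-density equation and the Siri two-component conversion (both textbook formulas, not taken from this paper); the sketch below propagates a 0.5-litre RV error, roughly the land-versus-water disagreement reported above, into percent body fat using illustrative masses.

```python
# Propagation of a residual-volume (RV) error into percent body fat estimated by
# hydrostatic weighing. All numbers are illustrative placeholders.

def percent_fat(mass_air_kg, mass_water_kg, water_density=0.9957, rv_l=1.3, gi_gas_l=0.1):
    # Body volume (L): displaced water volume minus residual lung volume and gut gas
    body_volume = (mass_air_kg - mass_water_kg) / water_density - rv_l - gi_gas_l
    density = mass_air_kg / body_volume          # body density in kg/L (= g/mL)
    return 495.0 / density - 450.0               # Siri (1961) two-component equation

mass_air, mass_water = 75.0, 3.2                 # mass in air and underwater mass (kg)
baseline = percent_fat(mass_air, mass_water, rv_l=1.3)
# A 0.5 L overestimate of RV, the order of the land-vs-water disagreement reported above
shifted = percent_fat(mass_air, mass_water, rv_l=1.8)
print(f"baseline {baseline:.1f}% fat; with a +0.5 L RV error {shifted:.1f}% fat "
      f"(shift {shifted - baseline:+.1f} percentage points)")
```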
Validity of body composition assessment methods for older men with cardiac disease.
Young, H; Porcari, J; Terry, L; Brice, G
1998-01-01
This study was designed to determine which of several body composition assessment methods was most accurate for patients with cardiac disease for the purpose of outcome measurement. Six body composition assessment methods were administered to each of 24 men with cardiac disease. Methods included circumference measurement, skinfold measurement, near-infrared interactance via the Futrex-5000, bioelectrical impedance via the BioAnalogics ElectroLipoGraph and Tanita TBF-150, and hydrostatic weighing, the criterion measure. A repeated measures analysis of variance indicated no significant (P > .05) difference between circumference and skinfold measurements compared to hydrostatic weighing. Near-infrared interactance presented the best standard error of estimates (3.5%) and the best correlation (r = .84) with hydrostatic weighing; however, the constant error was 3.76%. Bioelectrical impedance measured by the ElectroLipoGraph and TBF-150 instruments significantly underestimated percent body fat by 8.81% and 4.8%, respectively. In this study of middle-aged to older men with cardiac disease, the best method for determining body fat was circumferences. This technique was accurate, easy to administer, inexpensive, and had a lower error potential than the other techniques. Skinfold measurements were also closely related to hydrostatic weighing, but should be performed only by experienced practitioners because there is a greater potential for tester error in certain patients. In the future, near-infrared interactance measurements may be a viable technique for body composition assessment in patients with cardiac disease. However, algorithms specific to the population of patients with cardiac disease being tested must be developed before this technique can be routinely recommended for body composition assessment. Bioelectrical impedance assessment by either method is not recommended for patients with cardiac disease, as it consistently underestimated percent body fat when compared to hydrostatic weighing in this population.
Peak-flow frequency relations and evaluation of the peak-flow gaging network in Nebraska
Soenksen, Philip J.; Miller, Lisa D.; Sharpe, Jennifer B.; Watton, Jason R.
1999-01-01
Estimates of peak-flow magnitude and frequency are required for the efficient design of structures that convey flood flows or occupy floodways, such as bridges, culverts, and roads. The U.S. Geological Survey, in cooperation with the Nebraska Department of Roads, conducted a study to update peak-flow frequency analyses for selected streamflow-gaging stations, develop a new set of peak-flow frequency relations for ungaged streams, and evaluate the peak-flow gaging-station network for Nebraska. Data from stations located in or within about 50 miles of Nebraska were analyzed using guidelines of the Interagency Advisory Committee on Water Data in Bulletin 17B. New generalized skew relations were developed for use in frequency analyses of unregulated streams. Thirty-three drainage-basin characteristics related to morphology, soils, and precipitation were quantified using a geographic information system, related computer programs, and digital spatial data. For unregulated streams, eight sets of regional regression equations relating drainage-basin to peak-flow characteristics were developed for seven regions of the state using a generalized least squares procedure. Two sets of regional peak-flow frequency equations were developed for basins with average soil permeability greater than 4 inches per hour, and six sets of equations were developed for specific geographic areas, usually based on drainage-basin boundaries. Standard errors of estimate for the 100-year frequency equations (1-percent probability) ranged from 12.1 to 63.8 percent. For regulated reaches of nine streams, graphs of peak flow for standard frequencies and distance upstream of the mouth were estimated. The regional networks of streamflow-gaging stations on unregulated streams were analyzed to evaluate how additional data might affect the average sampling errors of the newly developed peak-flow equations for the 100-year frequency occurrence. Results indicated that data from new stations, rather than more data from existing stations, probably would produce the greatest reduction in average sampling errors of the equations.
Diffuse-flow conceptualization and simulation of the Edwards aquifer, San Antonio region, Texas
Lindgren, R.J.
2006-01-01
A numerical ground-water-flow model (hereinafter, the conduit-flow Edwards aquifer model) of the karstic Edwards aquifer in south-central Texas was developed for a previous study on the basis of a conceptualization emphasizing conduit development and conduit flow, and included simulating conduits as one-cell-wide, continuously connected features. Uncertainties regarding the degree to which conduits pervade the Edwards aquifer and influence ground-water flow, as well as other uncertainties inherent in simulating conduits, raised the question of whether a model based on the conduit-flow conceptualization was the optimum model for the Edwards aquifer. Accordingly, a model with an alternative hydraulic conductivity distribution without conduits was developed in a study conducted during 2004-05 by the U.S. Geological Survey, in cooperation with the San Antonio Water System. The hydraulic conductivity distribution for the modified Edwards aquifer model (hereinafter, the diffuse-flow Edwards aquifer model), based primarily on a conceptualization in which flow in the aquifer predominantly is through a network of numerous small fractures and openings, includes 38 zones, with hydraulic conductivities ranging from 3 to 50,000 feet per day. Revision of model input data for the diffuse-flow Edwards aquifer model was limited to changes in the simulated hydraulic conductivity distribution. The root-mean-square error for 144 target wells for the calibrated steady-state simulation for the diffuse-flow Edwards aquifer model is 20.9 feet. This error represents about 3 percent of the total head difference across the model area. The simulated springflows for Comal and San Marcos Springs for the calibrated steady-state simulation were within 2.4 and 15 percent of the median springflows for the two springs, respectively. The transient calibration period for the diffuse-flow Edwards aquifer model was 1947-2000, with 648 monthly stress periods, the same as for the conduit-flow Edwards aquifer model. The root-mean-square error for a period of drought (May-November 1956) for the calibrated transient simulation for 171 target wells is 33.4 feet, which represents about 5 percent of the total head difference across the model area. The root-mean-square error for a period of above-normal rainfall (November 1974-July 1975) for the calibrated transient simulation for 169 target wells is 25.8 feet, which represents about 4 percent of the total head difference across the model area. The root-mean-square error ranged from 6.3 to 30.4 feet in 12 target wells with long-term water-level measurements for varying periods during 1947-2000 for the calibrated transient simulation for the diffuse-flow Edwards aquifer model, and these errors represent 5.0 to 31.3 percent of the range in water-level fluctuations of each of those wells. The root-mean-square errors for the five major springs in the San Antonio segment of the aquifer for the calibrated transient simulation, as a percentage of the range of discharge fluctuations measured at the springs, varied from 7.2 percent for San Marcos Springs and 8.1 percent for Comal Springs to 28.8 percent for Leona Springs. The root-mean-square errors for hydraulic heads for the conduit-flow Edwards aquifer model are 27, 76, and 30 percent greater than those for the diffuse-flow Edwards aquifer model for the steady-state, drought, and above-normal rainfall synoptic time periods, respectively. 
The goodness-of-fit between measured and simulated springflows is similar for Comal, San Marcos, and Leona Springs for the diffuse-flow Edwards aquifer model and the conduit-flow Edwards aquifer model. The root-mean-square errors for Comal and Leona Springs were 15.6 and 21.3 percent less, respectively, whereas the root-mean-square error for San Marcos Springs was 3.3 percent greater for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. The root-mean-square errors for San Antonio and San Pedro Springs were appreciably greater, 80.2 and 51.0 percent, respectively, for the diffuse-flow Edwards aquifer model. The simulated water budgets for the diffuse-flow Edwards aquifer model are similar to those for the conduit-flow Edwards aquifer model. Differences in percentage of total sources or discharges for a budget component are 2.0 percent or less for all budget components for the steady-state and transient simulations. The largest difference in terms of the magnitude of water budget components for the transient simulation for 1956 was a decrease of about 10,730 acre-feet per year (about 2 percent) in springflow for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. This decrease in springflow (a water budget discharge) was largely offset by the decreased net loss of water from storage (a water budget source) of about 10,500 acre-feet per year.
Leach, Julia M; Mancini, Martina; Peterka, Robert J; Hayes, Tamara L; Horak, Fay B
2014-09-29
The Nintendo Wii balance board (WBB) has generated significant interest in its application as a postural control measurement device in both the clinical and research (basic, clinical, and rehabilitation) domains. Although the WBB has been proposed as an alternative to the "gold standard" laboratory-grade force plate, additional research is necessary before the WBB can be considered a valid and reliable center of pressure (CoP) measurement device. In this study, we used the WBB and a laboratory-grade AMTI force plate (AFP) to simultaneously measure the CoP displacement of a controlled dynamic load, which has not been done before. A one-dimensional inverted pendulum was displaced at several different displacement angles and load heights to simulate a variety of postural sway amplitudes and frequencies (<1 Hz). Twelve WBBs were tested to address the issue of inter-device variability. There was a significant effect of sway amplitude, frequency, and direction on the WBB's CoP measurement error, with an increase in error as both sway amplitude and frequency increased and a significantly greater error in the mediolateral (ML) than in the anteroposterior (AP) sway direction. There was no difference in error across the 12 WBBs, supporting low inter-device variability. A linear calibration procedure was then implemented to correct the WBB's CoP signals and reduce measurement error. There was a significant effect of calibration on the WBB's CoP signal accuracy, with a significant reduction in CoP measurement error (quantified by root-mean-squared error) from 2-6 mm (before calibration) to 0.5-2 mm (after calibration). WBB-based CoP signal calibration also significantly reduced the percent error in derived (time-domain) CoP sway measures, from -10.5% (before calibration) to -0.05% (after calibration) (percent errors averaged across all sway measures and in both sway directions). In this study, we characterized the WBB's CoP measurement error under controlled, dynamic conditions and implemented a linear calibration procedure for WBB CoP signals that is recommended to reduce CoP measurement error and provide more reliable estimates of time-domain CoP measures. Despite our promising results, additional work is necessary to understand how our findings translate to the clinical and rehabilitation research domains. Once the WBB's CoP measurement error is fully characterized in human postural sway (which differs from our simulated postural sway in both amplitude and frequency content), it may be used to measure CoP displacement in situations where lower accuracy and precision are acceptable.
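A linear calibration of the kind described can be sketched as a per-direction gain-and-offset fit of the board's CoP trace to the force-plate trace, then applied to correct subsequent board signals; the signals below are synthetic placeholders standing in for one pendulum trial, not the study's recordings.

```python
import numpy as np

# Simultaneously recorded CoP traces (mm) from the balance board and the force plate.
# Placeholder signals; in practice these come from the synchronized WBB and AMTI data.
rng = np.random.default_rng(0)
t = np.linspace(0, 30, 3000)
cop_true = 40 * np.sin(2 * np.pi * 0.4 * t)                    # force-plate (reference) CoP
cop_wbb = 1.08 * cop_true + 3.0 + rng.normal(0, 1.5, t.size)   # board with gain/offset error

# Linear calibration: least-squares fit of reference = gain * board + offset
gain, offset = np.polyfit(cop_wbb, cop_true, 1)
cop_corrected = gain * cop_wbb + offset

rmse = lambda a, b: np.sqrt(np.mean((a - b) ** 2))
print(f"RMSE before calibration: {rmse(cop_wbb, cop_true):.2f} mm")
print(f"RMSE after  calibration: {rmse(cop_corrected, cop_true):.2f} mm")
```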
NASA Technical Reports Server (NTRS)
Ranaudo, R. J.; Batterson, J. G.; Reehorst, A. L.; Bond, T. H.; Omara, T. M.
1989-01-01
A flight test was performed with the NASA Lewis Research Center's DH-6 icing research aircraft. The purpose was to employ a flight test procedure and data analysis method to determine the accuracy with which the effects of ice on aircraft stability and control could be measured. For simplicity, flight testing was restricted to the short period longitudinal mode. Two flights were flown in a clean (baseline) configuration, and two flights were flown with simulated horizontal tail ice. Forty-five repeat doublet maneuvers were performed in each of four test configurations, at a given trim speed, to determine the ensemble variation of the estimated stability and control derivatives. Additional maneuvers were also performed in each configuration to determine the variation in the longitudinal derivative estimates over a wide range of trim speeds. Stability and control derivatives were estimated by a Modified Stepwise Regression (MSR) technique. A measure of the confidence in the derivative estimates was obtained by comparing the standard error for the ensemble of repeat maneuvers to the average of the estimated standard errors predicted by the MSR program. A multiplicative relationship was determined between the ensemble standard error and the averaged program standard errors. In addition, a 95 percent confidence interval analysis was performed for the elevator effectiveness estimates, C_m_δe. This analysis identified the speed range where changes in C_m_δe could be attributed to icing effects. The magnitude of icing effects on the derivative estimates was strongly dependent on flight speed and aircraft wing flap configuration. With wing flaps up, the estimated derivatives were degraded most at lower speeds corresponding to that configuration. With wing flaps extended to 10 degrees, the estimated derivatives were degraded most at the higher corresponding speeds. The effects of icing on the changes in longitudinal stability and control derivatives were adequately determined by the flight test procedure and the MSR analysis method discussed herein.
Kumar, K Vasanth; Porkodi, K; Rocha, F
2008-01-15
A comparison of linear and non-linear regression methods in selecting the optimum isotherm was made for the experimental equilibrium data of basic red 9 sorption by activated carbon. The r(2) was used to select the best-fit linear theoretical isotherm. In the case of the non-linear regression method, six error functions, namely the coefficient of determination (r(2)), hybrid fractional error function (HYBRID), Marquardt's percent standard deviation (MPSD), average relative error (ARE), sum of the errors squared (ERRSQ), and sum of the absolute errors (EABS), were used to predict the parameters involved in the two- and three-parameter isotherms and also to predict the optimum isotherm. Non-linear regression was found to be a better way to obtain the parameters involved in the isotherms and also the optimum isotherm. For the two-parameter isotherms, MPSD was found to be the best error function in minimizing the error distribution between the experimental equilibrium data and predicted isotherms. In the case of the three-parameter isotherms, r(2) was found to be the best error function to minimize the error distribution structure between experimental equilibrium data and theoretical isotherms. The present study showed that the size of the error function alone is not a deciding factor in choosing the optimum isotherm. In addition to the size of the error function, the theory behind the predicted isotherm should be verified with the help of experimental data while selecting the optimum isotherm. A coefficient of non-determination, K(2), was explained and was found to be very useful in identifying the best error function while selecting the optimum isotherm.
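A sketch of the non-linear approach for a two-parameter (Langmuir) isotherm, with two of the error functions named above (MPSD and ARE) written out explicitly, is shown below; the equilibrium data are placeholders, not the basic red 9 data set.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(ce, qm, ka):
    """Two-parameter Langmuir isotherm: qe = qm*Ka*Ce / (1 + Ka*Ce)."""
    return qm * ka * ce / (1.0 + ka * ce)

# Placeholder equilibrium data (Ce in mg/L, qe in mg/g), not the basic red 9 data set
ce = np.array([5.0, 10.0, 20.0, 40.0, 80.0, 160.0])
qe = np.array([45.0, 78.0, 118.0, 158.0, 185.0, 200.0])

popt, _ = curve_fit(langmuir, ce, qe, p0=[200.0, 0.05])   # non-linear fit of qm and Ka
qe_fit = langmuir(ce, *popt)

n, p = len(qe), 2   # number of data points and isotherm parameters
mpsd = 100 * np.sqrt(np.sum(((qe - qe_fit) / qe) ** 2) / (n - p))  # Marquardt's percent std. dev.
are = 100 / n * np.sum(np.abs((qe - qe_fit) / qe))                 # average relative error
errsq = np.sum((qe - qe_fit) ** 2)                                 # sum of squared errors
print(f"qm = {popt[0]:.1f} mg/g, Ka = {popt[1]:.4f} L/mg")
print(f"MPSD = {mpsd:.2f}%, ARE = {are:.2f}%, ERRSQ = {errsq:.2f}")
```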
Atlas of interoccurrence intervals for selected thresholds of daily precipitation in Texas
Asquith, William H.; Roussel, Meghan C.
2003-01-01
A Poisson process model is used to define the distribution of interoccurrence intervals of daily precipitation in Texas. A precipitation interoccurrence interval is the time period between two successive rainfall events. Rainfall events are defined as daily precipitation equaling or exceeding a specified depth threshold. Ten precipitation thresholds are considered: 0.05, 0.10, 0.25, 0.50, 0.75, 1.0, 1.5, 2.0, 2.5, and 3.0 inches. Site-specific mean interoccurrence interval and ancillary statistics are presented for each threshold and for each of 1,306 National Weather Service daily precipitation gages. Maps depicting the spatial variation across Texas of the mean interoccurrence interval for each threshold are presented. The percent change from the statewide standard deviation of the interoccurrence intervals to the root-mean-square error ranges from -24 percent for the 0.05-inch threshold to -60 percent for the 2.0-inch threshold. Because of this substantial negative percent change, the maps are considered more reliable estimators of the mean interoccurrence interval for most locations in Texas than the statewide mean values.
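Under the Poisson-process assumption the interoccurrence intervals for a given threshold are exponentially distributed, so a site's mean interval is naturally estimated from the spacing of exceedance days; a sketch with a placeholder daily record follows.

```python
import numpy as np

rng = np.random.default_rng(1)
# Placeholder daily precipitation record (inches) standing in for one NWS gage
daily_precip = rng.gamma(shape=0.15, scale=1.2, size=365 * 30)

thresholds = [0.05, 0.10, 0.25, 0.50, 1.0]   # inches
for thr in thresholds:
    event_days = np.flatnonzero(daily_precip >= thr)
    intervals = np.diff(event_days)          # days between successive events
    # Under a Poisson process the intervals are exponential; their sample mean
    # is the natural estimator of the mean interoccurrence interval.
    print(f"{thr:4.2f} in: {event_days.size:4d} events, "
          f"mean interoccurrence interval = {intervals.mean():6.1f} days")
```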
Adaptive Trajectory Prediction Algorithm for Climbing Flights
NASA Technical Reports Server (NTRS)
Schultz, Charles Alexander; Thipphavong, David P.; Erzberger, Heinz
2012-01-01
Aircraft climb trajectories are difficult to predict, and large errors in these predictions reduce the potential operational benefits of some advanced features for NextGen. The algorithm described in this paper improves climb trajectory prediction accuracy by adjusting trajectory predictions based on observed track data. It utilizes rate-of-climb and airspeed measurements derived from position data to dynamically adjust the aircraft weight modeled for trajectory predictions. In simulations with weight uncertainty, the algorithm is able to adapt to within 3 percent of the actual gross weight within two minutes of the initial adaptation. The root-mean-square of altitude errors for five-minute predictions was reduced by 73 percent. Conflict detection performance also improved, with a 15 percent reduction in missed alerts and a 10 percent reduction in false alerts. In a simulation with climb speed capture intent and weight uncertainty, the algorithm improved climb trajectory prediction accuracy by up to 30 percent and conflict detection performance, reducing missed and false alerts by up to 10 percent.
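The paper's adaptation law is not reproduced here, but the idea can be sketched from the specific-excess-power relation ROC ≈ (T − D)V/W: an observed climb rate below the predicted one implies a heavier aircraft, so the modeled weight is scaled toward the ratio of predicted to observed climb rate. The gain, update cadence, and numbers below are illustrative assumptions.

```python
# Hedged sketch of weight adaptation from observed climb performance; the actual
# adaptation law in the paper is not reproduced here. All numbers are illustrative.
def adapt_weight(modeled_weight_kg, predicted_roc_mps, observed_roc_mps, gain=0.5):
    # A lower observed rate of climb than predicted implies a heavier aircraft,
    # so scale the modeled weight toward the predicted/observed ratio, with damping.
    ratio = predicted_roc_mps / observed_roc_mps
    return modeled_weight_kg * (1.0 + gain * (ratio - 1.0))

weight = 65000.0                                  # initial modeled gross weight (kg)
for pred_roc, obs_roc in [(12.0, 10.5), (11.5, 10.4), (11.1, 10.6)]:   # m/s, per update
    weight = adapt_weight(weight, pred_roc, obs_roc)
    print(f"adjusted modeled weight: {weight:,.0f} kg")
```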
Strange nucleon electromagnetic form factors from lattice QCD
NASA Astrophysics Data System (ADS)
Alexandrou, C.; Constantinou, M.; Hadjiyiannakou, K.; Jansen, K.; Kallidonis, C.; Koutsou, G.; Avilés-Casco, A. Vaquero
2018-05-01
We evaluate the strange nucleon electromagnetic form factors using an ensemble of gauge configurations generated with two degenerate maximally twisted mass clover-improved fermions with mass tuned to approximately reproduce the physical pion mass. In addition, we present results for the disconnected light quark contributions to the nucleon electromagnetic form factors. Improved stochastic methods are employed leading to high-precision results. The momentum dependence of the disconnected contributions is fitted using the model-independent z-expansion. We extract the magnetic moment and the electric and magnetic radii of the proton and neutron by including both connected and disconnected contributions. We find that the disconnected light quark contributions to both electric and magnetic form factors are nonzero and at the few percent level as compared to the connected. The strange form factors are also at the percent level but more noisy yielding statistical errors that are typically within one standard deviation from a zero value.
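For reference, the model-independent z-expansion referred to above is the standard conformal-mapping parameterization (generic form shown here, not the particular truncation or thresholds used in this work):

```latex
G(Q^2) = \sum_{k=0}^{k_{\max}} a_k \, z^k , \qquad
z(Q^2) = \frac{\sqrt{t_{\mathrm{cut}} + Q^2} - \sqrt{t_{\mathrm{cut}} - t_0}}
              {\sqrt{t_{\mathrm{cut}} + Q^2} + \sqrt{t_{\mathrm{cut}} - t_0}} ,
```

where t_cut is set by the lowest particle-production threshold of the channel, t_0 is a free reference point that fixes where z vanishes, and the coefficients a_k are fitted to the lattice form-factor data.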
Rose, H.J.; Murata, K.J.; Carron, M.K.
1954-01-01
In a combined chemical-spectrochemical procedure for quantitatively determining rare earth elements in cerium minerals, cerium is determined volumetrically, a total rare earths plus thoria precipitate is separated chemically, the ceria content of the precipitate is raised to 80.0 percent by adding pure ceria, and the resulting mixture is analyzed for lanthanum, praseodymium, neodymium, samarium, gadolinium, yttrium, and thorium spectrochemically by means of the d.c. carbon arc. Spectral lines of singly ionized cerium are used as internal standard lines in the spectrochemical determination, which is patterned after Fassel's procedure [1]. Results of testing the method with synthetic mixtures of rare earths and with samples of chemically analyzed cerium minerals show that the coefficient of variation for a quadruplicate determination of any element does not exceed 5.0 (excepting yttrium at concentrations less than 1 percent) and that the method is free of serious systematic error. © 1954.
NASA Technical Reports Server (NTRS)
Sun, Jielun
1993-01-01
Results are presented of a test of the physically based total column water vapor retrieval algorithm of Wentz (1992) for sensitivity to realistic vertical distributions of temperature and water vapor. The ECMWF monthly averaged temperature and humidity fields are used to simulate the spatial pattern of systematic retrieval error of total column water vapor due to this sensitivity. The estimated systematic error is within 0.1 g/sq cm over about 70 percent of the global ocean area; systematic errors greater than 0.3 g/sq cm are expected to exist only over a few well-defined regions, about 3 percent of the global oceans, assuming that the global mean value is unbiased.
Allen, Robert C; Rutan, Sarah C
2011-10-31
Simulated and experimental data were used to measure the effectiveness of common interpolation techniques during chromatographic alignment of comprehensive two-dimensional liquid chromatography-diode array detector (LC×LC-DAD) data. Interpolation was used to generate a sufficient number of data points in the sampled first chromatographic dimension to allow for alignment of retention times from different injections. Five different interpolation methods, linear interpolation followed by cross correlation, piecewise cubic Hermite interpolating polynomial, cubic spline, Fourier zero-filling, and Gaussian fitting, were investigated. The fully aligned chromatograms, in both the first and second chromatographic dimensions, were analyzed by parallel factor analysis to determine the relative area for each peak in each injection. A calibration curve was generated for the simulated data set. The standard error of prediction and percent relative standard deviation were calculated for the simulated peak for each technique. The Gaussian fitting interpolation technique resulted in the lowest standard error of prediction and average relative standard deviation for the simulated data. However, upon applying the interpolation techniques to the experimental data, most of the interpolation methods were not found to produce statistically different relative peak areas from each other. While most of the techniques were not statistically different, the performance was improved relative to the PARAFAC results obtained when analyzing the unaligned data. Copyright © 2011 Elsevier B.V. All rights reserved.
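A sketch of the kind of comparison described, upsampling a sparsely sampled first-dimension peak with linear, PCHIP, and cubic-spline interpolation and with Gaussian fitting, then comparing recovered peak areas, is shown below; the peak shape and sampling interval are illustrative, not the LC×LC-DAD data.

```python
import numpy as np
from scipy.interpolate import PchipInterpolator, CubicSpline, interp1d
from scipy.optimize import curve_fit

gauss = lambda t, a, mu, sigma: a * np.exp(-(t - mu) ** 2 / (2 * sigma ** 2))

# "True" first-dimension peak and a sparsely sampled version of it
t_fine = np.linspace(0, 10, 1001)
true_peak = gauss(t_fine, 1.0, 5.0, 0.8)
t_samp = np.linspace(0, 10, 11)             # one sample per first-dimension modulation
y_samp = gauss(t_samp, 1.0, 5.0, 0.8)

methods = {
    "linear": interp1d(t_samp, y_samp, kind="linear"),
    "pchip": PchipInterpolator(t_samp, y_samp),
    "cubic spline": CubicSpline(t_samp, y_samp),
}
true_area = np.trapz(true_peak, t_fine)
for name, f in methods.items():
    area = np.trapz(f(t_fine), t_fine)
    print(f"{name:12s}: area error {100 * (area - true_area) / true_area:+.2f}%")

# Gaussian-fitting interpolation: fit the sparse samples, then evaluate densely
popt, _ = curve_fit(gauss, t_samp, y_samp, p0=[0.8, 4.5, 1.0])
area = np.trapz(gauss(t_fine, *popt), t_fine)
print(f"{'gaussian fit':12s}: area error {100 * (area - true_area) / true_area:+.2f}%")
```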
Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T
2018-03-01
The short-term temporal variability of landfill methane emissions is not well understood due to uncertainty in measurement methods. Significant variability is seen over short-term measurement campaigns with the tracer dilution method (TDM), but this variability may be due in part to measurement error rather than fluctuations in the actual landfill emissions. In this study, landfill methane emissions and TDM-measured emissions are simulated over a real landfill in Delaware, USA using the Weather Research and Forecasting model (WRF) for two emissions scenarios. In the steady emissions scenario, a constant landfill emissions rate is prescribed at each model grid point on the surface of the landfill. In the unsteady emissions scenario, emissions are calculated at each time step as a function of the local surface wind speed, resulting in variable emissions over each 1.5-h measurement period. The simulation output is used to assess the standard deviation and percent error of the TDM-measured emissions. Eight measurement periods are simulated over two different days to look at different conditions. Results show that standard deviation of the TDM- measured emissions does not increase significantly from the steady emissions simulations to the unsteady emissions scenarios, indicating that the TDM may have inherent errors in its prediction of emissions fluctuations. Results also show that TDM error does not increase significantly from the steady to the unsteady emissions simulations. This indicates that introducing variability to the landfill emissions does not increase errors in the TDM at this site. Across all simulations, TDM errors range from -15% to 43%, consistent with the range of errors seen in previous TDM studies. Simulations indicate diurnal variations of methane emissions when wind effects are significant, which may be important when developing daily and annual emissions estimates from limited field data. Copyright © 2017 Elsevier Ltd. All rights reserved.
A hub dynamometer for measurement of wheel forces in off-road bicycling.
De Lorenzo, D S; Hull, M L
1999-02-01
A dynamometric hubset that measures the two ground contact force components acting on a bicycle wheel in the plane of the bicycle during off-road riding while either coasting or braking was designed, constructed, and evaluated. To maintain compatibility with standard mountain bike construction, the hubs use commercially available shells with modified, strain gage-equipped axles. The axle strain gages are sensitive to forces acting in the radial and tangential directions, while minimizing sensitivity to transverse forces, steering moments, and variations in the lateral location of the center of pressure. Static calibration and a subsequent accuracy check that computed differences between applied and apparent loads developed during coasting revealed root mean squared errors of 1 percent full-scale or less (full-scale load = 4500 N). The natural frequency of the rear hub with the wheel attached exceeded 350 Hz. These performance capabilities make the dynamometer useful for its intended purpose during coasting. To demonstrate this usefulness, sample ground contact forces are presented for a subject who coasted downhill over rough terrain. The dynamometric hubset can also be used to determine ground contact forces during braking providing that the brake reaction force components are known. However, compliance of the fork can lead to high cross-sensitivity and corresponding large (> 5 percent FS) measurement errors at the front wheel.
August Median Streamflow on Ungaged Streams in Eastern Aroostook County, Maine
Lombard, Pamela J.; Tasker, Gary D.; Nielsen, Martha G.
2003-01-01
Methods for estimating August median streamflow were developed for ungaged, unregulated streams in the eastern part of Aroostook County, Maine, with drainage areas from 0.38 to 43 square miles and mean basin elevations from 437 to 1,024 feet. Few long-term, continuous-record streamflow-gaging stations with small drainage areas were available from which to develop the equations; therefore, 24 partial-record gaging stations were established in this investigation. A mathematical technique for estimating a standard low-flow statistic, August median streamflow, at partial-record stations was applied by relating base-flow measurements at these stations to concurrent daily flows at nearby long-term, continuous-record streamflow- gaging stations (index stations). Generalized least-squares regression analysis (GLS) was used to relate estimates of August median streamflow at gaging stations to basin characteristics at these same stations to develop equations that can be applied to estimate August median streamflow on ungaged streams. GLS accounts for varying periods of record at the gaging stations and the cross correlation of concurrent streamflows among gaging stations. Twenty-three partial-record stations and one continuous-record station were used for the final regression equations. The basin characteristics of drainage area and mean basin elevation are used in the calculated regression equation for ungaged streams to estimate August median flow. The equation has an average standard error of prediction from -38 to 62 percent. A one-variable equation uses only drainage area to estimate August median streamflow when less accuracy is acceptable. This equation has an average standard error of prediction from -40 to 67 percent. Model error is larger than sampling error for both equations, indicating that additional basin characteristics could be important to improved estimates of low-flow statistics. Weighted estimates of August median streamflow, which can be used when making estimates at partial-record or continuous-record gaging stations, range from 0.03 to 11.7 cubic feet per second or from 0.1 to 0.4 cubic feet per second per square mile. Estimates of August median streamflow on ungaged streams in the eastern part of Aroostook County, within the range of acceptable explanatory variables, range from 0.03 to 30 cubic feet per second or 0.1 to 0.7 cubic feet per second per square mile. Estimates of August median streamflow per square mile of drainage area generally increase as mean elevation and drainage area increase.
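Asymmetric ranges such as "-38 to 62 percent" are what a symmetric standard error in log10 units becomes when converted back to percent; assuming the reported figures were obtained that way, the conversion is sketched below (a log10 standard error of about 0.21 reproduces the -38 to +62 percent range).

```python
def log10_se_to_percent(se_log10):
    """Convert a standard error in log10 units to the asymmetric percent range
    conventionally reported for regression estimates of streamflow statistics."""
    lower = 100.0 * (10.0 ** (-se_log10) - 1.0)   # negative side, in percent
    upper = 100.0 * (10.0 ** (+se_log10) - 1.0)   # positive side, in percent
    return lower, upper

for se in (0.15, 0.21, 0.25):
    lo, up = log10_se_to_percent(se)
    print(f"SE = {se:.2f} log10 units  ->  {lo:.0f} to +{up:.0f} percent")
```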
Code of Federal Regulations, 2013 CFR
2013-07-01
... a zero-percent certificate of indebtedness that is made in error? 363.138 Section 363.138 Money and... TREASURY BUREAU OF THE PUBLIC DEBT REGULATIONS GOVERNING SECURITIES HELD IN TREASURYDIRECT Zero-Percent Certificate of Indebtedness General § 363.138 Is Treasury liable for the purchase of a zero-percent...
Code of Federal Regulations, 2012 CFR
2012-07-01
... a zero-percent certificate of indebtedness that is made in error? 363.138 Section 363.138 Money and... TREASURY BUREAU OF THE PUBLIC DEBT REGULATIONS GOVERNING SECURITIES HELD IN TREASURYDIRECT Zero-Percent Certificate of Indebtedness General § 363.138 Is Treasury liable for the purchase of a zero-percent...
Code of Federal Regulations, 2011 CFR
2011-07-01
... a zero-percent certificate of indebtedness that is made in error? 363.138 Section 363.138 Money and... TREASURY BUREAU OF THE PUBLIC DEBT REGULATIONS GOVERNING SECURITIES HELD IN TREASURYDIRECT Zero-Percent Certificate of Indebtedness General § 363.138 Is Treasury liable for the purchase of a zero-percent...
Challenges in Whole-Genome Annotation of Pyrosequenced Eukaryotic Genomes
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuo, Alan; Grigoriev, Igor
2009-04-17
Pyrosequencing technologies such as 454/Roche and Solexa/Illumina vastly lower the cost of nucleotide sequencing compared to the traditional Sanger method, and thus promise to greatly expand the number of sequenced eukaryotic genomes. However, the new technologies also bring new challenges such as shorter reads and new kinds and higher rates of sequencing errors, which complicate genome assembly and gene prediction. At JGI we are deploying 454 technology for the sequencing and assembly of ever-larger eukaryotic genomes. Here we describe our first whole-genome annotation of a purely 454-sequenced fungal genome that is larger than a yeast (>30 Mbp). The pezizomycotine (filamentous ascomycete) Aspergillus carbonarius belongs to the Aspergillus section Nigri species complex, members of which are significant as platforms for bioenergy and bioindustrial technology, as members of soil microbial communities and players in the global carbon cycle, and as agricultural toxigens. Application of a modified version of the standard JGI Annotation Pipeline has so far predicted ~10,000 genes. About 12 percent of these preliminary annotations suffer a potential frameshift error, which is somewhat higher than the ~9 percent rate in the Sanger-sequenced and conventionally assembled and annotated genome of fellow Aspergillus section Nigri member A. niger. Also, >90 percent of A. niger genes have potential homologs in the A. carbonarius preliminary annotation. We conclude, and with further annotation and comparative analysis expect to confirm, that 454 sequencing strategies provide a promising substrate for annotation of modestly sized eukaryotic genomes. We will also present results of annotation of a number of other pyrosequenced fungal genomes of bioenergy interest.
Vulnerability of ground water to atrazine leaching in Kent County, Michigan
Holtschlag, D.J.; Luukkonen, C.L.
1997-01-01
A steady-state model of pesticide leaching through the unsaturated zone was used with readily available hydrologic, lithologic, and pesticide characteristics to estimate the vulnerability of the near-surface aquifer to atrazine contamination from non-point sources in Kent County, Michigan. The model-computed fraction of atrazine remaining at the water table, RM, was used as the vulnerability criterion; time of travel to the water table also was computed. Model results indicate that the average fraction of atrazine remaining at the water table was 0.039 percent; the fraction ranged from 0 to 3.6 percent. Time of travel of atrazine from the soil surface to the water table averaged 17.7 years and ranged from 2.2 to 118 years. Three maps were generated to present three views of the same atrazine vulnerability characteristics using different metrics (nonlinear transformations of the computed fractions remaining). The metrics were chosen because of the highly (right) skewed distribution of computed fractions. The first metric, rm = RM^λ (where λ was 0.0625), depicts a relatively uniform distribution of vulnerability across the county, with localized areas of high and low vulnerability visible. The second metric, rm with λ = 0.5, depicts about one-half the county at low vulnerability, with discontinuous patterns of high vulnerability evident. In the third metric, rm with λ = 1.0 (that is, RM itself), more than 95 percent of the county appears to have low vulnerability; small, distinct areas of high vulnerability are present. Aquifer vulnerability estimates in the RM metric were used with a steady-state, uniform atrazine application rate to compute a potential concentration of atrazine in leachate reaching the water table. The average estimated potential atrazine concentration in leachate at the water table was 0.16 μg/L (micrograms per liter) in the model area; estimated potential concentrations ranged from 0 to 26 μg/L. About 2 percent of the model area had estimated potential atrazine concentrations in leachate at the water table that exceeded the USEPA (U.S. Environmental Protection Agency) maximum contaminant level of 3 μg/L. Uncertainty analyses were used to assess the effects of parameter uncertainty and spatial interpolation error on the variability of the estimated fractions of atrazine remaining at the water table. Results of Monte Carlo simulations indicate that parameter uncertainty is associated with a standard error of 0.0875 in the computed fractions (in the rm metric). Results of kriging analysis indicate that errors in spatial interpolation are associated with a standard error of 0.146 (in the rm metric). Thus, uncertainty in fractions remaining is primarily associated with spatial interpolation error, which can be reduced by increasing the density of points where the leaching model is applied. A sensitivity analysis indicated which of 13 hydrologic, lithologic, and pesticide characteristics were influential in determining fractions of atrazine remaining at the water table. Results indicate that fractions remaining are most sensitive to unit changes in pesticide half-life and in organic-carbon content in soils and unweathered rocks, and least sensitive to infiltration rates. The leaching model applied in this report provides an estimate of the vulnerability of the near-surface aquifer in Kent County to contamination by atrazine. The vulnerability estimate is related to water-quality criteria developed by the USEPA to help assess potential risks from atrazine to the near-surface aquifer.
However, atrazine accounts for only 28 percent of the herbicide use in the county; additional potential for contamination exists from other pesticides and pesticide metabolites. Therefore, additional work is needed to develop a comprehensive understanding of the relative risks associated with specific pesticides. The modeling approach described in this report provides a technique for estimating relative vulnerabilities to specific pesticides and for helping to assess potential risks.
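The report's exact formulation is not given here, but in attenuation-factor-style leaching models the fraction remaining at the water table is commonly first-order decay over the unsaturated-zone travel time; the sketch below uses that assumption with an illustrative atrazine half-life and the travel times quoted above.

```python
import numpy as np

def fraction_remaining(travel_time_yr, half_life_days):
    # First-order (attenuation-factor style) decay over the travel time to the water table;
    # an assumed model form, not the report's exact formulation.
    k = np.log(2.0) / (half_life_days / 365.25)   # decay rate, 1/yr
    return np.exp(-k * travel_time_yr)

# Illustrative values: an atrazine half-life on the order of 60 days and travel times
# spanning the range reported above (2.2 to 118 years)
for t in (2.2, 17.7, 118.0):
    rm = fraction_remaining(t, half_life_days=60.0)
    print(f"travel time {t:6.1f} yr -> fraction remaining {100 * rm:.3g} percent")
```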
Farzandipour, Mehrdad; Sheikhtaheri, Abbas
2009-01-01
To evaluate the accuracy of procedural coding and the factors that influence it, 246 records were randomly selected from four teaching hospitals in Kashan, Iran. “Recodes” were assigned blindly and then compared to the original codes. Furthermore, the coders' professional behaviors were carefully observed during the coding process. Coding errors were classified as major or minor. The relations between coding accuracy and possible effective factors were analyzed by χ2 or Fisher exact tests as well as the odds ratio (OR) and the 95 percent confidence interval for the OR. The results showed that using a tabular index for rechecking codes reduces errors (83 percent vs. 72 percent accuracy). Further, more thorough documentation by the clinician positively affected coding accuracy, though this relation was not significant. Readability of records decreased errors overall (p = .003), including major ones (p = .012). Moreover, records with no abbreviations had fewer major errors (p = .021). In conclusion, not using abbreviations, ensuring more readable documentation, and paying more attention to available information increased coding accuracy and the quality of procedure databases. PMID:19471647
Use of Quality Controlled AIRS Temperature Soundings to Improve Forecast Skill
NASA Technical Reports Server (NTRS)
Susskind, Joel; Reale, Oreste; Iredell, Lena
2010-01-01
AIRS was launched on EOS Aqua on May 4, 2002, together with AMSU-A and HSB, to form a next generation polar orbiting infrared and microwave atmospheric sounding system. The primary products of AIRS/AMSU-A are twice daily global fields of atmospheric temperature-humidity profiles, ozone profiles, sea/land surface skin temperature, and cloud related parameters including OLR. Also included are the clear column radiances used to derive these products, which are representative of the radiances AIRS would have seen if there were no clouds in the field of view. All products also have error estimates. The sounding goals of AIRS are to produce 1 km tropospheric layer mean temperatures with an rms error of 1 K, and layer precipitable water with an rms error of 20 percent, in cases with up to 90 percent effective cloud cover. The products are designed for data assimilation purposes for the improvement of numerical weather prediction, as well as for the study of climate and meteorological processes. With regard to data assimilation, one can use either the products themselves or the clear column radiances from which the products were derived. The AIRS Version 5 retrieval algorithm is now being used operationally at the Goddard DISC in the routine generation of geophysical parameters derived from AIRS/AMSU data. A major innovation in Version 5 is the ability to generate case-by-case, level-by-level error estimates for retrieved quantities and clear column radiances, and the use of these error estimates for Quality Control. The temperature profile error estimates are used to determine a case-by-case characteristic pressure p_best, down to which the profile is considered acceptable for data assimilation purposes. The characteristic pressure p_best is determined by comparing the case-dependent error estimate δT(p) to the threshold values ΔT(p). The AIRS Version 5 data set provides error estimates of T(p) at all levels, and also profile-dependent values of p_best based on use of a Standard profile-dependent threshold ΔT(p). These Standard thresholds were designed as a compromise between optimal use for data assimilation purposes, which requires highest accuracy (tighter Quality Control), and climate purposes, which requires more spatial coverage (looser Quality Control). Subsequent research using Version 5 soundings and error estimates showed that tighter Quality Control performs better for data assimilation purposes, while looser Quality Control (better spatial coverage) performs better for climate purposes. We conducted a number of data assimilation experiments using the NASA GEOS-5 Data Assimilation System as a step toward finding an optimum balance of spatial coverage and sounding accuracy with regard to improving forecast skill. The model was run at a horizontal resolution of 0.5 degree latitude x 0.67 degree longitude with 72 vertical levels. These experiments were run during four different seasons, each using a different year. The AIRS temperature profiles were presented to the GEOS-5 analysis as rawinsonde profiles, and the profile error estimates δT(p) were used as the uncertainty for each measurement in the data assimilation process.
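The Quality Control step described, scanning a profile's case-dependent error estimates δT(p) downward and keeping the deepest pressure at which they still satisfy the threshold ΔT(p), can be sketched as follows; the pressure levels, error estimates, and thresholds are placeholders, not AIRS Version 5 values.

```python
import numpy as np

# Pressure levels (hPa), top of atmosphere first
pressure = np.array([100.0, 200.0, 300.0, 400.0, 500.0, 600.0, 700.0, 850.0, 1000.0])

# Case-dependent error estimates delta_T(p) for one retrieval (placeholder values, K)
delta_T = np.array([0.6, 0.7, 0.8, 0.9, 1.1, 1.3, 1.6, 2.1, 2.8])

# Threshold profiles Delta_T(p): tighter for assimilation, looser for climate (placeholders)
threshold_assim = np.full(pressure.size, 1.25)
threshold_clim = np.full(pressure.size, 2.25)

def p_best(pressure, delta_T, threshold):
    """Deepest pressure down to which the error estimate stays within the threshold."""
    ok = delta_T <= threshold
    if not ok[0]:
        return None                                       # profile rejected outright
    first_bad = np.argmax(~ok) if (~ok).any() else pressure.size
    return pressure[first_bad - 1]

print("p_best (assimilation QC):", p_best(pressure, delta_T, threshold_assim), "hPa")
print("p_best (climate QC):     ", p_best(pressure, delta_T, threshold_clim), "hPa")
```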
Perry, Charles A.; Wolock, David M.; Artman, Joshua C.
2004-01-01
Streamflow statistics of flow duration and peak-discharge frequency were estimated for 4,771 individual locations on streams listed on the 1999 Kansas Surface Water Register. These statistics included the flow-duration values of 90, 75, 50, 25, and 10 percent, as well as the mean flow value. Peak-discharge frequency values were estimated for the 2-, 5-, 10-, 25-, 50-, and 100-year floods. Least-squares multiple regression techniques were used, along with Tobit analyses, to develop equations for estimating flow-duration values of 90, 75, 50, 25, and 10 percent and the mean flow for uncontrolled flow stream locations. The contributing-drainage areas of 149 U.S. Geological Survey streamflow-gaging stations in Kansas and parts of surrounding States that had flow uncontrolled by Federal reservoirs and used in the regression analyses ranged from 2.06 to 12,004 square miles. Logarithmic transformations of climatic and basin data were performed to yield the best linear relation for developing equations to compute flow durations and mean flow. In the regression analyses, the significant climatic and basin characteristics, in order of importance, were contributing-drainage area, mean annual precipitation, mean basin permeability, and mean basin slope. The analyses yielded a model standard error of prediction range of 0.43 logarithmic units for the 90-percent duration analysis to 0.15 logarithmic units for the 10-percent duration analysis. The model standard error of prediction was 0.14 logarithmic units for the mean flow. Regression equations used to estimate peak-discharge frequency values were obtained from a previous report, and estimates for the 2-, 5-, 10-, 25-, 50-, and 100-year floods were determined for this report. The regression equations and an interpolation procedure were used to compute flow durations, mean flow, and estimates of peak-discharge frequency for locations along uncontrolled flow streams on the 1999 Kansas Surface Water Register. Flow durations, mean flow, and peak-discharge frequency values determined at available gaging stations were used to interpolate the regression-estimated flows for the stream locations where available. Streamflow statistics for locations that had uncontrolled flow were interpolated using data from gaging stations weighted according to the drainage area and the bias between the regression-estimated and gaged flow information. On controlled reaches of Kansas streams, the streamflow statistics were interpolated between gaging stations using only gaged data weighted by drainage area.
Radiologic Errors in Patients With Lung Cancer
Forrest, John V.; Friedman, Paul J.
1981-01-01
Some 20 percent to 50 percent of detectable malignant lesions are missed or misdiagnosed at the time of their first radiologic appearance. These errors can result in delayed diagnosis and treatment, which may affect a patient's survival. Use of moderately high (130 to 150) kilovolt peak films, awareness of portions of the lung where lesions are often missed (such as lung apices and paramediastinal and hilar areas), careful comparison of current roentgenograms with those taken previously and the use of an independent second observer can help to minimize the rate of radiologic diagnostic errors in patients with lung cancer. PMID:7257363
HMI Measured Doppler Velocity Contamination from the SDO Orbit Velocity
NASA Astrophysics Data System (ADS)
Scherrer, Phil; HMI Team
2016-10-01
The Problem: The SDO satellite is in an inclined geosynchronous orbit which allows uninterrupted views of the Sun nearly 98% of the time. This orbit has a velocity of about 3,500 m/s, with the solar line-of-sight component varying with time of day and time of year. Due to remaining calibration errors in the wavelength filters, the orbit velocity leaks into the line-of-sight solar velocity and magnetic field measurements. Since the same model of the filter is used in the Milne-Eddington inversions used to generate the vector magnetic field data, the orbit velocity also contaminates the vector magnetic products. These errors contribute 12-hour and 24-hour variations in most HMI data products and are known as the 24-hour problem. Early in the mission we made a patch to the calibration that corrected the disk mean velocity. The resulting LOS velocity has been used for helioseismology with no apparent problems. The velocity signal has about a 1% scale error that varies with time of day and with velocity, i.e., it is non-linear for large velocities. This causes leaks into the LOS field (which is simply the difference between velocity measured in LCP and RCP rescaled for the Zeeman splitting). This poster reviews the measurement process, shows examples of the problem, and describes recent work toward resolving the issues. Since the errors are in the filter characterization, it makes most sense to work first on the LOS data products since they, unlike the vector products, are directly and simply related to the filter profile without assumptions on the solar atmosphere, filling factors, etc. Therefore this poster is strictly limited to better understanding the filter profiles as they vary across the field, with time of day, and with time of year, variations that result in velocity errors of up to a percent and LOS field estimates with errors of up to a few percent (for the standard LOS magnetograph method based on measuring the differences in wavelength of the line centroids in LCP and RCP light). We expect that when better filter profiles are available it will be possible to generate improved vector field data products as well.
Resolving Mixed Algal Species in Hyperspectral Images
Mehrubeoglu, Mehrube; Teng, Ming Y.; Zimba, Paul V.
2014-01-01
We investigated a lab-based hyperspectral imaging system's response to pure (single) and mixed (two) algal cultures containing known algae types and volumetric combinations to characterize the system's performance. The spectral response to volumetric changes in single algal cultures and in combinations with known mixing ratios was tested. Constrained linear spectral unmixing was applied to extract the algal content of the mixtures based on abundances that produced the lowest root mean square error. Percent prediction error was computed as the difference between actual percent volumetric content and abundances at minimum RMS error. The best prediction errors were computed as 0.4%, 0.4% and 6.3% for the mixed spectra from three independent experiments. The worst prediction errors were found as 5.6%, 5.4% and 13.4% for the same order of experiments. Additionally, Beer-Lambert's law was utilized to relate transmittance to different volumes of pure algal suspensions, demonstrating linear logarithmic trends for optical property measurements. PMID:24451451
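Constrained linear unmixing of a two-end-member mixed spectrum, with non-negative abundances constrained to sum to one and percent prediction error computed as above, can be sketched as follows; the end-member spectra are synthetic placeholders, not the measured algal spectra.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(3)
n_bands = 120
# Synthetic end-member spectra for two algal cultures (placeholders)
E = np.column_stack([
    np.exp(-((np.arange(n_bands) - 40) / 18.0) ** 2),
    np.exp(-((np.arange(n_bands) - 75) / 25.0) ** 2),
])

true_abund = np.array([0.7, 0.3])                  # actual volumetric mixture
mixed = E @ true_abund + rng.normal(0, 0.005, n_bands)

# Sum-to-one constraint imposed by a heavily weighted extra row of the system;
# non-negativity imposed through the bounds.
w = 1e3
A = np.vstack([E, w * np.ones((1, 2))])
b = np.concatenate([mixed, [w]])
res = lsq_linear(A, b, bounds=(0.0, 1.0))
abund = res.x

pred_err = 100 * np.abs(abund - true_abund)        # percent prediction error per end-member
print("estimated abundances:", np.round(abund, 3))
print("percent prediction error:", np.round(pred_err, 2))
```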
Cruz, Jennifer L; Brown, Jamie N
2015-06-01
Rigorous practices for safe dispensing of investigational drugs are not standardized. This investigation sought to identify error-prevention processes utilized in the provision of investigational drug services (IDS) and to characterize pharmacists' perceptions about safety risks posed by investigational drugs. An electronic questionnaire was distributed to an audience of IDS pharmacists within the Veteran Affairs Health System. Multiple facets were examined including demographics, perceptions of medication safety, and standard processes used to support investigational drug protocols. Twenty-one respondents (32.8% response rate) from the Northeast, Midwest, South, West, and Non-contiguous United States participated. The mean number of pharmacist full-time equivalents (FTEs) dedicated to the IDS was 0.77 per site with 0.2 technician FTEs. The mean number of active protocols was 22. Seventeen respondents (81%) indicated some level of concern for safety risks. Concerns related to the packaging of medications were expressed, most notably lack of product differentiation, expiration dating, barcodes, and choice of font size or color. Regarding medication safety practices, the majority of sites had specific procedures in place for storing and securing drug supply, temperature monitoring, and prescription labeling. Repackaging bulk items and proactive error-identification strategies were less common. Sixty-seven percent of respondents reported that an independent double check was not routinely performed. Medication safety concerns exist among pharmacists in an investigational drug service; however, a variety of measures have been employed to improve medication safety practices. Best practices for the safe dispensing of investigational medications should be developed in order to standardize these error-prevention strategies.
Brown, Jamie N.
2015-01-01
Objectives: Rigorous practices for safe dispensing of investigational drugs are not standardized. This investigation sought to identify error-prevention processes utilized in the provision of investigational drug services (IDS) and to characterize pharmacists’ perceptions about safety risks posed by investigational drugs. Methods: An electronic questionnaire was distributed to an audience of IDS pharmacists within the Veteran Affairs Health System. Multiple facets were examined including demographics, perceptions of medication safety, and standard processes used to support investigational drug protocols. Results: Twenty-one respondents (32.8% response rate) from the Northeast, Midwest, South, West, and Non-contiguous United States participated. The mean number of pharmacist full-time equivalents (FTEs) dedicated to the IDS was 0.77 per site with 0.2 technician FTEs. The mean number of active protocols was 22. Seventeen respondents (81%) indicated some level of concern for safety risks. Concerns related to the packaging of medications were expressed, most notably lack of product differentiation, expiration dating, barcodes, and choice of font size or color. Regarding medication safety practices, the majority of sites had specific procedures in place for storing and securing drug supply, temperature monitoring, and prescription labeling. Repackaging bulk items and proactive error-identification strategies were less common. Sixty-seven percent of respondents reported that an independent double check was not routinely performed. Conclusions: Medication safety concerns exist among pharmacists in an investigational drug service; however, a variety of measures have been employed to improve medication safety practices. Best practices for the safe dispensing of investigational medications should be developed in order to standardize these error-prevention strategies. PMID:26240744
Predictability of process resource usage - A measurement-based study on UNIX
NASA Technical Reports Server (NTRS)
Devarakonda, Murthy V.; Iyer, Ravishankar K.
1989-01-01
A probabilistic scheme is developed to predict process resource usage in UNIX. Given the identity of the program being run, the scheme predicts the CPU time, file I/O, and memory requirements of a process at the beginning of its life. The scheme uses a state-transition model of the program's resource usage in its past executions for prediction. The states of the model are the resource regions obtained from an off-line cluster analysis of processes run on the system. The proposed method is shown to work on data collected from a VAX 11/780 running 4.3 BSD UNIX. The results show that the predicted values correlate well with the actual values. The correlation coefficient between the predicted and actual values of CPU time is 0.84. Errors in prediction are mostly small. Some 82 percent of errors in CPU time prediction are less than 0.5 standard deviations of process CPU time.
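The state-transition idea can be sketched as a small Markov model over resource regions: transition counts from a program's past runs are normalized into probabilities, and the most likely next region is the prediction; the states and counts below are illustrative, not the measured VAX/UNIX data.

```python
import numpy as np

# Illustrative resource regions (from a hypothetical off-line cluster analysis)
states = ["low CPU", "medium CPU", "high CPU"]
# Transition counts among resource regions observed in past executions of one program
counts = np.array([[30, 8, 2],
                   [10, 25, 5],
                   [1, 6, 13]], dtype=float)
P = counts / counts.sum(axis=1, keepdims=True)     # row-stochastic transition matrix

last_state = 1                                     # most recent run fell in "medium CPU"
predicted = int(np.argmax(P[last_state]))
print("predicted next-run region:", states[predicted],
      "with probability", round(P[last_state, predicted], 2))
```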
Ocean data assimilation using optimal interpolation with a quasi-geostrophic model
NASA Technical Reports Server (NTRS)
Rienecker, Michele M.; Miller, Robert N.
1991-01-01
A quasi-geostrophic (QG) stream function is analyzed by optimal interpolation (OI) over a 59-day period in a 150-km-square domain off northern California. Hydrographic observations acquired over five surveys were assimilated into a QG open boundary ocean model. Assimilation experiments were conducted separately for individual surveys to investigate the sensitivity of the OI analyses to parameters defining the decorrelation scale of an assumed error covariance function. The analyses were intercompared through dynamical hindcasts between surveys. The best hindcast was obtained using the smooth analyses produced with assumed error decorrelation scales identical to those of the observed stream function. The rms difference between the hindcast stream function and the final analysis was only 23 percent of the observation standard deviation. The two sets of OI analyses were temporally smoother than the fields from statistical objective analysis and in good agreement with the only independent data available for comparison.
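The optimal interpolation analysis step can be sketched as a background field updated through the gain K = BHᵀ(HBHᵀ + R)⁻¹, with a Gaussian background-error covariance whose decorrelation scale is the parameter the experiments varied; the 1-D grid, observations, and variances below are illustrative, not the hydrographic data.

```python
import numpy as np

# 1-D analysis grid (km) and a background (prior) stream-function-like field
x_grid = np.linspace(0.0, 150.0, 76)
background = np.sin(2 * np.pi * x_grid / 150.0)

# Observation locations, values, and error variance (placeholders)
x_obs = np.array([20.0, 60.0, 110.0])
y_obs = np.array([0.9, 0.2, -0.7])
obs_var = 0.05

def oi_analysis(decorr_scale_km, bg_var=0.5):
    # Gaussian background-error covariance B and observation operator H (nearest grid point)
    B = bg_var * np.exp(-0.5 * ((x_grid[:, None] - x_grid[None, :]) / decorr_scale_km) ** 2)
    H = np.zeros((x_obs.size, x_grid.size))
    H[np.arange(x_obs.size), [np.argmin(np.abs(x_grid - xo)) for xo in x_obs]] = 1.0
    R = obs_var * np.eye(x_obs.size)
    K = B @ H.T @ np.linalg.inv(H @ B @ H.T + R)   # optimal interpolation gain
    return background + K @ (y_obs - H @ background)

for scale in (15.0, 30.0, 60.0):                   # assumed error decorrelation scales (km)
    analysis = oi_analysis(scale)
    idx = [np.argmin(np.abs(x_grid - xo)) for xo in x_obs]
    print(f"scale {scale:4.0f} km: analysis at obs points =", np.round(analysis[idx], 2))
```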
Estimating Discharge in Low-Order Rivers With High-Resolution Aerial Imagery
NASA Astrophysics Data System (ADS)
King, Tyler V.; Neilson, Bethany T.; Rasmussen, Mitchell T.
2018-02-01
Remote sensing of river discharge promises to augment in situ gauging stations, but the majority of research in this field focuses on large rivers (>50 m wide). We present a method for estimating volumetric river discharge in low-order (<50 m wide) rivers from remotely sensed data by coupling high-resolution imagery with one-dimensional hydraulic modeling at so-called virtual gauging stations. These locations were identified as locations where the river contracted under low flows, exposing a substantial portion of the river bed. Topography of the exposed river bed was photogrammetrically extracted from high-resolution aerial imagery while the geometry of the remaining inundated portion of the channel was approximated based on adjacent bank topography and maximum depth assumptions. Full channel bathymetry was used to create hydraulic models that encompassed virtual gauging stations. Discharge for each aerial survey was estimated with the hydraulic model by matching modeled and remotely sensed wetted widths. Based on these results, synthetic width-discharge rating curves were produced for each virtual gauging station. In situ observations were used to determine the accuracy of wetted widths extracted from imagery (mean error 0.36 m), extracted bathymetry (mean vertical RMSE 0.23 m), and discharge (mean percent error 7% with a standard deviation of 6%). Sensitivity analyses were conducted to determine the influence of inundated channel bathymetry and roughness parameters on estimated discharge. Comparison of synthetic rating curves produced through sensitivity analyses show that reasonable ranges of parameter values result in mean percent errors in predicted discharges of 12%-27%.
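Once a synthetic width-discharge rating has been built at a virtual gauging station, a remotely sensed wetted width is converted to discharge by inverting that rating; the sketch below uses an illustrative power-law rating rather than the study's hydraulic-model output.

```python
import numpy as np

# Synthetic width-discharge rating for one virtual gauging station
# (illustrative power-law relation w = a * Q^b, not the study's hydraulic-model output)
a, b = 8.0, 0.25
discharge_grid = np.linspace(0.5, 60.0, 200)       # discharge (m^3/s)
width_grid = a * discharge_grid ** b               # wetted width (m), monotonic in discharge

def discharge_from_width(width_m):
    """Invert the rating by interpolation; width must lie within the rating's range."""
    return np.interp(width_m, width_grid, discharge_grid)

for w_obs in (12.0, 16.0, 20.0):                   # remotely sensed wetted widths (m)
    q = discharge_from_width(w_obs)
    print(f"wetted width {w_obs:5.1f} m -> estimated discharge {q:6.2f} m^3/s")
```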
Hong, KyungPyo; Jeong, Eun-Kee; Wall, T. Scott; Drakos, Stavros G.; Kim, Daniel
2015-01-01
Purpose To develop and evaluate a wideband arrhythmia-insensitive-rapid (AIR) pulse sequence for cardiac T1 mapping without image artifacts induced by implantable-cardioverter-defibrillator (ICD). Methods We developed a wideband AIR pulse sequence by incorporating a saturation pulse with wide frequency bandwidth (8.9 kHz), in order to achieve uniform T1 weighting in the heart with ICD. We tested the performance of original and “wideband” AIR cardiac T1 mapping pulse sequences in phantom and human experiments at 1.5T. Results In 5 phantoms representing native myocardium and blood and post-contrast blood/tissue T1 values, compared with the control T1 values measured with an inversion-recovery pulse sequence without ICD, T1 values measured with original AIR with ICD were considerably lower (absolute percent error >29%), whereas T1 values measured with wideband AIR with ICD were similar (absolute percent error <5%). Similarly, in 11 human subjects, compared with the control T1 values measured with original AIR without ICD, T1 measured with original AIR with ICD was significantly lower (absolute percent error >10.1%), whereas T1 measured with wideband AIR with ICD was similar (absolute percent error <2.0%). Conclusion This study demonstrates the feasibility of a wideband pulse sequence for cardiac T1 mapping without significant image artifacts induced by ICD. PMID:25975192
Eash, David A.; Barnes, Kimberlee K.
2017-01-01
A statewide study was conducted to develop regression equations for estimating six selected low-flow frequency statistics and harmonic mean flows for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include: the annual 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years, the annual 30-day mean low flow for a recurrence interval of 5 years, and the seasonal (October 1 through December 31) 1- and 7-day mean low flows for a recurrence interval of 10 years. Estimation equations also were developed for the harmonic-mean-flow statistic. Estimates of these seven selected statistics are provided for 208 U.S. Geological Survey continuous-record streamgages using data through September 30, 2006. The study area comprises streamgages located within Iowa and 50 miles beyond the State's borders. Because trend analyses indicated statistically significant positive trends when considering the entire period of record for the majority of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. The median number of years of record used to compute each of these seven selected statistics was 35. Geographic information system software was used to measure 54 selected basin characteristics for each streamgage. Following the removal of two streamgages from the initial data set, data collected for 206 streamgages were compiled to investigate three approaches for regionalization of the seven selected statistics. Regionalization, a process using statistical regression analysis, provides a relation for efficiently transferring information from a group of streamgages in a region to ungaged sites in the region. The three regionalization approaches tested included statewide, regional, and region-of-influence regressions. For the regional regression, the study area was divided into three low-flow regions on the basis of hydrologic characteristics, landform regions, and soil regions. A comparison of root mean square errors and average standard errors of prediction for the statewide, regional, and region-of-influence regressions determined that the regional regression provided the best estimates of the seven selected statistics at ungaged sites in Iowa. Because a significant number of streams in Iowa reach zero flow as their minimum flow during low-flow years, four different types of regression analyses were used: left-censored, logistic, generalized-least-squares, and weighted-least-squares regression. A total of 192 streamgages were included in the development of 27 regression equations for the three low-flow regions. For the northeast and northwest regions, a censoring threshold was used to develop 12 left-censored regression equations to estimate the 6 low-flow frequency statistics for each region. For the southern region a total of 12 regression equations were developed; 6 logistic regression equations were developed to estimate the probability of zero flow for the 6 low-flow frequency statistics and 6 generalized least-squares regression equations were developed to estimate the 6 low-flow frequency statistics, if nonzero flow is estimated first by use of the logistic equations. A weighted-least-squares regression equation was developed for each region to estimate the harmonic-mean-flow statistic. 
Average standard errors of estimate for the left-censored equations for the northeast region range from 64.7 to 88.1 percent and for the northwest region range from 85.8 to 111.8 percent. Misclassification percentages for the logistic equations for the southern region range from 5.6 to 14.0 percent. Average standard errors of prediction for generalized least-squares equations for the southern region range from 71.7 to 98.9 percent and pseudo coefficients of determination for the generalized-least-squares equations range from 87.7 to 91.8 percent. Average standard errors of prediction for weighted-least-squares equations developed for estimating the harmonic-mean-flow statistic for each of the three regions range from 66.4 to 80.4 percent. The regression equations are applicable only to stream sites in Iowa with low flows not significantly affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. If the equations are used at ungaged sites on regulated streams, or on streams affected by water-supply and agricultural withdrawals, then the estimates will need to be adjusted by the amount of regulation or withdrawal to estimate the actual flow conditions if that is of interest. Caution is advised when applying the equations for basins with characteristics near the applicable limits of the equations and for basins located in karst topography. A test of two drainage-area ratio methods using 31 pairs of streamgages, for the annual 7-day mean low-flow statistic for a recurrence interval of 10 years, indicates a weighted drainage-area ratio method provides better estimates than regional regression equations for an ungaged site on a gaged stream in Iowa when the drainage-area ratio is between 0.5 and 1.4. These regression equations will be implemented within the U.S. Geological Survey StreamStats web-based geographic-information-system tool. StreamStats allows users to click on any ungaged site on a river and compute estimates of the seven selected statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged sites also are provided. StreamStats also allows users to click on any streamgage in Iowa and estimates computed for these seven selected statistics are provided for the streamgage.
Failure analysis and modeling of a VAXcluster system
NASA Technical Reports Server (NTRS)
Tang, Dong; Iyer, Ravishankar K.; Subramani, Sujatha S.
1990-01-01
This paper discusses the results of a measurement-based analysis of real error data collected from a DEC VAXcluster multicomputer system. In addition to evaluating basic system dependability characteristics such as error and failure distributions and hazard rates for both individual machines and for the VAXcluster, reward models were developed to analyze the impact of failures on the system as a whole. The results show that more than 46 percent of all failures were due to errors in shared resources, despite the fact that these errors have a recovery probability greater than 0.99. The hazard rate calculations show that not only errors but also failures occur in bursts: approximately 40 percent of all failures occurred in bursts and involved multiple machines. This result indicates that correlated failures are significant. Analysis of rewards shows that software errors have the lowest reward (0.05 versus 0.74 for disk errors). The expected reward rate (a reliability measure) of the VAXcluster drops to 0.5 in 18 hours for the 7-out-of-7 model and in 80 days for the 3-out-of-7 model.
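The 7-out-of-7 and 3-out-of-7 figures refer to how many of the cluster's machines must be operational for the system to count as up. As a much-simplified, hypothetical illustration of that idea (independent machines with a constant failure rate, which is a textbook assumption and not the paper's measurement-based reward model), the probability that at least k of n machines survive to time t is a binomial tail:

```python
from math import comb, exp

def k_of_n_reliability(k, n, failure_rate, t_hours):
    """P(at least k of n machines up at time t), assuming independent machines
    with exponentially distributed times to failure (a textbook simplification)."""
    p_up = exp(-failure_rate * t_hours)   # single-machine survival probability
    return sum(comb(n, i) * p_up**i * (1 - p_up)**(n - i) for i in range(k, n + 1))

# Hypothetical failure rate of one failure per 1,000 machine-hours
lam = 1.0 / 1000.0
print(f"7-of-7 after 18 h:    {k_of_n_reliability(7, 7, lam, 18):.3f}")
print(f"3-of-7 after 80 days: {k_of_n_reliability(3, 7, lam, 80 * 24):.3f}")
```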
40 CFR 80.1405 - What are the Renewable Fuel Standards?
Code of Federal Regulations, 2012 CFR
2012-07-01
... Renewable Fuel Standards? (a) (1) Renewable Fuel Standards for 2010. (i) The value of the cellulosic biofuel... shall be 1.10 percent. (iii) The value of the advanced biofuel standard for 2010 shall be 0.61 percent... Standards for 2011. (i) The value of the cellulosic biofuel standard for 2011 shall be 0.003 percent. (ii...
40 CFR 80.1405 - What are the Renewable Fuel Standards?
Code of Federal Regulations, 2013 CFR
2013-07-01
... Renewable Fuel Standards? (a) (1) Renewable Fuel Standards for 2010. (i) The value of the cellulosic biofuel... shall be 1.10 percent. (iii) The value of the advanced biofuel standard for 2010 shall be 0.61 percent... Standards for 2011. (i) The value of the cellulosic biofuel standard for 2011 shall be 0.003 percent. (ii...
40 CFR 80.1405 - What are the Renewable Fuel Standards?
Code of Federal Regulations, 2014 CFR
2014-07-01
... Renewable Fuel Standards? (a) (1) Renewable Fuel Standards for 2010. (i) The value of the cellulosic biofuel... shall be 1.10 percent. (iii) The value of the advanced biofuel standard for 2010 shall be 0.61 percent... Standards for 2011. (i) The value of the cellulosic biofuel standard for 2011 shall be 0.003 percent. (ii...
Problems in determining the surface density of the Galactic disk
NASA Technical Reports Server (NTRS)
Statler, Thomas S.
1989-01-01
A new method is presented for determining the local surface density of the Galactic disk from distance and velocity measurements of stars toward the Galactic poles. The procedure is fully three-dimensional, approximating the Galactic potential by a potential of Staeckel form and using the analytic third integral to treat the tilt and the change of shape of the velocity ellipsoid consistently. Applying the procedure to artificial data superficially resembling the K dwarf sample of Kuijken and Gilmore (1988, 1989), it is shown that the current best estimates of local disk surface density are uncertain by at least 30 percent. Of this, about 25 percent is due to the size of the velocity sample, about 15 percent comes from uncertainties in the rotation curve and the solar galactocentric distance, and about 10 percent from ignorance of the shape of the velocity distribution above z = 1 kpc, the errors adding in quadrature. Increasing the sample size by a factor of 3 will reduce the error to 20 percent. To achieve 10 percent accuracy, observations will be needed along other lines of sight to constrain the shape of the velocity ellipsoid.
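Combining the individual contributions quoted above in quadrature, as the abstract states, reproduces the roughly 30 percent total; the arithmetic is simply:

```latex
\sigma_{\mathrm{total}} \approx \sqrt{25^2 + 15^2 + 10^2}\ \mathrm{percent} \approx 31\ \mathrm{percent}
```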
NASA Astrophysics Data System (ADS)
Boes, Kelsey S.; Roberts, Michael S.; Vinueza, Nelson R.
2018-03-01
Complex mixture analysis is a costly and time-consuming task facing researchers with foci as varied as food science and fuel analysis. When faced with the task of quantifying oxygen-rich bio-oil molecules in a complex diesel mixture, we asked whether complex mixtures could be qualitatively and quantitatively analyzed on a single mass spectrometer with mid-range resolving power without the use of lengthy separations. To answer this question, we developed and evaluated a quantitation method that eliminated chromatography steps and expanded the use of quadrupole-time-of-flight mass spectrometry from primarily qualitative to quantitative as well. To account for mixture complexity, the method employed an ionization dopant, targeted tandem mass spectrometry, and an internal standard. This combination of three techniques achieved reliable quantitation of oxygen-rich eugenol in diesel from 300 to 2500 ng/mL with sufficient linearity (R2 = 0.97 ± 0.01) and excellent accuracy (percent error = 0% ± 5%). To understand the limitations of the method, it was compared to quantitation attained on a triple quadrupole mass spectrometer, the gold standard for quantitation. The triple quadrupole quantified eugenol from 50 to 2500 ng/mL with stronger linearity (R2 = 0.996 ± 0.003) than the quadrupole-time-of-flight and comparable accuracy (percent error = 4% ± 5%). This demonstrates that a quadrupole-time-of-flight can be used for not only qualitative analysis but also targeted quantitation of oxygen-rich lignin molecules in complex mixtures without extensive sample preparation. The rapid and cost-effective method presented here offers new possibilities for bio-oil research, including: (1) allowing for bio-oil studies that demand repetitive analysis as process parameters are changed and (2) making this research accessible to more laboratories.
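A minimal sketch of the internal-standard calibration arithmetic behind targeted quantitation of this kind, with made-up numbers (it does not reproduce the paper's dopant-assisted MS/MS method): the analyte/internal-standard response ratio is regressed against known concentrations, and unknowns are read off the resulting line.

```python
import numpy as np

# Hypothetical calibration data: known eugenol concentrations (ng/mL) and the
# measured analyte / internal-standard peak-area ratio for each standard.
conc = np.array([300, 600, 1200, 1800, 2500], dtype=float)
response_ratio = np.array([0.31, 0.60, 1.22, 1.79, 2.52])

# Ordinary least-squares calibration line: ratio = slope * conc + intercept
slope, intercept = np.polyfit(conc, response_ratio, 1)

def quantify(ratio):
    """Convert a measured response ratio to a concentration (ng/mL)."""
    return (ratio - intercept) / slope

# Percent error against a known spiked sample, as used to judge accuracy
true_conc, measured_ratio = 1500.0, 1.49
estimate = quantify(measured_ratio)
percent_error = 100.0 * (estimate - true_conc) / true_conc
print(f"estimated {estimate:.0f} ng/mL, percent error {percent_error:+.1f}%")
```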
Macek, Mark D; Manski, Richard J; Vargas, Clemencia M; Moeller, John F
2002-04-01
To compare estimates of dental visits among adults using three national surveys. Cross-sectional data from the National Health Interview Survey (NHIS), National Health and Nutrition Examination Survey (NHANES), and National Health Expenditure surveys (NMCES, NMES, MEPS). This secondary data analysis assessed whether overall estimates and stratum-specific trends are different across surveys. Dental visit data are age standardized via the direct method to the 1990 population of the United States. Point estimates, standard errors, and test statistics are generated using SUDAAN. Sociodemographic, stratum-specific trends are generally consistent across surveys; however, overall estimates differ (NHANES III [364-day estimate] versus 1993 NHIS: -17.5 percent difference, Z = 7.27, p value < 0.001; NHANES III [365-day estimate] vs. 1993 NHIS: 5.4 percent difference, Z = -2.50, p value = 0.006; MEPS vs. 1993 NHIS: -29.8 percent difference, Z = 16.71, p value < 0.001). MEPS is the least susceptible to intrusion, telescoping, and social desirability. Possible explanations for discrepancies include different reference periods, lead-in statements, question format, and social desirability of responses. Choice of survey should depend on the hypothesis. If trends are necessary, the choice of survey should not matter; however, if health status or expenditure associations are necessary, then surveys that contain these variables should be used, and if accurate overall estimates are necessary, then MEPS should be used. A validation study should be conducted to establish "true" utilization estimates.
[Errors in prescriptions and their preparation at the outpatient pharmacy of a regional hospital].
Alvarado A, Carolina; Ossa G, Ximena; Bustos M, Luis
2017-01-01
Adverse effects of medications are an important cause of morbidity and hospital admissions. Errors in the prescription or preparation of medications by pharmacy personnel are a factor that may influence the occurrence of these adverse effects. Aim: To assess the frequency and type of errors in prescriptions and in their preparation at the pharmacy unit of a regional public hospital. Prescriptions received by ambulatory patients and by those being discharged from the hospital were reviewed using a 12-item checklist. The preparation of such prescriptions at the pharmacy unit was also reviewed using a seven-item checklist. Seventy-two percent of prescriptions had at least one error. The most common mistake was the impossibility of determining the concentration of the prescribed drug. Prescriptions for patients being discharged from the hospital had the highest number of errors. When a prescription had more than two drugs, the risk of error increased 2.4 times. Twenty-four percent of prescription preparations had at least one error. The most common mistake was the labeling of drugs with incomplete medical indications. When a preparation included more than three drugs, the risk of preparation error increased 1.8 times. Prescription and preparation of medications delivered to patients had frequent errors. The most important risk factor for errors was the number of drugs prescribed.
Waltemeyer, Scott D.
2006-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for reliable flood-hazard mapping in the Navajo Nation in Arizona, Utah, Colorado, and New Mexico. The Bureau of Indian Affairs, U.S. Army Corps of Engineers, and Navajo Nation requested that the U.S. Geological Survey update estimates of peak-discharge magnitude for gaging stations in the region and update regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites using data collected through 1999 at 146 gaging stations, an additional 13 years of peak-discharge data since a 1997 investigation that used gaging-station data through 1986. The equations for estimation of peak discharges at ungaged sites were developed for flood regions 8, 11, high elevation, and 6, delineated on the basis of the hydrologic codes from the 1997 investigation. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 82 of the 146 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated peak discharges having a recurrence interval of less than 1.4 years from the probability-density function. Within each region, logarithms of the peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction for the peak discharge having a recurrence interval of 100 years in region 8 was 53 percent. Across the regions, the average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 45 to 83 percent for the 100-year flood. The estimated standard error of prediction for a hybrid method for region 11 was large in the 1997 investigation, and no distinction of floods produced from a high-elevation region was made in that investigation. Overall, the equations based on generalized least-squares regression techniques are considered to be more reliable than those in the 1997 report because of the increased length of record and improved GIS methods. Flood-frequency estimates can be transferred to ungaged sites on the same stream either by direct application of the regional regression equation or, for an ungaged site on a stream that has a gaging station upstream or downstream, by using the drainage-area ratio and the drainage-area exponent from the regional regression equation of the respective region.
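The drainage-area-ratio transfer mentioned in the last sentence is conventionally written as follows (a generic statement of the technique; b denotes the drainage-area exponent taken from the applicable regional regression equation):

```latex
Q_{T,\,\mathrm{ungaged}} = Q_{T,\,\mathrm{gaged}}
  \left(\frac{A_{\mathrm{ungaged}}}{A_{\mathrm{gaged}}}\right)^{b}
```

where Q_T is the T-year peak discharge and A is drainage area.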
Chu, David; Xiao, Jane; Shah, Payal; Todd, Brett
2018-06-20
Cognitive errors are a major contributor to medical error. Traditionally, medical errors at teaching hospitals are analyzed in morbidity and mortality (M&M) conferences. We aimed to describe the frequency of cognitive errors in relation to the occurrence of diagnostic and other error types, in cases presented at an emergency medicine (EM) resident M&M conference. We conducted a retrospective study of all cases presented at a suburban US EM residency monthly M&M conference from September 2011 to August 2016. Each case was reviewed using the electronic medical record (EMR) and notes from the M&M case by two EM physicians. Each case was categorized by type of primary medical error that occurred as described by Okafor et al. When a diagnostic error occurred, the case was reviewed for contributing cognitive and non-cognitive factors. Finally, when a cognitive error occurred, the case was classified into faulty knowledge, faulty data gathering or faulty synthesis, as described by Graber et al. Disagreements in error type were mediated by a third EM physician. A total of 87 M&M cases were reviewed; the two reviewers agreed on 73 cases, and 14 cases required mediation by a third reviewer. Forty-eight cases involved diagnostic errors, 47 of which were cognitive errors. Of these 47 cases, 38 involved faulty synthesis, 22 involved faulty data gathering and only 11 involved faulty knowledge. Twenty cases contained more than one type of cognitive error. Twenty-nine cases involved both a resident and an attending physician, while 17 cases involved only an attending physician. Twenty-one percent of the resident cases involved all three cognitive errors, while none of the attending cases involved all three. Forty-one percent of the resident cases and only 6% of the attending cases involved faulty knowledge. One hundred percent of the resident cases and 94% of the attending cases involved faulty synthesis. Our review of 87 EM M&M cases revealed that cognitive errors are commonly involved in cases presented, and that these errors are less likely due to deficient knowledge and more likely due to faulty synthesis. M&M conferences may therefore provide an excellent forum to discuss cognitive errors and how to reduce their occurrence.
Schulze, P.A.; Capel, P.D.; Squillace, P.J.; Helsel, D.R.
1993-01-01
The usefulness and sensitivity of a portable immunoassay test for the semiquantitative field screening of water samples were evaluated by means of laboratory and field studies. Laboratory results indicated that the tests were useful for the determination of atrazine concentrations of 0.1 to 1.5 μg/L. At a concentration of 1 μg/L, the relative standard deviation in the difference between the regression line and the actual result was about 40 percent. The immunoassay was less sensitive and produced similar errors for other triazine herbicides. After standardization, the test results were relatively insensitive to ionic content and variations in pH (range, 4 to 10), mildly sensitive to temperature changes, and quite sensitive to the timing of the final incubation step; variations in timing can be a significant source of error. Almost all of the immunoassays predicted a higher atrazine concentration in water samples when compared to results of gas chromatography. If these tests are used as a semiquantitative screening tool, this tendency for overprediction does not diminish the tests' usefulness. Generally, the tests seem to be a valuable method for screening water samples for triazine herbicides.
Homer, Collin G.; Aldridge, Cameron L.; Meyer, Debra K.; Schell, Spencer J.
2012-01-01
Sagebrush ecosystems in North America have experienced extensive degradation since European settlement. Further degradation continues from exotic invasive plants, altered fire frequency, intensive grazing practices, oil and gas development, and climate change, adding urgency to the need for ecosystem-wide understanding. Remote sensing is often identified as a key information source to facilitate ecosystem-wide characterization, monitoring, and analysis; however, approaches that characterize sagebrush with sufficient and accurate local detail across large enough areas to support this paradigm are unavailable. We describe the development of a new remote sensing sagebrush characterization approach for the state of Wyoming, U.S.A. This approach integrates 2.4 m QuickBird, 30 m Landsat TM, and 56 m AWiFS imagery into the characterization of four primary continuous field components, including percent bare ground, percent herbaceous cover, percent litter, and percent shrub, and four secondary components, including percent sagebrush (Artemisia spp.), percent big sagebrush (Artemisia tridentata), percent Wyoming sagebrush (Artemisia tridentata ssp. wyomingensis), and shrub height, using a regression tree. According to an independent accuracy assessment, primary component root mean square error (RMSE) values ranged from 4.90 to 10.16 for 2.4 m QuickBird, 6.01 to 15.54 for 30 m Landsat, and 6.97 to 16.14 for 56 m AWiFS. Shrub and herbaceous components outperformed the current data standard called LANDFIRE, with a shrub RMSE value of 6.04 versus 12.64 and a herbaceous component RMSE value of 12.89 versus 14.63. This approach offers new advancements in sagebrush characterization from remote sensing and provides a foundation to quantitatively monitor these components into the future.
da Rosa, Guilherme Nascimento; Del Fabro, Joana Possamai; Tomazoni, Fernanda; Tuchtenhagen, Simone; Alves, Luana Severo; Ardenghi, Thiago Machado
2016-03-01
The aim of this study was to assess the impact of malocclusion on children's oral health-related quality of life (COHRQoL) and self-reported happiness. A cross-sectional study was conducted in a representative sample of 12-year-old schoolchildren from Santa Maria, South Brazil. Four calibrated examiners carried out clinical exams to evaluate malocclusion [Dental Aesthetic Index (DAI)], dental caries (DMFT), and dental trauma (O'Brien classification, used in the Children's dental health survey in the UK, 1994). Participants answered the Brazilian versions of the Child Perceptions Questionnaire (CPQ11-14 ) and the Subjective Happiness Scale (SHS). Parents completed a structured questionnaire regarding socioeconomic status. Data analysis was conducted using multilevel Poisson regression models. A total of 1,134 adolescents (boys: 45.8 percent; girls: 54.1 percent) were enrolled in the study. The DAI overall score ranged from 13 to 63 (mean: 25.19, standard error: 0.19); 57.6 percent of the subjects had minor or no malocclusion and 24.4 percent had definite malocclusion. Severe malocclusion and handicapping malocclusion were found in 10.4 percent and 7.4 percent of the subjects, respectively. After adjustment, the severity of malocclusion was associated with high mean values of the CPQ11-14 overall score, and the emotional well-being and social well-being domains were the most affected. Lower levels of happiness were also associated with the severity of malocclusion: those with definite malocclusion presented lower scores on the SHS scale (Rate Ratio 0.97; 95 percent CI 0.94-0.99). Malocclusion had a negative impact on COHRQoL and happiness, mainly on the emotional and social domains. © 2015 American Association of Public Health Dentistry.
NASA Technical Reports Server (NTRS)
Kahle, A. B.; Alley, R. E.; Schieldge, J. P.
1984-01-01
The sensitivity of thermal inertia (TI) calculations to errors in the measurement or parameterization of a number of environmental factors is considered here. The factors include effects of radiative transfer in the atmosphere, surface albedo and emissivity, variations in surface turbulent heat flux density, cloud cover, vegetative cover, and topography. The error analysis is based upon data from the Heat Capacity Mapping Mission (HCMM) satellite for July 1978 at three separate test sites in the deserts of the western United States. Results show that typical errors in atmospheric radiative transfer, cloud cover, and vegetative cover can individually cause root-mean-square (RMS) errors of about 10 percent (with atmospheric effects sometimes as large as 30-40 percent) in HCMM-derived thermal inertia images of 20,000-200,000 pixels.
NASA Lewis Stirling engine computer code evaluation
NASA Technical Reports Server (NTRS)
Sullivan, Timothy J.
1989-01-01
In support of the U.S. Department of Energy's Stirling Engine Highway Vehicle Systems program, the NASA Lewis Stirling engine performance code was evaluated by comparing code predictions without engine-specific calibration factors to GPU-3, P-40, and RE-1000 Stirling engine test data. The error in predicting power output was -11 percent for the P-40 and 12 percent for the RE-1000 at design conditions, and 16 percent for the GPU-3 at near-design conditions (2000 rpm engine speed versus 3000 rpm at design). The efficiency and heat input predictions showed better agreement with engine test data than did the power predictions. Considering all data points, the error in predicting the GPU-3 brake power was significantly larger than for the other engines and was mainly a result of inaccuracy in predicting the pressure phase angle. Analysis of this pressure phase angle prediction error suggested that improvements to the cylinder hysteresis loss model could have a significant effect on overall Stirling engine performance predictions.
Error in Dasibi flight measurements of atmospheric ozone due to instrument wall-loss
NASA Technical Reports Server (NTRS)
Ainsworth, J. E.; Hagemeyer, J. R.; Reed, E. I.
1981-01-01
Theory suggests that in laminar flow the percent loss of a trace constituent to the walls of a measuring instrument varies as P to the -2/3, where P is the total gas pressure. Preliminary laboratory ozone wall-loss measurements confirm this P to the -2/3 dependence. Accurate assessment of wall-loss is thus of particular importance for those balloon-borne instruments utilizing laminar flow at ambient pressure, since the ambient pressure decreases by a factor of 350 during ascent to 40 km. Measurements and extrapolations made for a Dasibi ozone monitor modified for balloon flight indicate that the wall-loss error at 40 km was between 6 and 30 percent and that the wall-loss error in the derived total ozone column-content for the region from the surface to 40 km altitude was between 2 and 10 percent. At 1000 mb, turbulence caused an order of magnitude increase in the Dasibi wall-loss.
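Taking the stated P to the -2/3 scaling at face value, the factor-of-350 drop in ambient pressure during ascent implies that the fractional wall loss grows by roughly a factor of 50. This is simple arithmetic on the quoted scaling, not a result from the paper:

```latex
\frac{L_{40\,\mathrm{km}}}{L_{\mathrm{surface}}}
  \approx \left(\frac{P_{\mathrm{surface}}}{P_{40\,\mathrm{km}}}\right)^{2/3}
  \approx 350^{2/3} \approx 50
```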
Olmsted, F.H.; Welch, A.H.; Van Denburgh, A.S.; Ingebritsen, S.E.
1984-01-01
A flow-routing model of the upper Schoharie Creek basin, New York, was developed and used to simulate high flows at the inlet of the Blenheim-Gilboa Reservoir. The flows from Schoharie Creek at Prattsville, the primary source of flow data in the basin, and tributary flows from the six minor basins downstream are combined and routed along the 9.7-mile reach of Schoharie Creek between Prattsville and the reservoir inlet. Data from five historic floods were used for model calibration and four for verification. The accuracy of the model, as measured by the difference between simulated and observed total flow volumes, is within 14 percent. Results indicate that inflows to the Blenheim-Gilboa Reservoir can be predicted approximately 2 hours in advance. One of the historical floods was chosen for additional model testing to assess a hypothetical real-time model application. Total flow-volume errors ranged from 30.2 percent to -9.2 percent. Alternative methods of obtaining hydrologic data for model input are presented for use in the event that standard forms of hydrologic data are unavailable. (USGS)
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-01-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. PMID:29106476
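A hedged sketch of the linear TSRI estimator with a bootstrap standard error, using simulated data (the Newey and Terza corrections discussed in the paper are not reproduced here): the exposure is regressed on the instrument, and the first-stage residual is then included alongside the exposure in the outcome regression.

```python
import numpy as np

rng = np.random.default_rng(0)

def tsri_linear(y, x, z):
    """Linear two-stage residual inclusion: regress exposure x on instrument z,
    then regress outcome y on x plus the first-stage residual."""
    Z = np.column_stack([np.ones_like(z), z])
    x_hat = Z @ np.linalg.lstsq(Z, x, rcond=None)[0]
    resid = x - x_hat
    X = np.column_stack([np.ones_like(x), x, resid])
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    return beta[1]                       # coefficient on the exposure

# Simulated data: genotype-like instrument z, confounded exposure x,
# and outcome y with a true causal effect of 0.5.
n = 2000
z = rng.binomial(2, 0.3, n).astype(float)
u = rng.normal(size=n)                   # unmeasured confounder
x = 0.4 * z + u + rng.normal(size=n)
y = 0.5 * x + u + rng.normal(size=n)

estimate = tsri_linear(y, x, z)

# Nonparametric bootstrap standard error
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(tsri_linear(y[idx], x[idx], z[idx]))
print(f"TSRI estimate {estimate:.3f}, bootstrap SE {np.std(boot, ddof=1):.3f}")
```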
Skinner, Kenneth D.
2009-01-01
Elevation data in riverine environments can be used in various applications for which different levels of accuracy are required. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging) - or EAARL - system was used to obtain topographic and bathymetric data along the lower Boise River, southwestern Idaho, for use in hydraulic and habitat modeling. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL data collection, real-time kinematic global positioning system and total station ground-survey data were collected in three areas within the lower Boise River basin to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived elevation data, determined in open, flat terrain to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.082 to 0.138 m. Accuracies for bank, floodplain, and in-stream bathymetric data had root mean square errors ranging from 0.090 to 0.583 m. The greater root mean square errors for the latter data are the result of high levels of turbidity in the downstream ground-survey area, dense tree canopy, and horizontal location discrepancies between the EAARL and ground-survey data in steeply sloping areas such as riverbanks. The EAARL point to ground-survey comparisons produced results similar to those for the EAARL raster to ground-survey comparisons, indicating that the interpolation of the EAARL points to rasters did not introduce significant additional error. The mean percent error for the wetted cross-sectional areas of the two upstream ground-survey areas was 1 percent. The mean percent error increases to -18 percent if the downstream ground-survey area is included, reflecting the influence of turbidity in that area.
Effect of contrast on human speed perception
NASA Technical Reports Server (NTRS)
Stone, Leland S.; Thompson, Peter
1992-01-01
This study is part of an ongoing collaborative research effort between the Life Science and Human Factors Divisions at NASA ARC to measure the accuracy of human motion perception in order to predict potential errors in human perception/performance and to facilitate the design of display systems that minimize the effects of such deficits. The study describes how contrast manipulations can produce significant errors in human speed perception. Specifically, when two simultaneously presented parallel gratings are moving at the same speed within stationary windows, the lower-contrast grating appears to move more slowly. This contrast-induced misperception of relative speed is evident across a wide range of contrasts (2.5-50 percent) and does not appear to saturate (e.g., a 50 percent contrast grating appears slower than a 70 percent contrast grating moving at the same speed). The misperception is large: a 70 percent contrast grating must, on average, be slowed by 35 percent to match a 10 percent contrast grating moving at 2 deg/sec (N = 6). Furthermore, it is largely independent of the absolute contrast level and is a quasilinear function of the log contrast ratio. A preliminary parametric study shows that, although spatial frequency has little effect, the relative orientation of the two gratings is important. Finally, the effect depends on the temporal presentation of the stimuli: the effect of contrast on perceived speed appears lessened when the stimuli to be matched are presented sequentially. These data constrain both physiological models of visual cortex and models of human performance. We conclude that viewing conditions that affect contrast, such as fog, may cause significant errors in speed judgments.
Classen, S; McCarthy, D P; Shechtman, O; Awadzi, K D; Lanford, D N; Okun, M S; Rodriguez, R L; Romrell, J; Bridges, S; Kluger, B; Fernandez, H H
2009-12-01
To determine the correlations of the Useful Field of View (UFOV), compared to other clinical tests of Parkinson's disease (PD); vision; and cognition with measures of on-road driving assessments and to quantify the UFOV's ability to indicate passing/failing an on-road test in people with PD. Nineteen randomly selected people with idiopathic PD, mean age = 74.8 (6.1), 14 (73.7%) men, 18 (94.7%) Caucasians, were age-matched to 104 controls without PD. The controls had a mean age of 75.4 (6.4), 59 (56.7%) men, 96 (92.3%) Caucasians. Both groups were referred for a driving evaluation after institutional review board approval. Compared to neuropsychological and clinical tests of vision and cognition, the UFOV showed the strongest correlations (r > .75, p < 0.05) with measures of failing a standardized road test and number of driving errors. Among PD patients, the UFOV Risk Index score of 3 (range 1-5) was established as the optimal cutoff value for passing the on-road test, with sensitivity 87 percent and specificity 82 percent, AUC = 92 percent (SE 0.61, p = .002). Similarly, the UFOV 2 (divided attention) optimum cutoff value is 223 ms (range 16-500 ms), sensitivity 87.5 percent, specificity 81.8 percent, AUC = 91 percent (SE 0.73, p = .003). The UFOV 3 (selected attention) optimal cutoff value is 273 ms (range 16-500 ms), sensitivity 75 percent, specificity 72.7 percent, AUC = 87 percent (SE 0.81, p = .007). In this pilot study among PD patients, the UFOV may be a superior screening measure (compared to other measures of disease, cognition, and vision) for predicting on-road driving performance but its rigor must be verified in a larger sample of people with PD.
Minimum Copies of Schrödinger’s Cat State in the Multi-Photon System
Lu, Yiping; Zhao, Qing
2016-01-01
Multi-photon entanglement has been successfully studied by many theoretical and experimental groups. However, as the number of entangled photons increases, some problems are encountered, such as the exponential increase in the time needed to prepare the same number of copies of entangled states in experiment. In this paper, a new scheme is proposed based on Lagrange multipliers and feedback that cuts down the required number of copies of Schrödinger's Cat state in multi-photon experiments, is realized with some noise in actual measurements, and still keeps the standard deviation in the error of fidelity unchanged. It reduces the measuring time of the eight-photon Schrödinger's Cat state by about five percent compared with the scheme used in the usual planning of actual measurements, and moreover it guarantees the same low error in fidelity. In addition, we also applied the same approach to the simulation of ten-photon entanglement, and we found that it reduces in principle the required copies of Schrödinger's Cat state by about twenty-two percent compared with the conventionally used scheme of the uniform distribution; yet the distribution of optimized copies of the ten-photon Schrödinger's Cat state gives better fidelity estimation than the uniform distribution for the same number of copies of the ten-photon Schrödinger's Cat state. PMID:27576585
Chenausky, Karen; Kernbach, Julius; Norton, Andrea; Schlaug, Gottfried
2017-01-01
We investigated the relationship between imaging variables for two language/speech-motor tracts and speech fluency variables in 10 minimally verbal (MV) children with autism. Specifically, we tested whether measures of white matter integrity-fractional anisotropy (FA) of the arcuate fasciculus (AF) and frontal aslant tract (FAT)-were related to change in percent syllable-initial consonants correct, percent items responded to, and percent syllable-insertion errors (from best baseline to post 25 treatment sessions). Twenty-three MV children with autism spectrum disorder (ASD) received Auditory-Motor Mapping Training (AMMT), an intonation-based treatment to improve fluency in spoken output, and we report on seven who received a matched control treatment. Ten of the AMMT participants were able to undergo a magnetic resonance imaging study at baseline; their performance on baseline speech production measures is compared to that of the other two groups. No baseline differences were found between groups. A canonical correlation analysis (CCA) relating FA values for left- and right-hemisphere AF and FAT to speech production measures showed that FA of the left AF and right FAT were the largest contributors to the synthetic independent imaging-related variable. Change in percent syllable-initial consonants correct and percent syllable-insertion errors were the largest contributors to the synthetic dependent fluency-related variable. Regression analyses showed that FA values in the left AF significantly predicted change in percent syllable-initial consonants correct, no FA variables significantly predicted change in percent items responded to, and FA of the right FAT significantly predicted change in percent syllable-insertion errors. Results are consistent with previously identified roles for the AF in mediating bidirectional mapping between articulation and acoustics, and the FAT in its relationship to speech initiation and fluency. They further suggest a division of labor between the hemispheres, implicating the left hemisphere in accuracy of speech production and the right hemisphere in fluency in this population. Changes in response rate are interpreted as stemming from factors other than the integrity of these two fiber tracts. This study is the first to document the existence of a subgroup of MV children who experience increases in syllable-insertion errors as their speech develops in response to therapy.
24 CFR 982.604 - SRO: Voucher housing assistance payment.
Code of Federal Regulations, 2012 CFR
2012-04-01
... residing in SRO housing, the payment standard is 75 percent of the zero-bedroom payment standard amount on... payment standard is 75 percent of the HUD-approved zero-bedroom exception payment standard amount. (b) The utility allowance for an assisted person residing in SRO housing is 75 percent of the zero bedroom utility...
24 CFR 982.604 - SRO: Voucher housing assistance payment.
Code of Federal Regulations, 2014 CFR
2014-04-01
... residing in SRO housing, the payment standard is 75 percent of the zero-bedroom payment standard amount on... payment standard is 75 percent of the HUD-approved zero-bedroom exception payment standard amount. (b) The utility allowance for an assisted person residing in SRO housing is 75 percent of the zero bedroom utility...
24 CFR 982.604 - SRO: Voucher housing assistance payment.
Code of Federal Regulations, 2013 CFR
2013-04-01
... residing in SRO housing, the payment standard is 75 percent of the zero-bedroom payment standard amount on... payment standard is 75 percent of the HUD-approved zero-bedroom exception payment standard amount. (b) The utility allowance for an assisted person residing in SRO housing is 75 percent of the zero bedroom utility...
Moyer, Douglas; Hirsch, Robert M.; Hyer, Kenneth
2012-01-01
Nutrient and sediment fluxes and changes in fluxes over time are key indicators that water resource managers can use to assess the progress being made in improving the structure and function of the Chesapeake Bay ecosystem. The U.S. Geological Survey collects annual nutrient (nitrogen and phosphorus) and sediment flux data and computes trends that describe the extent to which water-quality conditions are changing within the major Chesapeake Bay tributaries. Two regression-based approaches were compared for estimating annual nutrient and sediment fluxes and for characterizing how these annual fluxes are changing over time. The two regression models compared are the traditionally used ESTIMATOR and the newly developed Weighted Regression on Time, Discharge, and Season (WRTDS). The model comparison focused on answering three questions: (1) What are the differences between the functional form and construction of each model? (2) Which model produces estimates of flux with the greatest accuracy and least amount of bias? (3) How different would the historical estimates of annual flux be if WRTDS had been used instead of ESTIMATOR? One additional point of comparison between the two models is how each model determines trends in annual flux once the year-to-year variations in discharge have been determined. All comparisons were made using total nitrogen, nitrate, total phosphorus, orthophosphorus, and suspended-sediment concentration data collected at the nine U.S. Geological Survey River Input Monitoring stations located on the Susquehanna, Potomac, James, Rappahannock, Appomattox, Pamunkey, Mattaponi, Patuxent, and Choptank Rivers in the Chesapeake Bay watershed. Two model characteristics that uniquely distinguish ESTIMATOR and WRTDS are the fundamental model form and the determination of model coefficients. ESTIMATOR and WRTDS both predict water-quality constituent concentration by developing a linear relation between the natural logarithm of observed constituent concentration and three explanatory variables—the natural log of discharge, time, and season. ESTIMATOR uses two additional explanatory variables—the square of the log of discharge and time-squared. Both models determine coefficients for variables for a series of estimation windows. ESTIMATOR establishes variable coefficients for a series of 9-year moving windows; all observed constituent concentration data within the 9-year window are used to establish each coefficient. Conversely, WRTDS establishes variable coefficients for each combination of discharge and time using only observed concentration data that are similar in time, season, and discharge to the day being estimated. As a result of these distinguishing characteristics, ESTIMATOR reproduces concentration-discharge relations that are closely approximated by a quadratic or linear function with respect to both the log of discharge and time. Conversely, the linear model form of WRTDS coupled with extensive model windowing for each combination of discharge and time allows WRTDS to reproduce observed concentration-discharge relations that are more sinuous in form. Another distinction between ESTIMATOR and WRTDS is the reporting of uncertainty associated with the model estimates of flux and trend. ESTIMATOR quantifies the standard error of prediction associated with the determination of flux and trends. 
The standard error of prediction enables the determination of the 95-percent confidence intervals for flux and trend as well as the ability to test whether the reported trend is significantly different from zero (where zero equals no trend). Conversely, WRTDS is unable to propagate error through the many (over 5,000) models for unique combinations of flow and time to determine a total standard error. As a result, WRTDS flux estimates are not reported with confidence intervals and a level of significance is not determined for flow-normalized fluxes. The differences between ESTIMATOR and WRTDS, with regard to model form and determination of model coefficients, have an influence on the determination of nutrient and sediment fluxes and associated changes in flux over time as a result of management activities. The comparison between the model estimates of flux and trend was made for combinations of five water-quality constituents at nine River Input Monitoring stations. The major findings with regard to nutrient and sediment fluxes are as follows: (1)WRTDS produced estimates of flux for all combinations that were more accurate, based on reduction in root mean squared error, than flux estimates from ESTIMATOR; (2) for 67 percent of the combinations, WRTDS and ESTIMATOR both produced estimates of flux that were minimally biased compared to observed fluxes(flux bias = tendency to over or underpredict flux observations); however, for 33 percent of the combinations, WRTDS produced estimates of flux that were considerably less biased (by at least 10 percent) than flux estimates from ESTIMATOR; (3) the average percent difference in annual fluxes generated by ESTIMATOR and WRTDS was less than 10 percent at 80 percent of the combinations; and (4) the greatest differences related to flux bias and annual fluxes all occurred for combinations where the pattern in observed concentration-discharge relation was sinuous (two points of inflection) rather than linear or quadratic (zero or one point of inflection). The major findings with regard to trends are as follows: (1) both models produce water-quality trends that have factored in the year-to-year variations in flow; (2) trends in water-quality condition are represented by ESTIMATOR as a trend in flow-adjusted concentration and by WRTDS as a flow normalized flux; (3) for 67 percent of the combinations with trend estimates, the WRTDS trends in flow-normalized flux are in the same direction and magnitude to the ESTIMATOR trends in flow-adjusted concentration, and at the remaining 33 percent the differences in trend magnitude and direction are related to fundamental differences between concentration and flux; and (4) the majority (85 percent) of the total nitrogen, nitrate, and orthophosphorus combinations exhibited long-term (1985 to 2010) trends in WRTDS flow-normalized flux that indicate improvement or reduction in associated flux and the majority (83 percent) of the total phosphorus (from 1985 to 2010) and suspended sediment (from 2001 to 2010) combinations exhibited trends in WRTDS flow-normalized flux that indicate degradation or increases in the flux delivered.
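For reference, the ESTIMATOR model form summarized above (log concentration regressed on log discharge, its square, time, time squared, and seasonal terms) is conventionally written as the seven-parameter regression below; WRTDS drops the two quadratic terms and instead lets the remaining coefficients vary with discharge and time through its windowed weighting. This is a hedged restatement of the description in the report, with coefficient notation added here:

```latex
\ln C = \beta_0 + \beta_1 \ln Q + \beta_2 (\ln Q)^2
      + \beta_3 T + \beta_4 T^2
      + \beta_5 \sin(2\pi T) + \beta_6 \cos(2\pi T) + \varepsilon
```

where C is constituent concentration, Q is daily mean discharge, and T is time in decimal years.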
ERIC Educational Resources Information Center
Titus, Freddie
2010-01-01
Fifty percent of college-bound students graduate from high school underprepared for mathematics at the post-secondary level. As a result, thirty-five percent of college students take developmental mathematics courses. What is even more shocking is the high failure rate (ranging from 35 to 42 percent) of students enrolled in developmental…
40 CFR 80.1405 - What are the Renewable Fuel Standards?
Code of Federal Regulations, 2011 CFR
2011-07-01
... Renewable Fuel Standards? (a) Renewable Fuel Standards for 2011. (1) The value of the cellulosic biofuel... be 0.69 percent. (3) The value of the advanced biofuel standard for 2011 shall be 0.78 percent. (4... ER10MY10.003 ER10MY10.004 Where: StdCB,i = The cellulosic biofuel standard for year i, in percent. StdBBD,i...
Height-Error Analysis for the FAA-Air Force Replacement Radar Program (FARR)
1991-08-01
Figure 1-7. Climatology errors by month: percent-frequency table of error by month (columns January through December); the tabular values are not recoverable in this extract.
Analytic barrage attack model. Final report, January 1986-January 1989
DOE Office of Scientific and Technical Information (OSTI.GOV)
St Ledger, J.W.; Naegeli, R.E.; Dowden, N.A.
An analytic model is developed for a nuclear barrage attack, assuming weapons with no aiming error and a cookie-cutter damage function. The model is then extended with approximations for the effects of aiming error and distance damage sigma. The final result is a fast-running model which calculates probability of damage for a barrage attack. The probability of damage is accurate to within seven percent or better, for weapon reliabilities of 50 to 100 percent, distance damage sigmas of 0.5 or less, and zero to very large circular error probabilities. FORTRAN 77 coding is included in the report for the analytic model and for a numerical model used to check the analytic results.
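As a hedged illustration of the kind of calculation involved, the sketch below uses a textbook cookie-cutter/circular-error model with independent, identically aimed weapons; it is not the report's barrage-pattern model and does not reproduce its FORTRAN 77 code.

```python
from math import exp, log

def single_weapon_pd(lethal_radius, cep, reliability=1.0):
    """Probability one weapon damages a point target under a cookie-cutter
    damage function, with circular-normal aiming error described by the CEP."""
    # For circular-normal error, P(impact within r) = 1 - exp(-r^2 ln 2 / CEP^2)
    return reliability * (1.0 - exp(-(lethal_radius / cep) ** 2 * log(2)))

def barrage_pd(n_weapons, lethal_radius, cep, reliability=1.0):
    """Probability of damage from n independent, identically aimed weapons."""
    p = single_weapon_pd(lethal_radius, cep, reliability)
    return 1.0 - (1.0 - p) ** n_weapons

# Hypothetical numbers: 4 weapons, lethal radius 600 m, CEP 900 m, 80% reliability
print(f"P(damage) = {barrage_pd(4, 600.0, 900.0, 0.8):.3f}")
```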
Estimating flood hydrographs for urban basins in North Carolina
Mason, R.R.; Bales, J.D.
1996-01-01
A dimensionless hydrograph for North Carolina was developed from data collected in 29 urban and urbanizing basins in the State. The dimensionless hydrograph can be used with an estimate of peak flow and basin lagtime to synthesize a design flood hydrograph for urban basins in North Carolina. Peak flows can be estimated from a number of available techniques; a procedure for estimating basin lagtime from main channel length, stream slope, and percentage of impervious area was developed from data collected at 50 sites and is presented in this report. The North Carolina dimensionless hydrograph provides satisfactory predictions of flood hydrographs in all regions of the State except for basins in or near Asheville, where the method overestimated 11 of 12 measured hydrographs. A previously developed dimensionless hydrograph for urban basins in the Piedmont and upper Coastal Plain of South Carolina provides better flood-hydrograph predictions for the Asheville basins and has a standard error of 21 percent as compared to 41 percent for the North Carolina dimensionless hydrograph.
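A minimal sketch of how a dimensionless hydrograph is applied (the ordinates below are hypothetical placeholders, not the published North Carolina values): each dimensionless ordinate (t/TL, Q/Qp) is scaled by the basin lagtime TL and the estimated peak flow Qp to produce the design hydrograph.

```python
import numpy as np

# Hypothetical dimensionless ordinates: time as a fraction of basin lagtime,
# discharge as a fraction of peak flow. Replace with the published values.
t_ratio = np.array([0.0, 0.4, 0.7, 0.9, 1.0, 1.2, 1.6, 2.2, 3.0])
q_ratio = np.array([0.0, 0.12, 0.45, 0.86, 1.0, 0.82, 0.48, 0.18, 0.0])

def synthesize_hydrograph(peak_flow_cfs, lagtime_hr, n_points=50):
    """Scale the dimensionless hydrograph by peak flow and basin lagtime."""
    t = np.linspace(0.0, t_ratio[-1] * lagtime_hr, n_points)   # hours
    q = peak_flow_cfs * np.interp(t / lagtime_hr, t_ratio, q_ratio)
    return t, q

t, q = synthesize_hydrograph(peak_flow_cfs=1200.0, lagtime_hr=2.5)
print(f"peak of synthesized hydrograph: {q.max():.0f} cfs at t = {t[q.argmax()]:.2f} hr")
```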
Palmer, Tom M; Holmes, Michael V; Keating, Brendan J; Sheehan, Nuala A
2017-11-01
Mendelian randomization studies use genotypes as instrumental variables to test for and estimate the causal effects of modifiable risk factors on outcomes. Two-stage residual inclusion (TSRI) estimators have been used when researchers are willing to make parametric assumptions. However, researchers are currently reporting uncorrected or heteroscedasticity-robust standard errors for these estimates. We compared several different forms of the standard error for linear and logistic TSRI estimates in simulations and in real-data examples. Among others, we consider standard errors modified from the approach of Newey (1987), Terza (2016), and bootstrapping. In our simulations Newey, Terza, bootstrap, and corrected 2-stage least squares (in the linear case) standard errors gave the best results in terms of coverage and type I error. In the real-data examples, the Newey standard errors were 0.5% and 2% larger than the unadjusted standard errors for the linear and logistic TSRI estimators, respectively. We show that TSRI estimators with modified standard errors have correct type I error under the null. Researchers should report TSRI estimates with modified standard errors instead of reporting unadjusted or heteroscedasticity-robust standard errors. © The Author(s) 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health.
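A minimal simulation sketch of a linear two-stage residual inclusion (TSRI) estimator with nonparametric bootstrap standard errors, one of the modified standard errors compared in the article; the Newey and Terza corrections themselves are not reproduced, and the data-generating values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5000

# Simulated data: genotype Z (instrument), confounder U, exposure X, outcome Y.
z = rng.binomial(2, 0.3, n).astype(float)
u = rng.normal(size=n)
x = 0.5 * z + u + rng.normal(size=n)
y = 0.4 * x + u + rng.normal(size=n)          # true causal effect = 0.4

def tsri_estimate(z, x, y):
    """Linear TSRI: regress X on Z, then include the first-stage residual in the outcome model."""
    stage1 = np.column_stack([np.ones_like(z), z])
    b1, *_ = np.linalg.lstsq(stage1, x, rcond=None)
    resid = x - stage1 @ b1
    stage2 = np.column_stack([np.ones_like(x), x, resid])
    b2, *_ = np.linalg.lstsq(stage2, y, rcond=None)
    return b2[1]                               # coefficient on X = causal-effect estimate

estimate = tsri_estimate(z, x, y)

# Nonparametric bootstrap over individuals for the standard error.
boot = []
for _ in range(500):
    idx = rng.integers(0, n, n)
    boot.append(tsri_estimate(z[idx], x[idx], y[idx]))
se = np.std(boot, ddof=1)

print(f"TSRI estimate = {estimate:.3f}, bootstrap SE = {se:.3f}")
```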
Safety and Cost Assessment of Connected and Automated Vehicles
DOT National Transportation Integrated Search
2018-03-29
Many light-duty vehicle crashes occur due to human error and distracted driving. The National Highway Traffic Safety Administration (NHTSA) reports that ten percent of all fatal crashes and seventeen percent of injury crashes in 2011 were a result of...
Sloat, J.V.; Gain, W.S.
1995-01-01
Index-velocity data collected with acoustic velocity meters, stage data, and cross-sectional area data were used to calculate discharge at three low-velocity, tidal streamflow stations in northeast Florida. Discharge at the three streamflow stations was computed as the product of the channel cross-sectional area and the mean velocity as determined from an index velocity measured in the stream using an acoustic velocity meter. The tidal streamflow stations used in the study were: Six Mile Creek near Picolata, Fla.; Dunns Creek near Satsuma, Fla.; and the St. Johns River at Buffalo Bluff. Cross-sectional areas at the measurement sections ranged from about 3,000 square feet at Six Mile Creek to about 18,500 square feet at St. Johns River at Buffalo Bluff. Physical characteristics for all three streams were similar except for drainage area. The topography primarily is low-relief, swampy terrain; stream velocities ranged from about -2 to 2 feet per second; and the average change in stage was about 1 foot. Instantaneous discharge was measured using a portable acoustic current meter at each of the three streams to develop a relation between the mean velocity in the stream and the index velocity measured by the acoustic velocity meter. Using least-squares linear regression, a simple linear relation between mean velocity and index velocity was determined. Index velocity was the only significant linear predictor of mean velocity for Six Mile Creek and St. Johns River at Buffalo Bluff. For Dunns Creek, both index velocity and stage were used to develop a multiple-linear predictor of mean velocity. Stage-area curves for each stream were developed from bathymetric data. Instantaneous discharge was computed by multiplying results of relations developed for cross-sectional area and mean velocity. Principal sources of error in the estimated discharge are identified as: (1) instrument errors associated with measurement of stage and index velocity, (2) errors in the representation of mean daily stage and index velocity due to natural variability over time and space, and (3) errors in cross-sectional area and mean-velocity ratings based on stage and index velocity. Standard errors for instantaneous discharge for the median cross-sectional area for Six Mile Creek, Dunns Creek, and St. Johns River at Buffalo Bluff were 94, 360, and 1,980 cubic feet per second, respectively. Standard errors for mean daily discharge for the median cross-sectional area for Six Mile Creek, Dunns Creek, and St. Johns River at Buffalo Bluff were 25, 65, and 455 cubic feet per second, respectively. Mean daily discharge at the three sites ranged from about -500 to 1,500 cubic feet per second at Six Mile Creek and Dunns Creek and from about -500 to 15,000 cubic feet per second on the St. Johns River at Buffalo Bluff. For periods of high discharge, the AVM index-velocity method tended to produce estimates accurate to within 2 to 6 percent. For periods of moderate discharge, errors in discharge may increase to more than 50 percent. At low flows, errors as a percentage of discharge increase toward infinity.
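A minimal sketch of the index-velocity computation described above: a least-squares rating between mean velocity and index velocity, a stage-area rating, and discharge as their product. The calibration data and ratings below are hypothetical, not the published ratings for these stations.

```python
import numpy as np

# Hypothetical calibration data from discharge measurements.
index_velocity = np.array([-1.5, -0.8, -0.2, 0.3, 0.9, 1.6])   # ft/s, from the acoustic meter
mean_velocity = np.array([-1.2, -0.6, -0.1, 0.25, 0.7, 1.3])   # ft/s, measured in the channel

# Simple linear rating: mean velocity as a function of index velocity.
slope, intercept = np.polyfit(index_velocity, mean_velocity, 1)

# Hypothetical stage-area rating from bathymetric data (stage in ft, area in ft^2).
stage_pts = np.array([0.0, 1.0, 2.0, 3.0])
area_pts = np.array([2800.0, 3000.0, 3250.0, 3550.0])

def discharge(stage, index_v):
    """Discharge = cross-sectional area (from stage) x mean velocity (from index velocity)."""
    area = np.interp(stage, stage_pts, area_pts)
    v_mean = slope * index_v + intercept
    return area * v_mean

print(f"Q = {discharge(1.4, 0.6):.0f} ft^3/s")
```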
Forward Global Photometric Calibration of the Dark Energy Survey
DOE Office of Scientific and Technical Information (OSTI.GOV)
Burke, D. L.; Rykoff, E. S.; Allam, S.
2017-12-28
Many scientific goals for the Dark Energy Survey (DES) require calibration of optical/NIR broadband $b = grizY$ photometry that is stable in time and uniform over the celestial sky to one percent or better. It is also necessary to limit to similar accuracy systematic uncertainty in the calibrated broadband magnitudes due to uncertainty in the spectrum of the source. Here we present a "Forward Global Calibration Method (FGCM)" for photometric calibration of the DES, and we present results of its application to the first three years of the survey (Y3A1). The FGCM combines data taken with auxiliary instrumentation at the observatory with data from the broad-band survey imaging itself and models of the instrument and atmosphere to estimate the spatial- and time-dependence of the passbands of individual DES survey exposures. "Standard" passbands are chosen that are typical of the passbands encountered during the survey. The passband of any individual observation is combined with an estimate of the source spectral shape to yield a magnitude $m_b^{\mathrm{std}}$ in the standard system. This "chromatic correction" to the standard system is necessary to achieve sub-percent calibrations. The FGCM achieves reproducible and stable photometric calibration of standard magnitudes $m_b^{\mathrm{std}}$ of stellar sources over the multi-year Y3A1 data sample with residual random calibration errors of $\sigma=5-6\,\mathrm{mmag}$ per exposure. The accuracy of the calibration is uniform across the $5000\,\mathrm{deg}^2$ DES footprint to within $\sigma=7\,\mathrm{mmag}$. The systematic uncertainties of magnitudes in the standard system due to the spectra of sources are less than $5\,\mathrm{mmag}$ for main sequence stars with $0.5
Deng, Xi; Schröder, Simone; Redweik, Sabine; Wätzig, Hermann
2011-06-01
Gel electrophoresis (GE) is a very common analytical technique for proteome research and protein analysis. Despite being developed decades ago, there is still a considerable need to improve its precision. Using the fluorescence of Colloidal Coomassie Blue -stained proteins in near-infrared (NIR), the major error source caused by the unpredictable background staining is strongly reduced. This result was generalized for various types of detectors. Since GE is a multi-step procedure, standardization of every single step is required. After detailed analysis of all steps, the staining and destaining were identified as the major source of the remaining variation. By employing standardized protocols, pooled percent relative standard deviations of 1.2-3.1% for band intensities were achieved for one-dimensional separations in repetitive experiments. The analysis of variance suggests that the same batch of staining solution should be used for gels of one experimental series to minimize day-to-day variation and to obtain high precision. Copyright © 2011 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Morris, Keith B; Law, Eric F; Jefferys, Roger L; Dearth, Elizabeth C; Fabyanic, Emily B
2017-11-01
Through analysis and comparison of firing pin, breech face, and ejector impressions, where appropriate, firearm examiners may connect a cartridge case to a suspect firearm with a certain likelihood in a criminal investigation. When a firearm is not present, an examiner may use the Integrated Ballistics Identification System (IBIS ® ), an automated search and retrieval system coupled with the National Integrated Ballistics Information Network (NIBIN), a database of images showing the markings on fired cartridge cases and bullets from crime scenes along with test fired firearms. For the purpose of measurement quality control of these IBIS ® systems the National Institute of Standards and Technology (NIST) initiated the Standard Reference Material (SRM) 2460/2461 standard bullets and cartridge cases project. The aim of this study was to evaluate the overall performance of the IBIS ® system by using NIST standard cartridge cases. By evaluating the resulting correlation scores, error rates, and percent recovery, both the variability between and within examiners when using IBIS ® , in addition to any inter- and intra-variability between SRM cartridge cases was observed. Copyright © 2017 Elsevier B.V. All rights reserved.
Converting international ¼ inch tree volume to Doyle
Aaron Holley; John R. Brooks; Stuart A. Moss
2014-01-01
An equation for converting Mesavage and Girard's International ¼ inch tree volumes to the Doyle log rule is presented as a function of tree diameter. Volume error for trees having less than four logs exhibited volume prediction errors within a range of ±10 board feet. In addition, volume prediction error as a percent of actual Doyle tree volume...
NASA Astrophysics Data System (ADS)
Stout, Matthew
The purpose of this study is to explore the feasibility of yttria-stabilized zirconia (Y-TZP) in fixed lingual retention as an alternative to stainless steel. Exploratory Y-TZP specimens were milled to establish design parameters. Next, specimens were milled according to ASTM standard C1161-13 and subjected to four-point flexural test to determine materials properties. Finite Element (FE) Analysis was employed to evaluate nine novel cross-sectional designs and compared to stainless steel wire. Each design was analyzed under the loading conditions to determine von Mises and bond stress. The most promising design was fabricated to assess accuracy and precision of current CAD/CAM milling technology. The superior design had a 1.0 x 0.5 mm semi-elliptical cross section and was shown to be fabricated reliably. Overall, the milling indicated a maximum percent standard deviation of 9.3 and maximum percent error of 13.5 with a cost of $30 per specimen. Y-TZP can be reliably milled to dimensions comparable to currently available metallic retainer wires. Further research is necessary to determine the success of bonding protocol and clinical longevity of Y-TZP fixed retainers. Advanced technology is necessary to connect the intraoral scan to an aesthetic and patient-specific Y-TZP fixed retainer.
46 CFR 42.20-7 - Flooding standard: Type “B” vessel, 60 percent reduction.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Flooding standard: Type “B” vessel, 60 percent reduction... DOMESTIC AND FOREIGN VOYAGES BY SEA Freeboards § 42.20-7 Flooding standard: Type “B” vessel, 60 percent... applied to the following flooding standards: (1) If the vessel is 225 meters (738 feet) or less in length...
46 CFR 42.20-7 - Flooding standard: Type “B” vessel, 60 percent reduction.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Flooding standard: Type “B” vessel, 60 percent reduction... DOMESTIC AND FOREIGN VOYAGES BY SEA Freeboards § 42.20-7 Flooding standard: Type “B” vessel, 60 percent... applied to the following flooding standards: (1) If the vessel is 225 meters (738 feet) or less in length...
46 CFR 42.20-7 - Flooding standard: Type “B” vessel, 60 percent reduction.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Flooding standard: Type “B” vessel, 60 percent reduction... DOMESTIC AND FOREIGN VOYAGES BY SEA Freeboards § 42.20-7 Flooding standard: Type “B” vessel, 60 percent... applied to the following flooding standards: (1) If the vessel is 225 meters (738 feet) or less in length...
Nilles, M.A.; Gordon, J.D.; Schroder, L.J.; Paulin, C.E.
1995-01-01
The U.S. Geological Survey used four programs in 1991 to provide external quality assurance for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN). An intersite-comparison program was used to evaluate onsite pH and specific-conductance determinations. The effects of routine sample handling, processing, and shipping of wet-deposition samples on analyte determinations and an estimated precision of analyte values and concentrations were evaluated in the blind-audit program. Differences between analytical results and an estimate of the analytical precision of four laboratories routinely measuring wet deposition were determined by an interlaboratory-comparison program. Overall precision estimates for the precipitation-monitoring system were determined for selected sites by a collocated-sampler program. Results of the intersite-comparison program indicated that 93 and 86 percent of the site operators met the NADP/NTN accuracy goal for pH determinations during the two intersite-comparison studies completed during 1991. The results also indicated that 96 and 97 percent of the site operators met the NADP/NTN accuracy goal for specific-conductance determinations during the two 1991 studies. The effects of routine sample handling, processing, and shipping, determined in the blind-audit program, indicated significant positive bias (α=0.01) for calcium, magnesium, sodium, potassium, chloride, nitrate, and sulfate. Significant negative bias (α=0.01) was determined for hydrogen ion and specific conductance. Only ammonium determinations were not biased. A Kruskal-Wallis test indicated that there were no significant (α=0.01) differences in analytical results from the four laboratories participating in the interlaboratory-comparison program. Results from the collocated-sampler program indicated the median relative error for cation concentration and deposition exceeded eight percent at most sites, whereas the median relative error for sample volume, sulfate, and nitrate concentration at all sites was less than four percent. The median relative error for hydrogen ion concentration and deposition ranged from 4.6 to 18.3 percent at the four sites and, as indicated in previous years of the study, was inversely proportional to the acidity of the precipitation at a given site. Overall, collocated-sampling error typically was five times that of laboratory error estimates for most analytes.
Assimilation of Freeze - Thaw Observations into the NASA Catchment Land Surface Model
NASA Technical Reports Server (NTRS)
Farhadi, Leila; Reichle, Rolf H.; DeLannoy, Gabrielle J. M.; Kimball, John S.
2014-01-01
The land surface freeze-thaw (F-T) state plays a key role in the hydrological and carbon cycles and thus affects water and energy exchanges and vegetation productivity at the land surface. In this study, we developed an F-T assimilation algorithm for the NASA Goddard Earth Observing System, version 5 (GEOS-5) modeling and assimilation framework. The algorithm includes a newly developed observation operator that diagnoses the landscape F-T state in the GEOS-5 Catchment land surface model. The F-T analysis is a rule-based approach that adjusts Catchment model state variables in response to binary F-T observations, while also considering forecast and observation errors. A regional observing system simulation experiment was conducted using synthetically generated F-T observations. The assimilation of perfect (error-free) F-T observations reduced the root-mean-square errors (RMSE) of surface temperature and soil temperature by 0.206 °C and 0.061 °C, respectively, when compared to model estimates (equivalent to a relative RMSE reduction of 6.7 percent and 3.1 percent, respectively). For a maximum classification error (CEmax) of 10 percent in the synthetic F-T observations, the F-T assimilation reduced the RMSE of surface temperature and soil temperature by 0.178 °C and 0.036 °C, respectively. For CEmax=20 percent, the F-T assimilation still reduces the RMSE of model surface temperature estimates by 0.149 °C but yields no improvement over the model soil temperature estimates. The F-T assimilation scheme is being developed to exploit planned operational F-T products from the NASA Soil Moisture Active Passive (SMAP) mission.
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2012 CFR
2012-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2014 CFR
2014-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2011 CFR
2011-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2013 CFR
2013-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
7 CFR 801.4 - Tolerances for dockage testers.
Code of Federal Regulations, 2010 CFR
2010-01-01
....10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Riddle separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Sieve separation ±0.10 percent, mean deviation from standard dockage tester using Hard Red Winter wheat Total...
Welch, Alan H.
1994-01-01
Aquifers in Carson and Eagle Valleys are an important source of water for human consumption and agriculture. Concentrations of major constituents in water from the principal aquifers on the west sides of Carson and Eagle Valleys appear to be a result of natural geochemical reactions with minerals derived primarily from plutonic rocks. In general, water from principal aquifers is acceptable for drinking when compared with current (1993) Nevada State drinking-water maximum contaminant level standards. Water was collected and analyzed for all inorganic constituents for which primary or secondary drinking-water standards have been established. About 3 percent of these sites had constituents that exceeded one or more primary standards, and water at about 10 percent of the sites had at least one constituent that surpassed a secondary standard. Arsenic exceeded the standard in water at less than 1 percent of the principal aquifer sites; nitrate surpassed its standard in water at 3 percent of 93 sites. Water from wells in the principal aquifer with high concentrations of nitrate was in areas where septic systems are used; these concentrations indicate that contamination may be entering the wells. Concentrations of naturally occurring radionuclides in water from the principal aquifers exceed the proposed Federal standards for some constituents, but were not found to be above current (1993) State standards. The uranium concentrations exceeded the proposed 20 micrograms per liter Federal standard at 10 percent of the sites. Of the sites analyzed for all of the inorganic constituents with primary standards plus uranium, 15 percent exceeded one or more established standards. If the proposed 20 micrograms per liter standard for uranium is applied to the sampled sites, then 23 percent would exceed the standard for uranium or some other constituent with a primary drinking-water standard. This represents a 50-percent increase in the frequency of exceedance. Almost all water sampled from the principal aquifers exceeds the 300 picocuries per liter proposed standard for radon. Ground-water sampling sites with the highest radon activities in water are most commonly located in the upland aquifers in the Sierra Nevada and in the principal aquifers beneath the west sides of Carson and Eagle Valleys.
NASA Astrophysics Data System (ADS)
Sarkar, Arnab; Karki, Vijay; Aggarwal, Suresh K.; Maurya, Gulab S.; Kumar, Rohit; Rai, Awadhesh K.; Mao, Xianglei; Russo, Richard E.
2015-06-01
Laser induced breakdown spectroscopy (LIBS) was applied for elemental characterization of high alloy steel using partial least squares regression (PLSR) with an objective to evaluate the analytical performance of this multivariate approach. The optimization of the number of principal components for minimizing error in the PLSR algorithm was investigated. The effect of different pre-treatment procedures on the raw spectral data before PLSR analysis was evaluated based on several statistical parameters (standard error of prediction, percentage relative error of prediction, etc.). The pre-treatment with the "NORM" parameter gave the optimum statistical results. The analytical performance of the PLSR model improved by increasing the number of laser pulses accumulated per spectrum as well as by truncating the spectrum to an appropriate wavelength region. It was found that the statistical benefit of truncating the spectrum can also be accomplished by increasing the number of laser pulses per accumulation without spectral truncation. The constituents (Co and Mo) present in hundreds of ppm were determined with relative precision of 4-9% (2σ), whereas the major constituents Cr and Ni (present at a few percent levels) were determined with a relative precision of ~2% (2σ).
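A minimal sketch of the multivariate calibration step: partial least squares regression on synthetic spectra, with the number of components chosen by cross-validated prediction error. The total-intensity scaling shown is a generic stand-in for the "NORM" pre-treatment; the study's spectra and settings are not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)

# Hypothetical training set: 40 spectra x 500 intensity channels with known concentrations.
n_samples, n_channels = 40, 500
concentration = rng.uniform(0.1, 5.0, n_samples)              # e.g., weight percent of an analyte
spectra = rng.normal(0, 0.05, (n_samples, n_channels))
spectra[:, 120] += concentration                               # a synthetic analyte emission line
spectra[:, 300] += 0.5                                         # a constant matrix line

# Simple normalization pre-treatment: scale each spectrum to unit total intensity.
spectra = spectra / np.abs(spectra).sum(axis=1, keepdims=True)

# Choose the number of PLS components by minimizing cross-validated RMSE.
best_k, best_rmse = None, np.inf
for k in range(1, 11):
    pred = cross_val_predict(PLSRegression(n_components=k), spectra, concentration, cv=5)
    rmse = np.sqrt(np.mean((pred.ravel() - concentration) ** 2))
    if rmse < best_rmse:
        best_k, best_rmse = k, rmse

print(f"optimal components = {best_k}, cross-validated RMSE = {best_rmse:.3f}")
```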
Highly Efficient Compression Algorithms for Multichannel EEG.
Shaw, Laxmi; Rahman, Daleef; Routray, Aurobinda
2018-05-01
The difficulty associated with processing and understanding the high dimensionality of electroencephalogram (EEG) data requires developing efficient and robust compression algorithms. In this paper, different lossless compression techniques for single and multichannel EEG data, including Huffman coding, arithmetic coding, Markov predictor, linear predictor, context-based error modeling, multivariate autoregression (MVAR), and a low complexity bivariate model, have been examined and their performances compared. Furthermore, a high-compression algorithm named general MVAR and a modified context-based error modeling for multichannel EEG have been proposed. The resulting compression algorithm produces a higher relative compression ratio of 70.64% on average compared with the existing methods, and in some cases, it goes up to 83.06%. The proposed methods are designed to compress a large amount of multichannel EEG data efficiently so that the data storage and transmission bandwidth can be effectively used. These methods have been validated using several experimental multichannel EEG recordings of different subjects and publicly available standard databases. The satisfactory parametric measures of these methods, namely percent root-mean-square distortion, peak signal-to-noise ratio, root-mean-square error, and cross correlation, show their superiority over the state-of-the-art compression methods.
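A minimal sketch of two of the performance measures named above, computed for a toy signal: compression ratio and percent root-mean-square distortion (PRD) between an original and a reconstructed channel. Simple 8-bit quantization stands in for a coder; the paper's MVAR and context-based methods are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy single-channel "EEG" segment (microvolts).
original = np.cumsum(rng.normal(0, 1.0, 2048))

# Stand-in for a lossy coder: quantize to 8 bits over the signal range.
lo, hi = original.min(), original.max()
codes = np.round((original - lo) / (hi - lo) * 255).astype(np.uint8)
reconstructed = codes.astype(float) / 255 * (hi - lo) + lo

def prd(x, x_hat):
    """Percent root-mean-square distortion of the reconstruction."""
    return 100.0 * np.sqrt(np.sum((x - x_hat) ** 2) / np.sum(x ** 2))

original_bits = original.size * 64          # float64 samples
compressed_bits = codes.size * 8            # 8-bit codes
print(f"compression ratio = {original_bits / compressed_bits:.1f}:1")
print(f"PRD = {prd(original, reconstructed):.3f}%")
```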
NASA Technical Reports Server (NTRS)
Boughner, R.; Larsen, J. C.; Natarajan, M.
1980-01-01
The influence of short lived photochemically produced species on solar occultation measurements of ClO and NO was examined. Time varying altitude profiles of ClO and NO were calculated with a time dependent photochemical model to simulate the distribution of these species during a solar occultation measurement. These distributions were subsequently used to calculate simulated radiances for various tangent paths from which mixing ratios were inferred with a conventional technique that assumes spherical symmetry. These results show that neglecting the variation of ClO in the retrieval process produces less than a 10 percent error between the true and inverted profile for both sunrise and sunset above 18 km. For NO, errors are less than 10 percent for tangent altitudes above about 35 km for sunrise and sunset; at lower altitudes, the error increases, approaching 100 percent at altitudes near 25 km. the results also show that average inhomogeneity factors, which measure the concentration variation along the tangent path and which can be calculated from a photochemical model, can indicate which species require more careful data analysis.
NASA Astrophysics Data System (ADS)
Porter, J. M.; Jeffries, J. B.; Hanson, R. K.
2009-09-01
A novel three-wavelength mid-infrared laser-based absorption/extinction diagnostic has been developed for simultaneous measurement of temperature and vapor-phase mole fraction in an evaporating hydrocarbon fuel aerosol (vapor and liquid droplets). The measurement technique was demonstrated for an n-decane aerosol with D50 ≈ 3 μm in steady and shock-heated flows with a measurement bandwidth of 125 kHz. Laser wavelengths were selected from FTIR measurements of the C-H stretching band of vapor and liquid n-decane near 3.4 μm (3000 cm⁻¹), and from modeled light scattering from droplets. Measurements were made for vapor mole fractions below 2.3 percent with errors less than 10 percent, and simultaneous temperature measurements over the range 300 K < T < 900 K were made with errors less than 3 percent. The measurement technique is designed to provide accurate values of temperature and vapor mole fraction in evaporating polydispersed aerosols with small mean diameters (D50 < 10 μm), where near-infrared laser-based scattering corrections are prone to error.
ERIC Educational Resources Information Center
Huprich, Julia; Green, Ravonne
2007-01-01
The Council on Public Liberal Arts Colleges (COPLAC) libraries websites were assessed for Section 508 errors using the online WebXACT tool. Only three of the twenty-one institutions (14%) had zero accessibility errors. Eighty-six percent of the COPLAC institutions had an average of 1.24 errors. Section 508 compliance is required for institutions…
Martin, Gary R.; Fowler, Kathleen K.; Arihood, Leslie D.
2016-09-06
Information on low-flow characteristics of streams is essential for the management of water resources. This report provides equations for estimating the 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and the harmonic-mean flow at ungaged, unregulated stream sites in Indiana. These equations were developed using the low-flow statistics and basin characteristics for 108 continuous-record streamgages in Indiana with at least 10 years of daily mean streamflow data through the 2011 climate year (April 1 through March 31). The equations were developed in cooperation with the Indiana Department of Environmental Management. Regression techniques were used to develop the equations for estimating low-flow frequency statistics and the harmonic-mean flows on the basis of drainage-basin characteristics. A geographic information system was used to measure basin characteristics for selected streamgages. A final set of 25 basin characteristics measured at all the streamgages were evaluated to choose the best predictors of the low-flow statistics. Logistic-regression equations applicable statewide are presented for estimating the probability that selected low-flow frequency statistics equal zero. These equations use the explanatory variables total drainage area, average transmissivity of the full thickness of the unconsolidated deposits within 1,000 feet of the stream network, and latitude of the basin outlet. The percentage of the streamgage low-flow statistics correctly classified as zero or nonzero using the logistic-regression equations ranged from 86.1 to 88.9 percent. Generalized-least-squares regression equations applicable statewide for estimating nonzero low-flow frequency statistics use total drainage area, the average hydraulic conductivity of the top 70 feet of unconsolidated deposits, the slope of the basin, and the index of permeability and thickness of the Quaternary surficial sediments as explanatory variables. The average standard error of prediction of these regression equations ranges from 55.7 to 61.5 percent. Regional weighted-least-squares regression equations were developed for estimating the harmonic-mean flows by dividing the State into three low-flow regions. The Northern region uses total drainage area and the average transmissivity of the entire thickness of unconsolidated deposits as explanatory variables. The Central region uses total drainage area, the average hydraulic conductivity of the entire thickness of unconsolidated deposits, and the index of permeability and thickness of the Quaternary surficial sediments. The Southern region uses total drainage area and the percent of the basin covered by forest. The average standard error of prediction for these equations ranges from 39.3 to 66.7 percent. The regional regression equations are applicable only to stream sites with low flows unaffected by regulation and to stream sites with drainage basin characteristic values within specified limits. Caution is advised when applying the equations for basins with characteristics near the applicable limits, for basins with karst drainage features, and for urbanized basins. Extrapolations near and beyond the applicable basin characteristic limits will have unknown errors that may be large. Equations are presented for use in estimating the 90-percent prediction interval of the low-flow statistics estimated by use of the regression equations at a given stream site. The regression equations are to be incorporated into the U.S. Geological Survey StreamStats Web-based application for Indiana. StreamStats allows users to select a stream site on a map and automatically measure the needed basin characteristics and compute the estimated low-flow statistics and associated prediction intervals.
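A minimal sketch of the first, logistic-regression step described above: estimating the probability that a low-flow statistic equals zero from basin characteristics. The explanatory variables mirror those named in the report, but the data and fitted coefficients are hypothetical, not the published Indiana equations.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(4)

# Hypothetical basin characteristics for 108 streamgages.
n = 108
drainage_area = rng.lognormal(4.0, 1.0, n)           # square miles
transmissivity = rng.lognormal(6.0, 0.8, n)          # ft^2/day, unconsolidated deposits near the stream
latitude = rng.uniform(37.8, 41.8, n)                # degrees, basin outlet

# Hypothetical zero-flow indicator (1 = the 7-day, 10-year low flow equals zero),
# loosely tied to the predictors for illustration only.
logit = 2.0 - 0.4 * np.log(drainage_area) - 0.3 * np.log(transmissivity) + 0.2 * (latitude - 39)
is_zero = (rng.uniform(size=n) < 1 / (1 + np.exp(-logit))).astype(int)

X = np.column_stack([np.log(drainage_area), np.log(transmissivity), latitude])
model = LogisticRegression().fit(X, is_zero)

new_site = np.array([[np.log(50.0), np.log(300.0), 40.1]])
print(f"P(7Q10 = 0) at the hypothetical new site: {model.predict_proba(new_site)[0, 1]:.2f}")
```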
Importance of Geosat orbit and tidal errors in the estimation of large-scale Indian Ocean variations
NASA Technical Reports Server (NTRS)
Perigaud, Claire; Zlotnicki, Victor
1992-01-01
To improve the accuracy of estimates of large-scale meridional sea-level variations, Geosat ERM data on the Indian Ocean for a 26-month period were processed using two different techniques of orbit error reduction. The first technique removes an along-track polynomial of degree 1 over about 5000 km, and the second technique removes an along-track once-per-revolution sine wave over about 40,000 km. Results obtained show that the polynomial technique produces stronger attenuation of both the tidal error and the large-scale oceanic signal. After filtering, the residual difference between the two methods represents 44 percent of the total variance and 23 percent of the annual variance. The sine-wave method yields a larger estimate of annual and interannual meridional variations.
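A minimal sketch of the second orbit-error-reduction technique: fitting and removing a once-per-revolution sinusoid from along-track height residuals by linear least squares. The data are synthetic and the amplitudes are illustrative.

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic along-track residuals over one revolution: orbit error (once-per-rev sine)
# plus a shorter-scale oceanic signal and noise, versus argument of latitude theta (radians).
theta = np.linspace(0, 2 * np.pi, 2000, endpoint=False)
orbit_error = 0.8 * np.sin(theta + 0.6)                 # meters, unknown amplitude and phase
ocean_signal = 0.1 * np.sin(5 * theta)                  # shorter-scale oceanography to preserve
h = orbit_error + ocean_signal + rng.normal(0, 0.03, theta.size)

# Least-squares fit of a + b*sin(theta) + c*cos(theta); subtract the fitted sinusoid.
design = np.column_stack([np.ones_like(theta), np.sin(theta), np.cos(theta)])
coef, *_ = np.linalg.lstsq(design, h, rcond=None)
corrected = h - design @ coef

print(f"fitted once-per-rev amplitude = {np.hypot(coef[1], coef[2]):.2f} m")
print(f"residual standard deviation after correction = {corrected.std():.3f} m")
```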
NASA Technical Reports Server (NTRS)
Ulich, B. L.; Rhodes, P. J.; Davis, J. H.; Hollis, J. M.
1980-01-01
Careful observations have been made at 86.1 GHz to derive the absolute brightness temperatures of the sun (7914 ± 192 K), Venus (357.5 ± 13.1 K), Jupiter (179.4 ± 4.7 K), and Saturn (153.4 ± 4.8 K) with a standard error of about three percent. This is a significant improvement in accuracy over previous results at millimeter wavelengths. A stable transmitter and novel superheterodyne receiver were constructed and used to determine the effective collecting area of the Millimeter Wave Observatory (MWO) 4.9-m antenna relative to a previously calibrated standard gain horn. The thermal scale was set by calibrating the radiometer with carefully constructed and tested hot and cold loads. The brightness temperatures may be used to establish an absolute calibration scale and to determine the antenna aperture and beam efficiencies of other radio telescopes at 3.5-mm wavelength.
Forecasting impact injuries of unrestrained occupants in railway vehicle passenger compartments.
Xie, Suchao; Zhou, Hui
2014-01-01
In order to predict the injury parameters of the occupants corresponding to different experimental parameters and to determine impact injury indices conveniently and efficiently, a model forecasting occupant impact injury was established in this work. The work was based on finite experimental observation values obtained by numerical simulation. First, the various factors influencing the impact injuries caused by the interaction between unrestrained occupants and the compartment's internal structures were collated and the most vulnerable regions of the occupant's body were analyzed. Then, the forecast model was set up based on a genetic algorithm-back propagation (GA-BP) hybrid algorithm, which unified the individual characteristics of the back propagation-artificial neural network (BP-ANN) model and the genetic algorithm (GA). The model was well suited to studies of occupant impact injuries and allowed multiple-parameter forecasts of the occupant impact injuries to be realized assuming values for various influencing factors. Finally, the forecast results for three types of secondary collision were analyzed using forecasting accuracy evaluation methods. All of the results showed the ideal accuracy of the forecast model. When an occupant faced a table, the relative errors between the predicted and experimental values of the respective injury parameters were kept within ± 6.0 percent and the average relative error (ARE) values did not exceed 3.0 percent. When an occupant faced a seat, the relative errors between the predicted and experimental values of the respective injury parameters were kept within ± 5.2 percent and the ARE values did not exceed 3.1 percent. When the occupant faced another occupant, the relative errors between the predicted and experimental values of the respective injury parameters were kept within ± 6.3 percent and the ARE values did not exceed 3.8 percent. The injury forecast model established in this article reduced repeat experiment times and improved the design efficiency of the internal compartment's structure parameters, and it provided a new way for assessing the safety performance of the interior structural parameters in existing, and newly designed, railway vehicle compartments.
Anning, David W.; Paul, Angela P.; McKinney, Tim S.; Huntington, Jena M.; Bexfield, Laura M.; Thiros, Susan A.
2012-01-01
The National Water-Quality Assessment (NAWQA) Program of the U.S. Geological Survey (USGS) is conducting a regional analysis of water quality in the principal aquifer systems across the United States. The Southwest Principal Aquifers (SWPA) study is building a better understanding of the susceptibility and vulnerability of basin-fill aquifers in the region to groundwater contamination by synthesizing baseline knowledge of groundwater-quality conditions in 16 basins previously studied by the NAWQA Program. The improved understanding of aquifer susceptibility and vulnerability to contamination is assisting in the development of tools that water managers can use to assess and protect the quality of groundwater resources. Human-health concerns and economic considerations associated with meeting drinking-water standards motivated a study of the vulnerability of basin-fill aquifers to nitrate contamination and arsenic enrichment in the southwestern United States. Statistical models were developed by using the random forest classifier algorithm to predict concentrations of nitrate and arsenic across a model grid that represents about 190,600 square miles of basin-fill aquifers in parts of Arizona, California, Colorado, Nevada, New Mexico, and Utah. The statistical models, referred to as classifiers, reflect natural and human-related factors that affect aquifer vulnerability to contamination and relate nitrate and arsenic concentrations to explanatory variables representing local- and basin-scale measures of source, aquifer susceptibility, and geochemical conditions. The classifiers were unbiased and fit the observed data well, and misclassifications were primarily due to statistical sampling error in the training datasets. The classifiers were designed to predict concentrations to be in one of six classes for nitrate, and one of seven classes for arsenic. Each classification scheme allowed for identification of areas with concentrations that were equal to or exceeding the U.S. Environmental Protection Agency drinking-water standard. Whereas 2.4 percent of the area underlain by basin-fill aquifers in the study area was predicted to equal or exceed this standard for nitrate (10 milligrams per liter as N; mg/L), 42.7 percent was predicted to equal or exceed the standard for arsenic (10 micrograms per liter; μg/L). Areas predicted to equal or exceed the drinking-water standard for nitrate include basins in central Arizona near Phoenix; the San Joaquin, Inland, and San Jacinto basins of California; and the San Luis Valley of Colorado. Much of the area predicted to equal or exceed the drinking-water standard for arsenic is within a belt of basins along the western portion of the Basin and Range Physiographic Province in Nevada, California, and Arizona. Predicted nitrate and arsenic concentrations are substantially lower than the drinking-water standards in much of the study area—about 93.0 percent of the area underlain by basin-fill aquifers was less than one-half the standard for nitrate (5.0 mg/L), and 50.2 percent was less than one-half the standard for arsenic (5.0 μg/L).
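A minimal sketch of the statistical approach described above: a random forest classifier that predicts a concentration class from local- and basin-scale explanatory variables. The variables, class breaks, and data are hypothetical, not those of the SWPA study.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(6)

# Hypothetical training data: explanatory variables for 1,000 wells.
n = 1000
septic_density = rng.lognormal(0.0, 1.0, n)      # source term (systems per square kilometer)
depth_to_water = rng.uniform(5, 200, n)          # susceptibility (feet)
dissolved_oxygen = rng.uniform(0.1, 9.0, n)      # geochemical condition (mg/L)

# Hypothetical nitrate concentration and class breaks (mg/L as N), the last class >= 10.
nitrate = (0.5 * septic_density + 0.02 * (200 - depth_to_water)
           + 0.3 * dissolved_oxygen + rng.exponential(1.0, n))
classes = np.digitize(nitrate, [0.5, 1.0, 2.0, 5.0, 10.0])

X = np.column_stack([septic_density, depth_to_water, dissolved_oxygen])
X_train, X_test, y_train, y_test = train_test_split(X, classes, test_size=0.25, random_state=0)

forest = RandomForestClassifier(n_estimators=300, oob_score=True, random_state=0)
forest.fit(X_train, y_train)

print(f"out-of-bag accuracy = {forest.oob_score_:.2f}")
print(f"holdout accuracy    = {forest.score(X_test, y_test):.2f}")
print("variable importances:", np.round(forest.feature_importances_, 2))
```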
Shcherbina, Anna; Mattsson, C. Mikael; Waggott, Daryl; Salisbury, Heidi; Christle, Jeffrey W.; Hastie, Trevor; Wheeler, Matthew T.; Ashley, Euan A.
2017-01-01
The ability to measure physical activity through wrist-worn devices provides an opportunity for cardiovascular medicine. However, the accuracy of commercial devices is largely unknown. The aim of this work is to assess the accuracy of seven commercially available wrist-worn devices in estimating heart rate (HR) and energy expenditure (EE) and to propose a wearable sensor evaluation framework. We evaluated the Apple Watch, Basis Peak, Fitbit Surge, Microsoft Band, Mio Alpha 2, PulseOn, and Samsung Gear S2. Participants wore devices while being simultaneously assessed with continuous telemetry and indirect calorimetry while sitting, walking, running, and cycling. Sixty volunteers (29 male, 31 female, age 38 ± 11 years) of diverse age, height, weight, skin tone, and fitness level were selected. Error in HR and EE was computed for each subject/device/activity combination. Devices reported the lowest error for cycling and the highest for walking. Device error was higher for males, greater body mass index, darker skin tone, and walking. Six of the devices achieved a median error for HR below 5% during cycling. No device achieved an error in EE below 20 percent. The Apple Watch achieved the lowest overall error in both HR and EE, while the Samsung Gear S2 reported the highest. In conclusion, most wrist-worn devices adequately measure HR in laboratory-based activities, but poorly estimate EE, suggesting caution in the use of EE measurements as part of health improvement programs. We propose reference standards for the validation of consumer health devices (http://precision.stanford.edu/). PMID:28538708
Seebeck Coefficient Metrology: Do Contemporary Protocols Measure Up?
NASA Astrophysics Data System (ADS)
Martin, Joshua; Wong-Ng, Winnie; Green, Martin L.
2015-06-01
Comparative measurements of the Seebeck coefficient are challenging due to the diversity of instrumentation and measurement protocols. With the implementation of standardized measurement protocols and the use of Standard Reference Materials (SRMs®), for example, the recently certified National Institute of Standards and Technology (NIST) SRM® 3451 ``Low Temperature Seebeck Coefficient Standard (10-390 K)'', researchers can reliably analyze and compare data, both intra- and inter-laboratory, thereby accelerating the development of more efficient thermoelectric materials and devices. We present a comparative overview of commonly adopted Seebeck coefficient measurement practices. First, we examine the influence of asynchronous temporal and spatial measurement of electric potential and temperature. Temporal asynchronicity introduces error in the absolute Seebeck coefficient of the order of ≈10%, whereas spatial asynchronicity introduces error of the order of a few percent. Second, we examine the influence of poor thermal contact between the measurement probes and the sample. This is especially critical at high temperature, wherein the prevalent mode of measuring surface temperature is facilitated by pressure contact. Each topic will include the comparison of data measured using different measurement techniques and using different probe arrangements. We demonstrate that the probe arrangement is the primary limit to high accuracy, wherein the Seebeck coefficients measured by the 2-probe arrangement and those measured by the 4-probe arrangement diverge with the increase in temperature, approaching ≈14% at 900 K. Using these analyses, we provide recommended measurement protocols to guide members of the thermoelectric materials community in performing more accurate measurements and in evaluating more comprehensive uncertainty limits.
Precision of Four Acoustic Bone Measurement Devices
NASA Technical Reports Server (NTRS)
Miller, Christopher; Rianon, Nahid; Feiveson, Alan; Shackelford, Linda; LeBlanc, Adrian
2000-01-01
Though many studies have quantified the precision of various acoustic bone measurement devices, it is difficult to directly compare the results among the studies, because they used disparate subject pools, did not specify the estimation methodology, or did not use consistent definitions for various precision characteristics. In this study, we used a repeated measures design protocol to directly determine the precision characteristics of four acoustic bone measurement devices: the Mechanical Response Tissue Analyzer (MRTA), the UBA-575+, the SoundScan 2000 (S2000), and the Sahara Ultrasound Bone Analyzer. Ten men and ten women were scanned on all four devices by two different operators at five discrete time points: Week 1, Week 2, Week 3, Month 3 and Month 6. The percent coefficient of variation (%CV) and standardized coefficient of variation were computed for the following precision characteristics: interoperator effect, operator-subject interaction, short-term error variance, and long-term drift. The MRTA had high interoperator errors for its ulnar and tibial stiffness measures and a large long-term drift in its tibial stiffness measurement. The UBA-575+ exhibited large short-term error variances and long-term drift for all three of its measurements. The S2000's tibial speed of sound measurement showed a high short-term error variance and a significant operator-subject interaction but very good values (less than 1%) for the other precision characteristics. The Sahara seemed to have the best overall performance, but was hampered by a large %CV for short-term error variance in its broadband ultrasound attenuation measure.
Patient identification errors are common in a simulated setting.
Henneman, Philip L; Fisher, Donald L; Henneman, Elizabeth A; Pham, Tuan A; Campbell, Megan M; Nathanson, Brian H
2010-06-01
We evaluate the frequency and accuracy of health care workers verifying patient identity before performing common tasks. The study included prospective, simulated patient scenarios with an eye-tracking device that showed where the health care workers looked. Simulations involved nurses administering an intravenous medication, technicians labeling a blood specimen, and clerks applying an identity band. Participants were asked to perform their assigned task on 3 simulated patients, and the third patient had a different date of birth and medical record number than the identity information on the artifact label specific to the health care workers' task. Health care workers were unaware that the focus of the study was patient identity. Sixty-one emergency health care workers participated--28 nurses, 16 technicians, and 17 emergency service associates--in 183 patient scenarios. Sixty-one percent of health care workers (37/61) caught the identity error (61% nurses, 94% technicians, 29% emergency service associates). Thirty-nine percent of health care workers (24/61) performed their assigned task on the wrong patient (39% nurses, 6% technicians, 71% emergency service associates). Eye-tracking data were available for 73% of the patient scenarios (133/183). Seventy-four percent of health care workers (74/100) failed to match the patient to the identity band (87% nurses, 49% technicians). Twenty-seven percent of health care workers (36/133) failed to match the artifact to the patient or the identity band before performing their task (33% nurses, 9% technicians, 33% emergency service associates). Fifteen percent (5/33) of health care workers who completed the steps to verify patient identity on the patient with the identification error still failed to recognize the error. Wide variation exists among health care workers verifying patient identity before performing everyday tasks. Education, process changes, and technology are needed to improve the frequency and accuracy of patient identification. Copyright (c) 2009. Published by Mosby, Inc.
Medication Administration Practices of School Nurses.
ERIC Educational Resources Information Center
McCarthy, Ann Marie; Kelly, Michael W.; Reed, David
2000-01-01
Assessed medication administration practices among school nurses, surveying members of the National Association of School Nurses. Respondents were extremely concerned about medication administration. Errors in administering medications were reported by 48.5 percent of respondents, with missed doses the most common error. Most nurses followed…
Map-based trigonometric parallaxes of open clusters - The Pleiades
NASA Technical Reports Server (NTRS)
Gatewood, George; Castelaz, Michael; Han, Inwoo; Persinger, Timothy; Stein, John
1990-01-01
The multichannel astrometric photometer and Thaw refractor of the University of Pittsburgh's Allegheny Observatory have been used to determine the trigonometric parallax of the Pleiades star cluster. The distance determined, 150 parsecs with a standard error of 18 parsecs, places the cluster slightly farther away than is generally accepted. This suggests that the basis of many estimations of the cosmic distance scale is approximately 20 percent short. The accuracy of the determination is limited by the number and choice of reference stars. With careful attention to the selection of reference stars in several Pleiades regions, it should be possible to examine differences in the photometric and trigonometric modulus at a precision of 0.1 magnitudes.
A biometric identification system based on eigenpalm and eigenfinger features.
Ribaric, Slobodan; Fratric, Ivan
2005-11-01
This paper presents a multimodal biometric identification system based on the features of the human hand. We describe a new biometric approach to personal identification using eigenfinger and eigenpalm features, with fusion applied at the matching-score level. The identification process can be divided into the following phases: capturing the image; preprocessing; extracting and normalizing the palm and strip-like finger subimages; extracting the eigenpalm and eigenfinger features based on the K-L transform; matching and fusion; and, finally, a decision based on the (k, l)-NN classifier and thresholding. The system was tested on a database of 237 people (1,820 hand images). The experimental results showed the effectiveness of the system in terms of the recognition rate (100 percent), the equal error rate (EER = 0.58 percent), and the total error rate (TER = 0.72 percent).
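A minimal sketch of the eigen-feature idea: hypothetical palm subimages are projected onto principal components learned from an enrollment set (the "eigenpalm" subspace) and matched with a nearest-neighbor rule. The paper's fusion, thresholding, and (k, l)-NN details are not reproduced.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(7)

# Hypothetical normalized palm subimages: 50 people x 6 images each, 32x32 pixels.
n_people, n_images, size = 50, 6, 32
prototypes = rng.normal(0, 1, (n_people, size * size))
images = prototypes[:, None, :] + rng.normal(0, 0.4, (n_people, n_images, size * size))
labels = np.repeat(np.arange(n_people), n_images)
X = images.reshape(-1, size * size)

# Enrollment set (first 4 images per person) and query set (last 2 per person).
enroll = np.tile([True] * 4 + [False] * 2, n_people)
pca = PCA(n_components=40).fit(X[enroll])               # the "eigenpalms"
features_train = pca.transform(X[enroll])
features_query = pca.transform(X[~enroll])

matcher = KNeighborsClassifier(n_neighbors=1).fit(features_train, labels[enroll])
accuracy = matcher.score(features_query, labels[~enroll])
print(f"rank-1 identification rate: {100 * accuracy:.1f}%")
```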
Angermeier, Ingo; Dunford, Benjamin B; Boss, Alan D; Boss, R Wayne
2009-01-01
Numerous challenges confront managers in the healthcare industry, making it increasingly difficult for healthcare organizations to gain and sustain a competitive advantage. Contemporary management challenges in the industry have many different origins (e.g., economic, financial, clinical, and legal), but there is growing recognition that some of management's greatest problems have organizational roots. Thus, healthcare organizations must examine their personnel management strategies to ensure that they are optimized for fostering a highly committed and productive workforce. Drawing on a sample of 2,522 employees spread across 312 departments within a large U.S. healthcare organization, this article examines the impact of a participative management climate on four employee-level outcomes that represent some of the greatest challenges in the healthcare industry: customer service, medical errors, burnout, and turnover intentions. This study provides clear evidence that employee perceptions of the extent to which their work climate is participative rather than authoritarian have important implications for critical work attitudes and behavior. Specifically, employees in highly participative work climates provided 14 percent better customer service, committed 26 percent fewer clinical errors, demonstrated 79 percent lower burnout, and felt 61 percent lower likelihood of leaving the organization than employees in more authoritarian work climates. These findings suggest that participative management initiatives have a significant impact on the commitment and productivity of individual employees, likely improving the patient care and effectiveness of healthcare organizations as a whole.
Estimating pore and cement volumes in thin section
Halley, R.B.
1978-01-01
Point count estimates of pore, grain and cement volumes from thin sections are inaccurate, often by more than 100 percent, even though they may be surprisingly precise (reproducibility ±3 percent). Errors are produced by: 1) inclusion of submicroscopic pore space within solid volume and 2) edge effects caused by grain curvature within a 30-micron-thick thin section. Submicroscopic porosity may be measured by various physical tests or may be visually estimated from scanning electron micrographs. Edge error takes the form of an envelope around grains and increases with decreasing grain size and sorting, increasing grain irregularity and tighter grain packing. Cements are greatly involved in edge error because of their position at grain peripheries and their generally small grain size. Edge error is minimized by methods which reduce the thickness of the sample viewed during point counting. Methods which effectively reduce thickness include use of ultra-thin thin sections or acetate peels, point counting in reflected light, or carefully focusing and counting on the upper surface of the thin section.
Pettijohn, Robert A.; Busby, John F.; Cervantes, Michael A.
1993-01-01
The U.S. Geological Survey used four programs in 1990 to provide external data quality assurance for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN). Results of the intersite-comparison program indicate that 80 and 74 percent of the site operators met the NADP/NTN goals for pH determination and 98 and 95 percent of the site operators met the NADP/NTN goals for specific-conductance determination during the two studies in 1990. The effects of routine sample handling, processing, and shipping determined in the blind-audit program indicated significant positive bias for calcium, magnesium, sodium, potassium, chloride, nitrate, and sulfate. Significant negative bias was determined for hydrogen ion and specific conductance. A Kruskal-Wallis test indicated that there were no significant (α = 0.01) differences in analytical results from the three laboratories participating in the interlaboratory-comparison program. Results from the collocated-sampler study indicate the median relative error for potassium and ammonium concentration and deposition exceeded 15 percent at most sites while the median relative error for sulfate and nitrate at all sites was less than 6 percent for concentration and was less than 15 percent for deposition.
NASA Technical Reports Server (NTRS)
Anbar, A. D.; Allen, M.; Nair, H. A.
1993-01-01
We have investigated the impact of high-resolution, temperature-dependent CO2 cross-section measurements, reported by Lewis and Carver (1983), on calculations of photodissociation rate coefficients in the Martian atmosphere. We find that the adoption of 50 Å intervals for the purpose of computational efficiency results in errors in the calculated values for photodissociation of CO2, H2O, and O2 which are generally not above 10 percent, but as large as 20 percent in some instances. These are acceptably small errors, especially considering the uncertainties introduced by the large temperature dependence of the CO2 cross section. The inclusion of temperature-dependent CO2 cross sections is shown to lead to a decrease in the diurnally averaged rate of CO2 photodissociation as large as 33 percent at some altitudes, and increases of as much as 950 percent and 80 percent in the photodissociation rate coefficients of H2O and O2, respectively. The actual magnitude of the changes depends on the assumptions used to model the CO2 absorption spectrum at temperatures lower than the available measurements, and at wavelengths longward of 1970 Å.
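A schematic of the quadrature issue discussed above: the photodissociation rate coefficient is the wavelength integral of cross section times actinic flux, and coarse 50 Å averaging of a highly structured cross section changes the result. The spectra below are synthetic placeholders, not the Lewis and Carver (1983) data.

```python
import numpy as np

def photolysis_rate(cross_section_cm2, flux_photons, bin_width_A):
    """J = sum over wavelength of sigma(lambda) * F(lambda) * dlambda  [1/s]."""
    return np.sum(cross_section_cm2 * flux_photons) * bin_width_A

def rebin(values, factor):
    """Average consecutive samples into coarser bins (e.g., 1 A -> 50 A)."""
    n = (len(values) // factor) * factor
    return values[:n].reshape(-1, factor).mean(axis=1)

# Synthetic, structured cross section and smooth flux on a 1 A grid.
wl = np.arange(1200.0, 2000.0, 1.0)
sigma = 1e-19 * (1 + 0.8 * np.sin(wl / 3.0) ** 2) * np.exp(-(wl - 1400.0) ** 2 / 2e5)
flux = 1e11 * np.exp((wl - 1200.0) / 400.0)

j_fine = photolysis_rate(sigma, flux, bin_width_A=1.0)
j_coarse = photolysis_rate(rebin(sigma, 50), rebin(flux, 50), bin_width_A=50.0)
print(f"percent difference from 50 A binning: {100 * (j_coarse - j_fine) / j_fine:.2f}%")
```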
Genetic and Environmental Contributions to Educational Attainment in Australia.
ERIC Educational Resources Information Center
Miller, Paul; Mulvey, Charles; Martin, Nick
2001-01-01
Data from a large sample of Australian twins indicate that 50 to 65 percent of variance in educational attainments can be attributed to genetic endowments. Only about 25 to 40 percent may be due to environmental factors, depending on adjustments for measurement error and assortative mating. (Contains 51 references.) (MLH)
47 CFR 101.91 - Involuntary relocation procedures.
Code of Federal Regulations, 2010 CFR
2010-10-01
... engineering, equipment, site and FCC fees, as well as any legitimate and prudent transaction expenses incurred..., reliability is measured by the percent of time the bit error rate (BER) exceeds a desired value, and for analog or digital voice transmissions, it is measured by the percent of time that audio signal quality...
Haba, Tomonobu; Kondo, Shimpei; Hayashi, Daiki; Koyama, Shuji
2013-07-01
Detective quantum efficiency (DQE) is widely used as a comprehensive metric for X-ray image evaluation in digital X-ray units. The incident photon fluence per air kerma (SNR²in) is necessary for calculating the DQE. The International Electrotechnical Commission (IEC) reports the SNR²in under conditions of standard radiation quality, but this SNR²in might not be accurate as calculated from the X-ray spectra emitted by an actual X-ray tube. In this study, we evaluated the error range of the SNR²in presented by the IEC 62220-1 report. We measured the X-ray spectra emitted by an X-ray tube under conditions of standard radiation quality RQA5. The spectral photon fluence at each energy bin was multiplied by the photon energy and the mass energy absorption coefficient of air; then the air kerma spectrum was derived. The air kerma spectrum was integrated over the whole photon energy range to yield the total air kerma. The total photon number was then divided by the total air kerma. This value is the SNR²in. These calculations were performed for various measurement parameters and X-ray units. The percent difference between the calculated value and the standard value of RQA5 was up to 2.9%. The error range was not negligibly small. Therefore, it is better to use the new SNR²in value of 30,694 (1/(mm² μGy)) than the current value of 30,174 (1/(mm² μGy)).
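The calculation chain described above (spectrum → air kerma spectrum → total air kerma → photons per air kerma) can be sketched as follows; the three-bin spectrum and the rounded mass energy absorption coefficients are placeholders standing in for a measured RQA5 spectrum and standard tables.

```python
import numpy as np

KEV_TO_JOULE = 1.602e-16

def snr2_in(energy_keV, fluence_per_mm2, mu_en_rho_cm2_g):
    """Photon fluence per air kerma (SNR^2_in) in 1/(mm^2 uGy).

    energy_keV       : photon energy of each spectral bin (keV)
    fluence_per_mm2  : photon fluence in each bin (photons/mm^2)
    mu_en_rho_cm2_g  : mass energy absorption coefficient of air (cm^2/g)
    """
    fluence_per_cm2 = fluence_per_mm2 * 100.0                       # mm^-2 -> cm^-2
    kerma_Gy_per_bin = (fluence_per_cm2 * energy_keV * KEV_TO_JOULE
                        * mu_en_rho_cm2_g * 1000.0)                 # J/g -> Gy
    total_kerma_uGy = kerma_Gy_per_bin.sum() * 1e6
    return fluence_per_mm2.sum() / total_kerma_uGy

# Placeholder three-bin "spectrum" purely to show the bookkeeping.
e = np.array([40.0, 60.0, 80.0])           # keV
phi = np.array([2e4, 5e4, 1e4])            # photons/mm^2
mu = np.array([0.068, 0.030, 0.024])       # cm^2/g, approximate values for air
print(f"SNR^2_in ~ {snr2_in(e, phi, mu):.0f} photons/(mm^2 uGy)")
```

With realistic diagnostic spectra this bookkeeping yields values on the order of 30,000 photons/(mm² μGy), the magnitude discussed in the abstract.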
Method of estimating natural recharge to the Edwards Aquifer in the San Antonio area, Texas
Puente, Celso
1978-01-01
The principal errors in the estimates of annual recharge are related to errors in estimating runoff in ungaged areas, which represent about 30 percent of the infiltration area. The estimated long-term average annual recharge in each basin, however, is probably representative of the actual recharge because the averaging procedure tends to cancel out the major errors.
ERIC Educational Resources Information Center
Micceri, Theodore; Parasher, Pradnya; Waugh, Gordon W.; Herreid, Charlene
2009-01-01
An extensive review of the research literature and a study comparing over 36,000 survey responses with archival true scores indicated that one should expect a minimum of at least three percent random error for the least ambiguous of self-report measures. The Gulliver Effect occurs when a small proportion of error in a sizable subpopulation exerts…
Evaluation of Satellite and Model Precipitation Products Over Turkey
NASA Astrophysics Data System (ADS)
Yilmaz, M. T.; Amjad, M.
2017-12-01
Satellite-based remote sensing, gauge stations, and models are the three major platforms for acquiring precipitation datasets. Among them, satellites and models have the advantage of retrieving spatially and temporally continuous and consistent datasets, while the uncertainty estimates of these retrievals are often required for many hydrological studies to understand the source and the magnitude of the uncertainty in hydrological response parameters. In this study, satellite and model precipitation data products are validated over various temporal scales (daily, 3-daily, 7-daily, 10-daily and monthly) using in-situ measured precipitation observations from a network of 733 gauges from all over Turkey. Tropical Rainfall Measuring Mission (TRMM) Multi-satellite Precipitation Analysis (TMPA) 3B42 version 7 and European Centre for Medium-Range Weather Forecasts (ECMWF) model estimates (daily, 3-daily, 7-daily and 10-daily accumulated forecast) are used in this study. Retrievals are evaluated for their mean and standard deviation, and their accuracies are evaluated via bias, root mean square error, error standard deviation and correlation coefficient statistics. Intensity versus frequency analysis and contingency-table statistics such as percent correct, probability of detection, false alarm ratio and critical success index are determined using daily time-series. Both ECMWF forecasts and TRMM observations, on average, overestimate the precipitation compared to gauge estimates; wet biases are 10.26 mm/month and 8.65 mm/month for ECMWF and TRMM, respectively. RMSE values of ECMWF forecasts and TRMM estimates are 39.69 mm/month and 41.55 mm/month, respectively. Monthly correlations between Gauges-ECMWF, Gauges-TRMM and ECMWF-TRMM are 0.76, 0.73 and 0.81, respectively. The model and satellite error statistics are further compared against the gauge error statistics based on inverse distance weighting (IDW) analysis. Both the model and satellite data have smaller IDW errors (14.72 mm/month and 10.75 mm/month, respectively) compared to the gauge IDW error (21.58 mm/month). These results show that, on average, ECMWF forecast data have higher skill than TRMM observations. Overall, both ECMWF forecast data and TRMM observations show good potential for catchment-scale hydrological analysis.
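A compact sketch of the kinds of validation statistics reported above (bias, RMSE, correlation, and the daily contingency-table scores); the rain/no-rain threshold and the input arrays are illustrative, not the study's data.

```python
import numpy as np

def validation_stats(obs, est):
    """Continuous-scale comparison of estimated vs. gauge precipitation."""
    bias = np.mean(est - obs)
    rmse = np.sqrt(np.mean((est - obs) ** 2))
    corr = np.corrcoef(obs, est)[0, 1]
    return bias, rmse, corr

def contingency_scores(obs, est, threshold=0.1):
    """Percent correct, probability of detection, false alarm ratio, CSI."""
    o, e = obs >= threshold, est >= threshold
    hits = np.sum(o & e)
    misses = np.sum(o & ~e)
    false_alarms = np.sum(~o & e)
    correct_negatives = np.sum(~o & ~e)
    n = len(obs)
    pc = (hits + correct_negatives) / n
    pod = hits / (hits + misses) if hits + misses else np.nan
    far = false_alarms / (hits + false_alarms) if hits + false_alarms else np.nan
    csi = hits / (hits + misses + false_alarms) if hits + misses + false_alarms else np.nan
    return pc, pod, far, csi
```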
NASA Technical Reports Server (NTRS)
Wilmington, R. P.; Klute, Glenn K. (Editor); Carroll, Amy E. (Editor); Stuart, Mark A. (Editor); Poliner, Jeff (Editor); Rajulu, Sudhakar (Editor); Stanush, Julie (Editor)
1992-01-01
Kinematics, the study of motion exclusive of the influences of mass and force, is one of the primary methods used for the analysis of human biomechanical systems as well as other types of mechanical systems. The Anthropometry and Biomechanics Laboratory (ABL) in the Crew Interface Analysis section of the Man-Systems Division performs both human body kinematics as well as mechanical system kinematics using the Ariel Performance Analysis System (APAS). The APAS supports both analysis of analog signals (e.g. force plate data collection) as well as digitization and analysis of video data. The current evaluations address several methodology issues concerning the accuracy of the kinematic data collection and analysis used in the ABL. This document describes a series of evaluations performed to gain quantitative data pertaining to position and constant angular velocity movements under several operating conditions. Two-dimensional as well as three-dimensional data collection and analyses were completed in a controlled laboratory environment using typical hardware setups. In addition, an evaluation was performed to evaluate the accuracy impact due to a single axis camera offset. Segment length and positional data exhibited errors within 3 percent when using three-dimensional analysis and yielded errors within 8 percent through two-dimensional analysis (Direct Linear Software). Peak angular velocities displayed errors within 6 percent through three-dimensional analyses and exhibited errors of 12 percent when using two-dimensional analysis (Direct Linear Software). The specific results from this series of evaluations and their impacts on the methodology issues of kinematic data collection and analyses are presented in detail. The accuracy levels observed in these evaluations are also presented.
Komiskey, Matthew J.; Stuntebeck, Todd D.; Cox, Amanda L.; Frame, Dennis R.
2013-01-01
The effects of longitudinal slope on the estimation of discharge in a 0.762-meter (m) (depth at flume entrance) H flume were tested under controlled conditions with slopes from −8 to +8 percent and discharges from 1.2 to 323 liters per second. Compared to the stage-discharge rating for a longitudinal flume slope of zero, computed discharges were negatively biased (maximum −31 percent) when the flume was sloped downward from the front (entrance) to the back (exit), and positively biased (maximum 44 percent) when the flume was sloped upward. Biases increased with greater flume slopes and with lower discharges. A linear empirical relation was developed to compute a corrected reference stage for a 0.762-m H flume using measured stage and flume slope. The reference stage was then used to determine a corrected discharge from the stage-discharge rating. A dimensionally homogeneous correction equation also was developed, which could theoretically be used for all standard H-flume sizes. Use of the corrected discharge computation method for a sloped H flume was determined to have errors ranging from −2.2 to 4.6 percent compared to the H-flume measured discharge at a level position. These results emphasize the importance of the measurement of and the correction for flume slope during an edge-of-field study if the most accurate discharge estimates are desired.
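The correction workflow described above can be sketched as follows. The stage-correction coefficients and the rating points are placeholders (hypothetical), not the report's empirical relation; the structure simply shows measured stage plus flume slope → corrected reference stage → discharge from the level-flume rating.

```python
import numpy as np

def corrected_stage(measured_stage_m, slope_percent, c1=0.0, c2=0.01):
    """Hypothetical linear correction: reference stage from measured stage and flume slope."""
    return measured_stage_m + (c1 + c2 * measured_stage_m) * slope_percent

def discharge(stage_m, rating_stage_m, rating_q_lps):
    """Look up discharge from the level-flume stage-discharge rating table."""
    return np.interp(stage_m, rating_stage_m, rating_q_lps)

# Placeholder rating points for a level 0.762-m H flume (illustrative only);
# endpoints echo the 1.2-323 L/s discharge range tested in the study.
rating_h = np.array([0.05, 0.15, 0.30, 0.50, 0.762])
rating_q = np.array([1.2, 15.0, 60.0, 170.0, 323.0])

q_corrected = discharge(corrected_stage(0.30, slope_percent=-3.0), rating_h, rating_q)
```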
Short term load forecasting using a self-supervised adaptive neural network
DOE Office of Scientific and Technical Information (OSTI.GOV)
Yoo, H.; Pimmel, R.L.
The authors developed a self-supervised adaptive neural network to perform short term load forecasts (STLF) for a large power system covering a wide service area with several heavy load centers. They used the self-supervised network to extract correlational features from temperature and load data. In using data from the calendar year 1993 as a test case, they found a 0.90 percent error for hour-ahead forecasting and 1.92 percent error for day-ahead forecasting. These levels of error compare favorably with those obtained by other techniques. The algorithm ran in a couple of minutes on a PC containing an Intel Pentium 120 MHz CPU. Since the algorithm included searching the historical database, training the network, and actually performing the forecasts, this approach provides a real-time, portable, and adaptable STLF.
(U) An Analytic Study of Piezoelectric Ejecta Mass Measurements
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tregillis, Ian Lee
2017-02-16
We consider the piezoelectric measurement of the areal mass of an ejecta cloud, for the specific case where ejecta are created by a single shock at the free surface and fly ballistically through vacuum to the sensor. To do so, we define time- and velocity-dependent ejecta “areal mass functions” at the source and sensor in terms of typically unknown distribution functions for the ejecta particles. Next, we derive an equation governing the relationship between the areal mass function at the source (which resides in the rest frame of the free surface) and at the sensor (which resides in the laboratory frame). We also derive expressions for the analytic (“true”) accumulated ejecta mass at the sensor and the measured (“inferred”) value obtained via the standard method for analyzing piezoelectric voltage traces. This approach enables us to derive an exact expression for the error imposed upon a piezoelectric ejecta mass measurement (in a perfect system) by the assumption of instantaneous creation. We verify that when the ejecta are created instantaneously (i.e., when the time dependence is a delta function), the piezoelectric inference method exactly reproduces the correct result. When creation is not instantaneous, the standard piezo analysis will always overestimate the true mass. However, the error is generally quite small (less than several percent) for most reasonable velocity and time dependences. In some cases, errors exceeding 10-15% may require velocity distributions or ejecta production timescales inconsistent with experimental observations. These results are demonstrated rigorously with numerous analytic test problems.
Development of a traveltime prediction equation for streams in Arkansas
Funkhouser, Jaysson E.; Barks, C. Shane
2004-01-01
During 1971 and 1981 and 2001 and 2003, traveltime measurements were made at 33 sample sites on 18 streams throughout northern and western Arkansas using fluorescent dye. Most measurements were made during steady-state base-flow conditions with the exception of three measurements made during near steady-state medium-flow conditions (for the study described in this report, medium-flow is approximately 100-150 percent of the mean monthly streamflow during the month the dye trace was conducted). These traveltime data were compared to the U.S. Geological Survey's national traveltime prediction equation and used to develop a specific traveltime prediction equation for Arkansas streams. In general, the national traveltime prediction equation yielded results that over-predicted the velocity of the streams for 29 of the 33 sites measured. The standard error for the national traveltime prediction equation was 105 percent. The coefficient of determination was 0.78. The Arkansas prediction equation developed from a regression analysis of dye-tracing results was a significant improvement over the national prediction equation. This regression analysis yielded a standard error of 46 percent and a coefficient of determination of 0.74. The predicted velocities using this equation compared better to measured velocities. Using the variables in a regression analysis, the Arkansas prediction equation derived for the peak velocity in feet per second was: (Actual Equation Shown in report) In addition to knowing when the peak concentration will arrive at a site, it is of great interest to know when the leading edge of a contaminant plume will arrive. The traveltime of the leading edge of a contaminant plume indicates when a potential problem might first develop and also defines the overall shape of the concentration response function. Previous USGS reports have shown no significant relation between any of the variables and the time from injection to the arrival of the leading edge of the dye plume. For this report, the analysis of the dye-tracing data yielded a significant correlation between traveltime of the leading edge and traveltime of the peak concentration with an R² value of 0.99. These data indicate that the traveltime of the leading edge can be estimated from: (Actual Equation Shown in Report)
Decay properties of 265Sg (Z=106) and 266Sg (Z=106)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tuerler, A.; Dressler, R.; Eichler, B.
1998-04-01
The presently known most neutron-rich isotopes of element 106 (seaborgium, Sg), 265Sg and 266Sg, were produced in the fusion reaction 22Ne + 248Cm at beam energies of 121 and 123 MeV. Using the On-Line Gas chemistry Apparatus OLGA, a continuous separation of Sg was achieved within a few seconds. Final products were assayed by alpha-particle and spontaneous fission (SF) spectrometry. 265Sg and 266Sg were identified by observing time-correlated alpha-alpha-(alpha) and alpha-SF decay chains. A total of 13 correlated decay chains of 265Sg (with an estimated number of 2.8 random correlations) and 3 decay chains of 266Sg (0.6 random correlations) were identified. Deduced decay properties were T1/2 = 7.4 (+3.3/−2.7) s (68 percent c.i.) and E(alpha) = 8.69 MeV (8 percent), 8.76 MeV (23 percent), 8.84 MeV (46 percent), and 8.94 MeV (23 percent) for 265Sg; and T1/2 = 21 (+20/−12) s (68 percent c.i.) and E(alpha) = 8.52 MeV (33 percent) and 8.77 MeV (66 percent) for 266Sg. The resolution of the detectors was between 50 and 100 keV (full width at half maximum). Upper limits for SF of ≤35 percent and ≤82 percent were established for 265Sg and 266Sg, respectively. The upper limits for SF are given with a 16 percent error probability. Using the lower error limits of the half-lives of 265Sg and 266Sg, the resulting lower limits for the partial SF half-lives are T1/2(SF, 265Sg) ≥ 13 s and T1/2(SF, 266Sg) ≥ 11 s. Correspondingly, the partial alpha-decay half-lives are T1/2(alpha, 265Sg) = 4.7–16.5 s (68 percent c.i.) and T1/2(alpha, 266Sg) = 9–228 s (68 percent c.i.), using the upper and lower error limits of the half-lives of 265Sg and 266Sg. The lower limit on the partial SF half-life of 266Sg is in good agreement with theoretical predictions. Production cross sections of about 240 pb and 25 pb for the alpha-decay branch in 265Sg and 266Sg were estimated, respectively. © 1998 The American Physical Society
Health plan auditing: 100-percent-of-claims vs. random-sample audits.
Sillup, George P; Klimberg, Ronald K
2011-01-01
The objective of this study was to examine the relative efficacy of two different methodologies for auditing self-funded medical claim expenses: 100-percent-of-claims auditing versus random-sampling auditing. Multiple data sets of claim errors or 'exceptions' from two Fortune-100 corporations were analysed and compared to 100 simulated audits of 300- and 400-claim random samples. Random-sample simulations failed to identify a significant number and amount of the errors that ranged from $200,000 to $750,000. These results suggest that health plan expenses of corporations could be significantly reduced if they audited 100% of claims and embraced a zero-defect approach.
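A small simulation in the spirit of the comparison above: draw 300- or 400-claim random samples from a claims population containing a known set of dollar errors and see how well the sample-based extrapolation recovers the total found by a 100%-of-claims audit. The claim count, error rate, and dollar amounts are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)

n_claims = 50_000
error_rate = 0.01                        # 1% of claims contain an exception
amounts = np.zeros(n_claims)
errors = rng.choice(n_claims, size=int(n_claims * error_rate), replace=False)
amounts[errors] = rng.lognormal(mean=5.5, sigma=1.0, size=len(errors))  # error dollars

true_total = amounts.sum()               # what a 100%-of-claims audit finds

def sampled_estimate(sample_size, n_sims=100):
    """Extrapolate total error dollars from repeated random-sample audits."""
    est = []
    for _ in range(n_sims):
        idx = rng.choice(n_claims, size=sample_size, replace=False)
        est.append(amounts[idx].mean() * n_claims)
    return np.array(est)

for n in (300, 400):
    est = sampled_estimate(n)
    print(f"n={n}: mean estimate {est.mean():,.0f} vs true {true_total:,.0f}, "
          f"sims finding zero errors: {np.mean(est == 0):.0%}")
```

With a low underlying error rate, individual random samples contain only a handful of exceptions, so the extrapolated totals are highly variable and some samples find no errors at all, which is the behavior the study attributes to sample-based audits.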
Analytical skin friction and heat transfer formula for compressible internal flows
NASA Technical Reports Server (NTRS)
Dechant, Lawrence J.; Tattar, Marc J.
1994-01-01
An analytic, closed-form friction formula for turbulent, internal, compressible, fully developed flow was derived by extending the incompressible law-of-the-wall relation to compressible cases. The model is capable of analyzing heat transfer as a function of constant surface temperatures and surface roughness as well as analyzing adiabatic conditions. The formula reduces to Prandtl's law of friction for adiabatic, smooth, axisymmetric flow. In addition, the formula reduces to the Colebrook equation for incompressible, adiabatic, axisymmetric flow with various roughnesses. Comparisons with available experiments show that the model averages roughly 12.5 percent error for adiabatic flow and 18.5 percent error for flow involving heat transfer.
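Because the compressible formula is stated to reduce to the Colebrook equation in the incompressible, adiabatic limit, a short fixed-point solver for the standard Colebrook relation illustrates that limiting case; this is the classical Colebrook form, not the authors' compressible extension.

```python
import math

def colebrook_friction_factor(reynolds, rel_roughness, tol=1e-10, max_iter=100):
    """Darcy friction factor from the Colebrook equation,
    1/sqrt(f) = -2 log10( (eps/D)/3.7 + 2.51/(Re sqrt(f)) ),
    solved by fixed-point iteration on x = 1/sqrt(f)."""
    x = 7.0  # initial guess, roughly f ~ 0.02
    for _ in range(max_iter):
        x_new = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 * x / reynolds)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / (x * x)

# Smooth pipe at Re = 1e5: f is about 0.018, matching Prandtl's law of friction.
print(colebrook_friction_factor(1e5, 0.0))
```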
Method of estimating flood-frequency parameters for streams in Idaho
Kjelstrom, L.C.; Moffatt, R.L.
1981-01-01
Skew coefficients for the log-Pearson type III distribution are generalized on the basis of some similarity of floods in the Snake River basin and other parts of Idaho. Generalized skew coefficients aid in shaping flood-frequency curves because skew coefficients computed from gaging stations having relatively short periods of peak flow records can be unreliable. Generalized skew coefficients can be obtained for a gaging station from one of three maps in this report. The map to be used depends on whether (1) snowmelt floods are dominant (generally when more than 20 percent of the drainage area is above 6,000 feet altitude), (2) rainstorm floods are dominant (generally when the mean altitude is less than 3,000 feet), or (3) either snowmelt or rainstorm floods can be the annual maximum discharge. For the latter case, frequency curves constructed using separate arrays of each type of runoff can be combined into one curve, which, for some stations, is significantly different from the frequency curve constructed using only annual maximum discharges. For 269 gaging stations, flood-frequency curves that include the generalized skew coefficients in the computation of the log-Pearson type III equation tend to fit the data better than previous analyses. Frequency curves for ungaged sites can be derived by estimating three statistics of the log-Pearson type III distribution. The mean and standard deviation of logarithms of annual maximum discharges are estimated by regression equations that use basin characteristics as independent variables. Skew coefficient estimates are the generalized skews. The log-Pearson type III equation is then applied with the three estimated statistics to compute the discharge at selected exceedance probabilities. Standard errors at the 2-percent exceedance probability range from 41 to 90 percent. (USGS)
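The frequency-curve computation described above (mean, standard deviation, and generalized skew of the logarithms, then a log-Pearson Type III quantile) can be sketched as follows. The Wilson-Hilferty approximation is used here for the frequency factor, and the example statistics are invented, not taken from the report.

```python
from statistics import NormalDist

def lp3_frequency_factor(skew, exceedance_prob):
    """Pearson Type III frequency factor via the Wilson-Hilferty approximation."""
    z = NormalDist().inv_cdf(1.0 - exceedance_prob)
    if abs(skew) < 1e-6:
        return z
    k = skew / 6.0
    return (2.0 / skew) * ((1.0 + k * z - k * k) ** 3 - 1.0)

def lp3_discharge(mean_log10_q, std_log10_q, skew, exceedance_prob):
    """Peak discharge at a given annual exceedance probability (log-Pearson Type III)."""
    kf = lp3_frequency_factor(skew, exceedance_prob)
    return 10.0 ** (mean_log10_q + kf * std_log10_q)

# Invented example: statistics of log10 annual peaks with a generalized skew of -0.2,
# evaluated at the 2-percent exceedance probability (50-year flood).
q_2pct = lp3_discharge(mean_log10_q=3.2, std_log10_q=0.25, skew=-0.2, exceedance_prob=0.02)
```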
NASA Technical Reports Server (NTRS)
Steffen, K.; Schweiger, A. J.
1990-01-01
The validation of sea ice products derived from the Special Sensor Microwave Imager (SSM/I) on board a DMSP platform is examined using data from the Landsat MSS and NOAA-AVHRR sensors. Image processing techniques for retrieving ice concentrations from each type of imagery are developed and results are intercompared to determine the ice parameter retrieval accuracy of the SSM/I NASA-Team algorithm. For case studies in the Beaufort Sea and East Greenland Sea, average retrieval errors of the SSM/I algorithm are between 1.7 percent for spring conditions and 4.3 percent during freeze up in comparison with Landsat derived ice concentrations. For a case study in the East Greenland Sea, SSM/I derived ice concentration in comparison with AVHRR imagery display a mean error of 9.6 percent.
FLIR Common Module Design Manual. Revision 1
1978-03-01
degrade off-axis. The afocal assembly is very critical to system performance and normally constitutes a significant portion of the system...not significantly degrade the performance at 10 lp/mm because chromatic errors are about 1/2 of the diffraction error. The chromatic errors are...degradation, though only 3 percent, is unavoidable. It is caused by field curvature in the Galilean afocal assembly. This field curvature is
Algebra Students' Difficulty with Fractions: An Error Analysis
ERIC Educational Resources Information Center
Brown, George; Quinn, Robert J.
2006-01-01
An analysis of the 1990 National Assessment of Educational Progress (NAEP) found that only 46 percent of all high school seniors demonstrated success with a grasp of decimals, percentages, fractions and simple algebra. This article investigates error patterns that emerge as students attempt to answer questions involving the ability to apply…
Potential sources of analytical bias and error in selected trace element data-quality analyses
Paul, Angela P.; Garbarino, John R.; Olsen, Lisa D.; Rosen, Michael R.; Mebane, Christopher A.; Struzeski, Tedmund M.
2016-09-28
Potential sources of analytical bias and error associated with laboratory analyses for selected trace elements where concentrations were greater in filtered samples than in paired unfiltered samples were evaluated by U.S. Geological Survey (USGS) Water Quality Specialists in collaboration with the USGS National Water Quality Laboratory (NWQL) and the Branch of Quality Systems (BQS).Causes for trace-element concentrations in filtered samples to exceed those in associated unfiltered samples have been attributed to variability in analytical measurements, analytical bias, sample contamination either in the field or laboratory, and (or) sample-matrix chemistry. These issues have not only been attributed to data generated by the USGS NWQL but have been observed in data generated by other laboratories. This study continues the evaluation of potential analytical bias and error resulting from matrix chemistry and instrument variability by evaluating the performance of seven selected trace elements in paired filtered and unfiltered surface-water and groundwater samples collected from 23 sampling sites of varying chemistries from six States, matrix spike recoveries, and standard reference materials.Filtered and unfiltered samples have been routinely analyzed on separate inductively coupled plasma-mass spectrometry instruments. Unfiltered samples are treated with hydrochloric acid (HCl) during an in-bottle digestion procedure; filtered samples are not routinely treated with HCl as part of the laboratory analytical procedure. To evaluate the influence of HCl on different sample matrices, an aliquot of the filtered samples was treated with HCl. The addition of HCl did little to differentiate the analytical results between filtered samples treated with HCl from those samples left untreated; however, there was a small, but noticeable, decrease in the number of instances where a particular trace-element concentration was greater in a filtered sample than in the associated unfiltered sample for all trace elements except selenium. Accounting for the small dilution effect (2 percent) from the addition of HCl, as required for the in-bottle digestion procedure for unfiltered samples, may be one step toward decreasing the number of instances where trace-element concentrations are greater in filtered samples than in paired unfiltered samples.The laboratory analyses of arsenic, cadmium, lead, and zinc did not appear to be influenced by instrument biases. These trace elements showed similar results on both instruments used to analyze filtered and unfiltered samples. The results for aluminum and molybdenum tended to be higher on the instrument designated to analyze unfiltered samples; the results for selenium tended to be lower. The matrices used to prepare calibration standards were different for the two instruments. The instrument designated for the analysis of unfiltered samples was calibrated using standards prepared in a nitric:hydrochloric acid (HNO3:HCl) matrix. The instrument designated for the analysis of filtered samples was calibrated using standards prepared in a matrix acidified only with HNO3. Matrix chemistry may have influenced the responses of aluminum, molybdenum, and selenium on the two instruments. 
The best analytical practice is to calibrate instruments using calibration standards prepared in matrices that reasonably match those of the samples being analyzed.Filtered and unfiltered samples were spiked over a range of trace-element concentrations from less than 1 to 58 times ambient concentrations. The greater the magnitude of the trace-element spike concentration relative to the ambient concentration, the greater the likelihood spike recoveries will be within data control guidelines (80–120 percent). Greater variability in spike recoveries occurred when trace elements were spiked at concentrations less than 10 times the ambient concentration. Spike recoveries that were considerably lower than 90 percent often were associated with spiked concentrations substantially lower than what was present in the ambient sample. Because the main purpose of spiking natural water samples with known quantities of a particular analyte is to assess possible matrix effects on analytical results, the results of this study stress the importance of spiking samples at concentrations that are reasonably close to what is expected but sufficiently high to exceed analytical variability. Generally, differences in spike recovery results between paired filtered and unfiltered samples were minimal when samples were analyzed on the same instrument.Analytical results for trace-element concentrations in ambient filtered and unfiltered samples greater than 10 and 40 μg/L, respectively, were within the data-quality objective for precision of ±25 percent. Ambient trace-element concentrations in filtered samples greater than the long-term method detection limits but less than 10 μg/L failed to meet the data-quality objective for precision for at least one trace element in about 54 percent of the samples. Similarly, trace-element concentrations in unfiltered samples greater than the long-term method detection limits but less than 40 μg/L failed to meet this data-quality objective for at least one trace-element analysis in about 58 percent of the samples. Although, aluminum and zinc were particularly problematic, limited re-analyses of filtered and unfiltered samples appeared to improve otherwise failed analytical precision.The evaluation of analytical bias using standard reference materials indicate a slight low bias for results for arsenic, cadmium, selenium, and zinc. Aluminum and molybdenum show signs of high bias. There was no observed bias, as determined using the standard reference materials, during the analysis of lead.
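As a simple illustration of the spike-recovery arithmetic discussed above, and of why recoveries become unstable when the spike is small relative to the ambient concentration, a hedged sketch with invented numbers:

```python
def spike_recovery_percent(spiked_result, ambient_result, spike_added):
    """Matrix spike recovery: (spiked result - ambient result) / amount added, in percent."""
    return 100.0 * (spiked_result - ambient_result) / spike_added

# Invented numbers: the same +0.5 ug/L of analytical noise is a large fraction of a
# 1 ug/L spike but a negligible fraction of a 50 ug/L spike on a 10 ug/L ambient sample.
print(spike_recovery_percent(11.5, 10.0, spike_added=1.0))    # reads 150 percent
print(spike_recovery_percent(60.5, 10.0, spike_added=50.0))   # reads 101 percent
```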
Federal Register 2010, 2011, 2012, 2013, 2014
2013-07-26
... compare information across the States; provide descriptive estimates of the contribution to payment error... of whom will complete surveys); 106 State QC supervisors (3 in the pretest, 100 percent of whom will... telephone, 81 percent of whom will complete surveys); and 265 State QC reviewers (5 in the pretest, 100...
How does the cosmic large-scale structure bias the Hubble diagram?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Fleury, Pierre; Clarkson, Chris; Maartens, Roy, E-mail: pierre.fleury@uct.ac.za, E-mail: chris.clarkson@qmul.ac.uk, E-mail: roy.maartens@gmail.com
2017-03-01
The Hubble diagram is one of the cornerstones of observational cosmology. It is usually analysed assuming that, on average, the underlying relation between magnitude and redshift matches the prediction of a Friedmann-Lemaître-Robertson-Walker model. However, the inhomogeneity of the Universe generically biases these observables, mainly due to peculiar velocities and gravitational lensing, in a way that depends on the notion of average used in theoretical calculations. In this article, we carefully derive the notion of average which corresponds to the observation of the Hubble diagram. We then calculate its bias at second order in cosmological perturbations, and estimate the consequences on the inference of cosmological parameters, for various current and future surveys. We find that this bias deeply affects direct estimations of the evolution of the dark-energy equation of state. However, errors in the standard inference of cosmological parameters remain smaller than observational uncertainties, even though they reach percent level on some parameters; they reduce to sub-percent level if an optimal distance indicator is used.
Fixed-head star tracker attitude updates on the Hubble Space Telescope
NASA Technical Reports Server (NTRS)
Nadelman, Matthew S.; Karl, Jeffrey B.; Hallock, Lou
1994-01-01
The Hubble Space Telescope (HST) was launched in April 1990 to begin observing celestial space to the edge of the universe. National Aeronautics and Space Administration (NASA) standard fixed-head star trackers (FHST's) are used operationally onboard the HST to regularly adjust ('update') the spacecraft attitude before the acquisition of guide stars for science observations. During the first 3 months of the mission, the FHST's updated the spacecraft attitude successfully only 85 percent of the time. During the other periods, the trackers were unable to find the selected stars -- either they failed to find any star, or worse, they selected incorrect stars and produced erroneous attitude updates. In July 1990, the HST project office at Goddard Space Flight Center (GSFC) requested that Computer Sciences Corporation (CSC) form an investigative 'tiger' team to examine these FHST update failures. This paper discusses the work of the FHST tiger team, describes the investigations that led the team to identify the sources of the errors, and defines the solutions that were subsequently developed, which ultimately increased the success rate of FHST updates to approximately 98 percent.
A Streamflow Statistics (StreamStats) Web Application for Ohio
Koltun, G.F.; Kula, Stephanie P.; Puskas, Barry M.
2006-01-01
A StreamStats Web application was developed for Ohio that implements equations for estimating a variety of streamflow statistics including the 2-, 5-, 10-, 25-, 50-, 100-, and 500-year peak streamflows, mean annual streamflow, mean monthly streamflows, harmonic mean streamflow, and 25th-, 50th-, and 75th-percentile streamflows. StreamStats is a Web-based geographic information system application designed to facilitate the estimation of streamflow statistics at ungaged locations on streams. StreamStats can also serve precomputed streamflow statistics determined from streamflow-gaging station data. The basic structure, use, and limitations of StreamStats are described in this report. To facilitate the level of automation required for Ohio's StreamStats application, the technique used by Koltun (2003) for computing main-channel slope was replaced with a new computationally robust technique. The new channel-slope characteristic, referred to as SL10-85, differed from the National Hydrography Data-based channel slope values (SL) reported by Koltun (2003) by an average of -28.3 percent, with the median change being -13.2 percent. In spite of the differences, the two slope measures are strongly correlated. The change in channel slope values resulting from the change in computational method necessitated revision of the full-model equations for flood-peak discharges originally presented by Koltun (2003). Average standard errors of prediction for the revised full-model equations presented in this report increased by a small amount over those reported by Koltun (2003), with increases ranging from 0.7 to 0.9 percent. Mean percentage changes in the revised regression and weighted flood-frequency estimates relative to regression and weighted estimates reported by Koltun (2003) were small, ranging from -0.72 to -0.25 percent and -0.22 to 0.07 percent, respectively.
Carey, Daniel G; Raymond, Robert L
2008-07-01
The primary objective of this study was to assess the validity of body mass index (BMI) in predicting percent body fat and changes in percent body fat with weight loss in bariatric surgery patients. Twenty-two bariatric patients (17 female, five male) began the study designed to include 12 months of testing, including data collection within 1 week presurgery and 1 month, 3 months, 6 months, and 1 year postsurgery. Five female subjects were lost to the study between 6 months and 12 months postsurgery, resulting in 17 subjects (12 female, five male) completing the 12 months of testing. Variables measured in the study included height, weight, percent fat (% fat) by hydrostatic weighing, lean mass, fat mass, and basal metabolic rate. Regression analyses predicting % fat from BMI yielded the following results: presurgery r = 0.173, p = 0.479, standard error of estimate (SEE) = 3.86; 1 month r = 0.468, p = 0.043, SEE = 4.70; 3 months r = 0.553, p = 0.014, SEE = 6.2; 6 months r = 0.611, p = 0.005, SEE = 5.88; 12 months r = 0.596, p = 0.007, SEE = 7.13. Regression analyses predicting change in % fat from change in BMI produced the following results: presurgery to 1 month r = -0.134, p = 0.583, SEE = 2.44%; 1-3 months r = 0.265, p = 0.272, SEE = 2.36%; 3-6 months r = 0.206, p = 0.398, SEE = 3.75%; 6-12 months r = 0.784, p = 0.000, SEE = 3.20. Although some analyses resulted in significant correlation coefficients (p < 0.05), the relatively large SEE values would preclude the use of BMI in predicting % fat or change in % fat with weight loss in bariatric surgery patients.
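A sketch of the kind of regression and standard error of estimate (SEE) reported above; the arrays are placeholders standing in for BMI and hydrostatically measured percent fat, not the study data.

```python
import numpy as np
from scipy import stats

def regress_with_see(x, y):
    """Simple linear regression of y on x, returning r, p-value, and the
    standard error of estimate (root mean square residual with n-2 df)."""
    res = stats.linregress(x, y)
    resid = y - (res.intercept + res.slope * x)
    see = np.sqrt(np.sum(resid ** 2) / (len(x) - 2))
    return res.rvalue, res.pvalue, see

# Placeholder data for illustration only.
bmi = np.array([48.0, 45.2, 41.0, 38.5, 35.1, 33.0, 30.2, 28.8])
pct_fat = np.array([51.0, 49.5, 47.8, 44.2, 43.0, 40.1, 37.5, 36.9])
r, p, see = regress_with_see(bmi, pct_fat)
```

As the abstract notes, a correlation can be statistically significant while the SEE remains too large for individual prediction, which is the basis of the authors' caution about using BMI to predict percent fat in this population.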
Guay, Joel R.
2002-01-01
To better understand the rainfall-runoff characteristics of the eastern part of the San Jacinto River Basin and to estimate the effects of increased urbanization on streamflow, channel infiltration, and land-surface infiltration, a long-term (1950–98) time series of monthly flows in and out of the channels and land surfaces were simulated using the Hydrologic Simulation Program-FORTRAN (HSPF) rainfall-runoff model. Channel and land-surface infiltration includes rainfall or runoff that infiltrates past the zone of evapotranspiration and may become ground-water recharge. The study area encompasses about 256 square miles of the San Jacinto River drainage basin in Riverside County, California. Daily streamflow (for periods with available data between 1950 and 1998), and daily rainfall and evaporation (1950–98) data; monthly reservoir storage data (1961–98); and estimated mean annual reservoir inflow data (for 1974 conditions) were used to calibrate the rainfall-runoff model. Measured and simulated mean annual streamflows for the San Jacinto River near San Jacinto streamflow-gaging station (North-South Fork subbasin) for 1950–91 and 1997–98 were 14,000 and 14,200 acre-feet, respectively, a difference of 1.4 percent. The standard error of the mean for measured and simulated annual streamflow in the North-South Fork subbasin was 3,520 and 3,160 acre-feet, respectively. Measured and simulated mean annual streamflows for the Bautista Creek streamflow-gaging station (Bautista Creek subbasin) for 1950–98 were 980 acre-feet and 991 acre-feet, respectively, a difference of 1.1 percent. The standard error of the mean for measured and simulated annual streamflow in the Bautista Creek subbasin was 299 and 217 acre-feet, respectively. Measured and simulated annual streamflows for the San Jacinto River above State Street near San Jacinto streamflow-gaging station (Poppet subbasin) for 1998 were 23,400 and 23,500 acre-feet, respectively, a difference of 0.4 percent. The simulated mean annual streamflow for the State Street gaging station at the outlet of the study basin and the simulated mean annual basin infiltration (combined infiltration from all the channels and land surfaces) were 8,720 and 41,600 acre-feet, respectively, for water years 1950–98. Simulated annual streamflow at the State Street gaging station ranged from 16.8 acre-feet in water year 1961 to 70,400 acre-feet in water year 1993, and simulated basin infiltration ranged from 2,770 acre-feet in water year 1961 to 149,000 acre-feet in water year 1983. The effects of increased urbanization on the hydrology of the study basin were evaluated by increasing the size of the effective impervious and non-effective impervious urban areas simulated in the calibrated rainfall-runoff model by 50 and 100 percent, respectively. The rainfall-runoff model simulated a long-term time series of monthly flows in and out of the channels and land surfaces using daily rainfall and potential evaporation data for water years 1950–98. Increasing the effective impervious and non-effective impervious urban areas by 100 percent resulted in a 5-percent increase in simulated mean annual streamflow at the State Street gaging station, and a 2.2-percent increase in simulated basin infiltration.
Results of a frequency analysis of the simulated annual streamflow at the State Street gaging station showed that when effective impervious and non-effective impervious areas were increased 100 percent, simulated annual streamflow increased about 100 percent for low-flow conditions and was unchanged for high-flow conditions. The simulated increase in streamflow at the State Street gaging station potentially could infiltrate along the stream channel further downstream, outside of the model area.
Leyden, James J
2011-10-01
A rosacea treatment system (cleanser, metronidazole 0.75% gel, hydrating complexion corrector, and sunscreen SPF30) has been developed to treat rosacea. Thirty women with mild or moderate erythema of rosacea on their facial cheeks were randomly assigned to use one of the following for 28 days: the rosacea treatment system (RTS); RTS minus metronidazole (RTS-M); or metronidazole 0.75% gel plus standard skin care (standard cleanser and standard moisturizer/sunscreen) (M+SSC). At day 28, global improvement was evident in 90 percent of patients using RTS, 60 percent using RTS-M, and 67 percent using M+SSC. Erythema was significantly lower with RTS from day 14 onward, and unchanged with M+SSC. The proportion of patients reporting their skin was easily irritated at least sometimes was 40 percent with RTS, 70 percent with RTS-M, and 89 percent with M+SSC. The rosacea treatment system may offer superior efficacy and tolerability to metronidazole plus the standard skin care used in this study.
Kunkle, Gerald A.
2016-01-07
The Sutron 8310-N-S (8310) data collection platform (DCP) manufactured by Sutron Corporation was evaluated by the U.S. Geological Survey (USGS) Hydrologic Instrumentation Facility (HIF) for conformance to the manufacturer’s specifications for recording and transmitting data. The 8310-N-S is a National Electrical Manufacturers Association (NEMA)-enclosed DCP with a built-in Geostationary Operational Environmental Satellite transmitter that operates over a temperature range of −40 to 60 degrees Celsius (°C). The evaluation procedures followed and the results obtained are described in this report for bench, temperature chamber, and outdoor deployment testing. The three units tested met the manufacturer’s stated specifications for the tested conditions, but two of the units had transmission errors either during temperature chamber or deployment testing. During outdoor deployment testing, 6.72 percent of transmissions by serial number 1206109 contained errors, resulting in missing data. Transmission errors were also observed during temperature chamber testing with serial number 1208283, at an error rate of 3.22 percent. Overall, the 8310 has good logging capabilities, but the transmission errors are a concern for users who require reliable telemetered data.
NASA Technical Reports Server (NTRS)
Loughman, R.; Flittner, D.; Herman, B.; Bhartia, P.; Hilsenrath, E.; McPeters, R.; Rault, D.
2002-01-01
The SOLSE (Shuttle Ozone Limb Sounding Experiment) and LORE (Limb Ozone Retrieval Experiment) instruments are scheduled for reflight on Space Shuttle flight STS-107 in July 2002. In addition, the SAGE III (Stratospheric Aerosol and Gas Experiment) instrument will begin to make limb scattering measurements during Spring 2002. The optimal estimation technique is used to analyze visible and ultraviolet limb scattered radiances and produce a retrieved ozone profile. The algorithm used to analyze data from the initial flight of the SOLSE/LORE instruments (on Space Shuttle flight STS-87 in November 1997) forms the basis of the current algorithms, with expansion to take advantage of the increased multispectral information provided by SOLSE/LORE-2 and SAGE III. We also present detailed sensitivity analysis for these ozone retrieval algorithms. The primary source of ozone retrieval error is tangent height misregistration (i.e., instrument pointing error), which is relevant throughout the altitude range of interest, and can produce retrieval errors on the order of 10-20 percent due to a tangent height registration error of 0.5 km at the tangent point. Other significant sources of error are sensitivity to stratospheric aerosol and sensitivity to error in the a priori ozone estimate (given assumed instrument signal-to-noise = 200). These can produce errors up to 10 percent for the ozone retrieval at altitudes less than 20 km, but produce little error above that level.
Oki, Delwyn S.; Rosa, Sarah N.; Yeung, Chiu W.
2010-01-01
This study provides an updated analysis of the magnitude and frequency of peak stream discharges in Hawai`i. Annual peak-discharge data collected by the U.S. Geological Survey during and before water year 2008 (ending September 30, 2008) at stream-gaging stations were analyzed. The existing generalized-skew value for the State of Hawai`i was retained, although three methods were used to evaluate whether an update was needed. Regional regression equations were developed for peak discharges with 2-, 5-, 10-, 25-, 50-, 100-, and 500-year recurrence intervals for unregulated streams (those for which peak discharges are not affected to a large extent by upstream reservoirs, dams, diversions, or other structures) in areas with less than 20 percent combined medium- and high-intensity development on Kaua`i, O`ahu, Moloka`i, Maui, and Hawai`i. The generalized-least-squares (GLS) regression equations relate peak stream discharge to quantified basin characteristics (for example, drainage-basin area and mean annual rainfall) that were determined using geographic information system (GIS) methods. Each of the islands of Kaua`i,O`ahu, Moloka`i, Maui, and Hawai`i was divided into two regions, generally corresponding to a wet region and a dry region. Unique peak-discharge regression equations were developed for each region. The regression equations developed for this study have standard errors of prediction ranging from 16 to 620 percent. Standard errors of prediction are greatest for regression equations developed for leeward Moloka`i and southern Hawai`i. In general, estimated 100-year peak discharges from this study are lower than those from previous studies, which may reflect the longer periods of record used in this study. Each regression equation is valid within the range of values of the explanatory variables used to develop the equation. The regression equations were developed using peak-discharge data from streams that are mainly unregulated, and they should not be used to estimate peak discharges in regulated streams. Use of a regression equation beyond its limits will produce peak-discharge estimates with unknown error and should therefore be avoided. Improved estimates of the magnitude and frequency of peak discharges in Hawai`i will require continued operation of existing stream-gaging stations and operation of additional gaging stations for areas such as Moloka`i and Hawai`i, where limited stream-gaging data are available.
Smoking, ADHD, and Problematic Video Game Use: A Structural Modeling Approach.
Lee, Hyo Jin; Tran, Denise D; Morrell, Holly E R
2018-05-01
Problematic video game use (PVGU), or addiction-like use of video games, is associated with physical and mental health problems and problems in social and occupational functioning. Possible correlates of PVGU include frequency of play, cigarette smoking, and attention deficit hyperactivity disorder (ADHD). The aim of the current study was to explore simultaneously the relationships among these variables as well as test whether two separate measures of PVGU measure the same construct, using a structural modeling approach. Secondary data analysis was conducted on 2,801 video game users (M age = 22.43 years, standard deviation [SD] age = 4.7; 93 percent male) who completed an online survey. The full model fit the data well: χ²(2) = 2.017, p > 0.05; root mean square error of approximation (RMSEA) = 0.002 (90% CI [0.000-0.038]); comparative fit index (CFI) = 1.000; standardized root mean square residual (SRMR) = 0.004; and all standardized residuals <|0.1|. All freely estimated paths were statistically significant. ADHD symptomatology, smoking behavior, and hours of video game use explained 41.8 percent of variance in PVGU. Tracking these variables may be useful for PVGU prevention and assessment. Young's Internet Addiction Scale, adapted for video game use, and the Problem Videogame Playing Scale both loaded strongly onto a PVGU factor, suggesting that they measure the same construct, that studies using either measure may be compared to each other, and that both measures may be used as a screener of PVGU.
TH-AB-201-12: Using Machine Log-Files for Treatment Planning and Delivery QA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Stanhope, C; Liang, J; Drake, D
2016-06-15
Purpose: To determine the segment reduction and dose resolution necessary for machine log-files to effectively replace current phantom-based patient-specific quality assurance, while minimizing computational cost. Methods: Elekta’s Log File Convertor R3.2 records linac delivery parameters (dose rate, gantry angle, leaf position) every 40ms. Five VMAT plans [4 H&N, 1 Pulsed Brain] comprised of 2 arcs each were delivered on the ArcCHECK phantom. Log-files were reconstructed in Pinnacle on the phantom geometry using 1/2/3/4° control point spacing and 2/3/4mm dose grid resolution. Reconstruction effectiveness was quantified by comparing 2%/2mm gamma passing rates of the original and log-file plans. Modulation complexity scores (MCS) were calculated for each beam to correlate reconstruction accuracy and beam modulation. Percent error in absolute dose for each plan-pair combination (log-file vs. ArcCHECK, original vs. ArcCHECK, log-file vs. original) was calculated for each arc and every diode greater than 10% of the maximum measured dose (per beam). Comparing standard deviations of the three plan-pair distributions, relative noise of the ArcCHECK and log-file systems was elucidated. Results: The original plans exhibit a mean passing rate of 95.1±1.3%. The eight more modulated H&N arcs [MCS=0.088±0.014] and two less modulated brain arcs [MCS=0.291±0.004] yielded log-file pass rates most similar to the original plan when using 1°/2mm [0.05%±1.3% lower] and 2°/3mm [0.35±0.64% higher] log-file reconstructions respectively. Log-file and original plans displayed percent diode dose errors 4.29±6.27% and 3.61±6.57% higher than measurement. Excluding the phantom eliminates diode miscalibration and setup errors; log-file dose errors were 0.72±3.06% higher than the original plans – significantly less noisy. Conclusion: For log-file reconstructed VMAT arcs, 1° control point spacing and 2mm dose resolution is recommended, however, less modulated arcs may allow less stringent reconstructions. Following the aforementioned reconstruction recommendations, the log-file technique is capable of detecting delivery errors with equivalent accuracy and less noise than ArcCHECK QA. I am funded by an Elekta Research Grant.
Liu, Xiaofeng Steven
2011-05-01
The use of covariates is commonly believed to reduce the unexplained error variance and the standard error for the comparison of treatment means, but the reduction in the standard error is neither guaranteed nor uniform over different sample sizes. The covariate mean differences between the treatment conditions can inflate the standard error of the covariate-adjusted mean difference and can actually produce a larger standard error for the adjusted mean difference than that for the unadjusted mean difference. When the covariate observations are conceived of as randomly varying from one study to another, the covariate mean differences can be related to a Hotelling's T². Using this Hotelling's T² statistic, one can always find a minimum sample size to achieve a high probability of reducing the standard error and confidence interval width for the adjusted mean difference. ©2010 The British Psychological Society.
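The inflation effect described above can be seen directly in the standard textbook form of the standard error of the covariate-adjusted mean difference in a two-group analysis of covariance, written here for a single covariate with generic notation rather than the article's:

\[
\mathrm{SE}\!\left(\bar{Y}^{\,\mathrm{adj}}_{1}-\bar{Y}^{\,\mathrm{adj}}_{2}\right)
=\sqrt{\hat{\sigma}^{2}_{e}\left(\frac{1}{n_{1}}+\frac{1}{n_{2}}
+\frac{\left(\bar{X}_{1}-\bar{X}_{2}\right)^{2}}{\sum_{j}\sum_{i}\left(X_{ij}-\bar{X}_{j}\right)^{2}}\right)}
\]

where \(\hat{\sigma}^{2}_{e}\) is the covariate-adjusted error variance and the denominator of the third term is the pooled within-group sum of squares of the covariate. When the reduction in \(\hat{\sigma}^{2}_{e}\) gained from the covariate is outweighed by the term involving \((\bar{X}_{1}-\bar{X}_{2})^{2}\), the adjusted difference has a larger standard error than the unadjusted one; treating the covariate means as random across studies is what connects this term to the Hotelling's T² argument in the abstract.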
NASA Astrophysics Data System (ADS)
Rawat, Kishan Singh; Sehgal, Vinay Kumar; Pradhan, Sanatan; Ray, Shibendu S.
2018-03-01
We have estimated soil moisture (SM) by using the circular horizontal polarization backscattering coefficient (σ°RH), the difference of the circular vertical and horizontal coefficients (σ°RV − σ°RH) from FRS-1 data of the Radar Imaging Satellite (RISAT-1), and surface roughness expressed as RMS height. We examined the performance of FRS-1 in retrieving SM under a wheat crop at the tillering stage. Results revealed that it is possible to develop a good semi-empirical model (SEM) to estimate SM of the upper soil layer using RISAT-1 SAR data, rather than relying on an existing empirical model based on a single parameter, i.e., σ°. Near-surface SM measurements were related to σ°RH and σ°RV − σ°RH derived from a 5.35 GHz (C-band) RISAT-1 image, and to RMS height. The roughness component, expressed as RMS height, showed a good positive correlation with σ°RV − σ°RH (R² = 0.65). By considering all the major influencing factors (σ°RH, σ°RV − σ°RH, and RMS height), an SEM was developed in which the predicted volumetric SM depends on σ°RH, σ°RV − σ°RH, and RMS height. This SEM showed R² of 0.87, adjusted R² of 0.85, multiple R = 0.94, and a standard error of 0.05 at the 95% confidence level. Validation of the SM derived from the semi-empirical model against observed measurements (SM_Observed) showed root mean square error (RMSE) = 0.06, relative RMSE (R-RMSE) = 0.18, mean absolute error (MAE) = 0.04, normalized RMSE (NRMSE) = 0.17, Nash-Sutcliffe efficiency (NSE) = 0.91 (≈1), index of agreement (d) = 1, coefficient of determination (R²) = 0.87, mean bias error (MBE) = 0.04, standard error of estimate (SEE) = 0.10, volume error (VE) = 0.15, and variance of the distribution of differences (S_d²) = 0.004. The developed SEM performed better in estimating SM than the Topp empirical model, which is based only on σ°. Using the developed SEM, topsoil SM can be estimated with a low mean absolute percent error (MAPE) of 1.39 and can be used for operational applications.
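A sketch of fitting and scoring a semi-empirical model of the form above (volumetric SM as a linear function of σ°RH, σ°RV − σ°RH, and RMS height); the functions are generic, the coefficients come from least squares on whatever arrays are supplied, and only a few of the reported skill scores are computed.

```python
import numpy as np

def fit_sem(sigma_rh, sigma_diff, rms_height, sm_obs):
    """Least-squares fit: SM = b0 + b1*sigma_RH + b2*(sigma_RV - sigma_RH) + b3*RMS_height."""
    X = np.column_stack([np.ones_like(sigma_rh), sigma_rh, sigma_diff, rms_height])
    coef, *_ = np.linalg.lstsq(X, sm_obs, rcond=None)
    return coef, X @ coef

def skill(obs, pred):
    """RMSE, MAE, and Nash-Sutcliffe efficiency of predicted vs. observed SM."""
    rmse = np.sqrt(np.mean((pred - obs) ** 2))
    mae = np.mean(np.abs(pred - obs))
    nse = 1.0 - np.sum((pred - obs) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return rmse, mae, nse
```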
Olson, Scott A.; with a section by Veilleux, Andrea G.
2014-01-01
This report provides estimates of flood discharges at selected annual exceedance probabilities (AEPs) for streamgages in and adjacent to Vermont and equations for estimating flood discharges at AEPs of 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent (recurrence intervals of 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-years, respectively) for ungaged, unregulated, rural streams in Vermont. The equations were developed using generalized least-squares regression. Flood-frequency and drainage-basin characteristics from 145 streamgages were used in developing the equations. The drainage-basin characteristics used as explanatory variables in the regression equations include drainage area, percentage of wetland area, and the basin-wide mean of the average annual precipitation. The average standard errors of prediction for estimating the flood discharges at the 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEP with these equations are 34.9, 36.0, 38.7, 42.4, 44.9, 47.3, 50.7, and 55.1 percent, respectively. Flood discharges at selected AEPs for streamgages were computed by using the Expected Moments Algorithm. To improve estimates of the flood discharges for given exceedance probabilities at streamgages in Vermont, a new generalized skew coefficient was developed. The new generalized skew for the region is a constant, 0.44. The mean square error of the generalized skew coefficient is 0.078. This report describes a technique for using results from the regression equations to adjust an AEP discharge computed from a streamgage record. This report also describes a technique for using a drainage-area adjustment to estimate flood discharge at a selected AEP for an ungaged site upstream or downstream from a streamgage. The final regression equations and the flood-discharge frequency data used in this study will be available in StreamStats. StreamStats is a World Wide Web application providing automated regression-equation solutions for user-selected sites on streams.
Clinical height measurements are unreliable: a call for improvement.
Mikula, A L; Hetzel, S J; Binkley, N; Anderson, P A
2016-10-01
Height measurements are currently used to guide imaging decisions that assist in osteoporosis care, but their clinical reliability is largely unknown. We found both clinical height measurements and electronic health record height data to be unreliable. Improvement in height measurement is needed to improve osteoporosis care. The aim of this study is to assess the accuracy and reliability of clinical height measurement in a university healthcare clinical setting. Electronic health record (EHR) review, direct measurement of clinical stadiometer accuracy, and observation of staff height measurement technique at outpatient facilities of the University of Wisconsin Hospital and Clinics. We examined 32 clinical stadiometers for reliability and observed 34 clinic staff perform height measurements at 12 outpatient primary care and specialty clinics. An EHR search identified 4711 men and women age 43 to 89 with no known metabolic bone disease who had more than one height measurement over 3 months. The short study period and exclusion were selected to evaluate change in recorded height not due to pathologic processes. Mean EHR recorded height change (first to last measurement) was -0.02 cm (SD 1.88 cm). Eighteen percent of patients had height measurement differences noted in the EHR of ≥2 cm over 3 months. The technical error of measurement (TEM) was 1.77 cm with a relative TEM of 1.04 %. None of the staff observed performing height measurements followed all recommended height measurement guidelines. Fifty percent of clinic staff reported they on occasion enter patient reported height into the EHR rather than performing a measurement. When performing direct measurements on stadiometers, the mean difference from a gold standard length was 0.24 cm (SD 0.80). Nine percent of stadiometers examined had an error of >1.5 cm. Clinical height measurements and EHR recorded height results are unreliable. Improvement in this measure is needed as an adjunct to improve osteoporosis care.
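For context, the technical error of measurement quoted above is conventionally computed from duplicate measurements as TEM = sqrt(Σd²/2n), with the relative TEM expressed as a percentage of the mean; a brief sketch with hypothetical duplicate heights:

```python
import numpy as np

def technical_error_of_measurement(first, second):
    """TEM from duplicate measurements: sqrt(sum(d^2) / (2n)), plus relative TEM in percent."""
    first = np.asarray(first, float)
    second = np.asarray(second, float)
    d = first - second
    tem = np.sqrt(np.sum(d ** 2) / (2 * d.size))
    relative_tem = 100.0 * tem / np.mean(np.concatenate([first, second]))
    return tem, relative_tem

# Hypothetical duplicate height measurements (cm)
h1 = [170.2, 165.8, 180.1, 158.9]
h2 = [171.0, 165.1, 179.0, 160.2]
print(technical_error_of_measurement(h1, h2))
```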
Standard Errors and Confidence Intervals of Norm Statistics for Educational and Psychological Tests.
Oosterhuis, Hannah E M; van der Ark, L Andries; Sijtsma, Klaas
2016-11-14
Norm statistics allow for the interpretation of scores on psychological and educational tests, by relating the test score of an individual test taker to the test scores of individuals belonging to the same gender, age, or education groups, et cetera. Given the uncertainty due to sampling error, one would expect researchers to report standard errors for norm statistics. In practice, standard errors are seldom reported; they are either unavailable or derived under strong distributional assumptions that may not be realistic for test scores. We derived standard errors for four norm statistics (standard deviation, percentile ranks, stanine boundaries and Z-scores) under the mild assumption that the test scores are multinomially distributed. A simulation study showed that the standard errors were unbiased and that corresponding Wald-based confidence intervals had good coverage. Finally, we discuss the possibilities for applying the standard errors in practical test use in education and psychology. The procedure is provided via the R function check.norms, which is available in the mokken package.
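The analytic standard errors derived in the paper are not reproduced here; as a generic, swapped-in alternative that makes the same point (norm statistics carry sampling error), a nonparametric bootstrap standard error for a percentile rank can be sketched as follows, with hypothetical norm-group scores:

```python
import numpy as np

rng = np.random.default_rng(1)

def percentile_rank(scores, x):
    """Percent of norm-group scores at or below x."""
    return 100.0 * np.mean(np.asarray(scores) <= x)

# Hypothetical norm-group test scores
scores = rng.integers(10, 41, size=300)
x = 28

# Bootstrap standard error of the percentile rank of score x
boot = [percentile_rank(rng.choice(scores, size=scores.size, replace=True), x)
        for _ in range(2000)]
print(percentile_rank(scores, x), np.std(boot, ddof=1))
```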
Green-Ampt approximations: A comprehensive analysis
NASA Astrophysics Data System (ADS)
Ali, Shakir; Islam, Adlul; Mishra, P. K.; Sikka, Alok K.
2016-04-01
The Green-Ampt (GA) model and its modifications are widely used for simulating the infiltration process. Several explicit approximate solutions to the implicit GA model have been developed, with varying degrees of accuracy. In this study, the performance of nine explicit approximations to the GA model is compared with the implicit GA model using published data for a broad range of soil classes and infiltration times. The explicit GA models considered are Li et al. (1976) (LI), Stone et al. (1994) (ST), Salvucci and Entekhabi (1994) (SE), Parlange et al. (2002) (PA), Barry et al. (2005) (BA), Swamee et al. (2012) (SW), Ali et al. (2013) (AL), Almedeij and Esen (2014) (AE), and Vatankhah (2015) (VA). Six statistical indicators (percent relative error, maximum absolute percent relative error, average absolute percent relative error, percent bias, index of agreement, and Nash-Sutcliffe efficiency) and relative computer computation time are used for assessing model performance. Models are ranked based on an overall performance index (OPI). The BA model is found to be the most accurate, followed by the PA and VA models, for a variety of soil classes and infiltration periods. The AE, SW, SE, and LI models also performed comparatively well. Based on the overall performance index, the explicit models are ranked as BA > PA > VA > LI > AE > SE > SW > ST > AL. Results of this study will be helpful in selecting accurate and simple explicit approximate GA models for solving a variety of hydrological problems.
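For reference, the implicit GA relation that all of these explicit formulas approximate, Kt = F − ψΔθ ln(1 + F/(ψΔθ)) for cumulative infiltration F at time t, is straightforward to solve numerically; a minimal sketch with illustrative parameter values (not taken from the study):

```python
import numpy as np
from scipy.optimize import brentq

def ga_cumulative_infiltration(t, K, psi_dtheta):
    """Solve the implicit Green-Ampt relation K*t = F - psi_dtheta*ln(1 + F/psi_dtheta) for F."""
    residual = lambda F: F - psi_dtheta * np.log(1.0 + F / psi_dtheta) - K * t
    return brentq(residual, 1e-12, 1e6)   # F is bracketed between ~0 and a large value

# Illustrative parameters: K = saturated hydraulic conductivity (cm/h),
# psi_dtheta = wetting-front suction head times moisture deficit (cm)
K, psi_dtheta = 0.65, 11.0
for t in [0.25, 1.0, 6.0, 24.0]:          # hours
    print(t, round(ga_cumulative_infiltration(t, K, psi_dtheta), 3))
```

An explicit approximation is judged by its percent relative error against this implicit solution, which is the comparison underlying the rankings reported above.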
SIRTF Focal Plane Survey: A Pre-flight Error Analysis
NASA Technical Reports Server (NTRS)
Bayard, David S.; Brugarolas, Paul B.; Boussalis, Dhemetrios; Kang, Bryan H.
2003-01-01
This report contains a pre-flight error analysis of the calibration accuracies expected from implementing the currently planned SIRTF focal plane survey strategy. The main purpose of this study is to verify that the planned strategy will meet focal plane survey calibration requirements (as put forth in the SIRTF IOC-SV Mission Plan [4]), and to quantify the actual accuracies expected. The error analysis was performed by running the Instrument Pointing Frame (IPF) Kalman filter on a complete set of simulated IOC-SV survey data, and studying the resulting propagated covariances. The main conclusion of this study is that all focal plane calibration requirements can be met with the currently planned survey strategy. The associated margins range from 3 to 95 percent, and tend to be smallest for frames having a 0.14" requirement, and largest for frames having a more generous 0.28" (or larger) requirement. The smallest margin of 3 percent is associated with the IRAC 3.6 and 5.8 micron array centers (frames 068 and 069), and the largest margin of 95 percent is associated with the MIPS 160 micron array center (frame 087). For pointing purposes, the most critical calibrations are for the IRS Peakup sweet spots and short wavelength slit centers (frames 019, 023, 052, 028, 034). Results show that these frames are meeting their 0.14" requirements with an expected accuracy of approximately 0.1", which corresponds to a 28 percent margin.
Hypothesis Testing Using Factor Score Regression
Devlieger, Ines; Mayer, Axel; Rosseel, Yves
2015-01-01
In this article, an overview is given of four methods to perform factor score regression (FSR), namely regression FSR, Bartlett FSR, the bias avoiding method of Skrondal and Laake, and the bias correcting method of Croon. The bias correcting method is extended to include a reliable standard error. The four methods are compared with each other and with structural equation modeling (SEM) by using analytic calculations and two Monte Carlo simulation studies to examine their finite sample characteristics. Several performance criteria are used, such as the bias using the unstandardized and standardized parameterization, efficiency, mean square error, standard error bias, type I error rate, and power. The results show that the bias correcting method, with the newly developed standard error, is the only suitable alternative for SEM. While it has a higher standard error bias than SEM, it has a comparable bias, efficiency, mean square error, power, and type I error rate. PMID:29795886
Coniferous forest classification and inventory using Landsat and digital terrain data
NASA Technical Reports Server (NTRS)
Franklin, J.; Logan, T. L.; Woodcock, C. E.; Strahler, A. H.
1986-01-01
Machine-processing techniques were used in a Forest Classification and Inventory System (FOCIS) procedure to extract and process tonal, textural, and terrain information from registered Landsat multispectral and digital terrain data. Using FOCIS as a basis for stratified sampling, the softwood timber volumes of the Klamath National Forest and Eldorado National Forest were estimated within standard errors of 4.8 and 4.0 percent, respectively. The accuracy of these large-area inventories is comparable to the accuracy yielded by use of conventional timber inventory methods, but, because of automation, the FOCIS inventories are more rapid (9-12 months compared to 2-3 years for conventional manual photointerpretation, map compilation and drafting, field sampling, and data processing) and are less costly.
NASA Technical Reports Server (NTRS)
Warshawsky, I.
1972-01-01
Total pressure in a calibration chamber is determined by measuring the force on a disk suspended in an orifice in the baseplate of the chamber. The disk forms a narrow annular gap with the orifice. A continuous flow of calibration gas passes through the chamber and annulus to a downstream pumping system. The ratio of pressures on the two faces of the disk exceeds 100:1, so that the net force on the disk is substantially equal to the product of chamber pressure and disk area. This force is measured with an electrodynamometer that can be calibrated in situ with dead weights. Probable error in pressure measurement is plus or minus (0.5 microtorr + 0.6 percent).
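A back-of-the-envelope sketch of the force-balance reading and the quoted probable error, with illustrative numbers that are not from the report:

```python
# Chamber pressure from the disk force balance: p = F / A when the pressure on the
# back face of the disk is negligible (pressure ratio across the disk > 100:1).
force = 2.0e-2          # net force on the disk, dyn (illustrative value)
area = 20.0             # disk area, cm^2 (illustrative value)
p_dyn_cm2 = force / area             # 1 dyn/cm^2 = 1 microbar
p_microtorr = p_dyn_cm2 / 1.333e-3   # 1 microtorr is about 1.333e-3 dyn/cm^2
# Probable error quoted in the abstract: +/- (0.5 microtorr + 0.6 percent of reading)
probable_error = 0.5 + 0.006 * p_microtorr
print(round(p_microtorr, 3), round(probable_error, 3))
```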
Generation of complete source samples from the Slew Survey
NASA Technical Reports Server (NTRS)
Schachter, Jonathan
1992-01-01
The Einstein Slew Survey consists of 819 bright X-ray sources, of which 636 (or 78 percent) are identified with counterparts in standard catalogs. We argue for the importance of bright X-ray surveys, and compare the slew results to the ROSAT all-sky survey. Also, we discuss statistical techniques for minimizing confusion in arcminute error circles in digitized data. We describe the 238 Slew Survey AGN, clusters, and BL Lac objects identified to date and their implications for logN-logS and source evolution studies. Also given is a catalog of 1075 sources detected in the Einstein Imaging Proportional Counter (IPC) Slew Survey of the X-ray sky. Five hundred fifty-four of these sources were not previously known as X-ray sources.
Polak, Rani; Pober, David; Morris, Avigail; Arieli, Rakefet; Moore, Margaret; Berry, Elliot; Ziv, Mati
The Community Culinary Coaching Program is a community-based participatory program aimed at improving communal settlement residents' nutrition. The residents, central kitchens, preschools, and communal dining rooms were identified as areas for intervention. Evaluation included goal accomplishment, assessed by food purchases by the central kitchens, and residents' feedback through focus groups. Purchasing included more vegetables (mean (standard error) percent change: +7% (4); P = .32), fish (+115% (11); P < .001), and whole grains and legumes (+77% (9); P < .001), and less soup powders (-40% (9); P < .05), processed beef (-55% (8); P < .001), and margarine (-100% (4); P < .001). Residents recommended continuing the program beyond the project duration. This model might be useful in organizations with communal dining facilities.
AVIRIS calibration using the cloud-shadow method
NASA Technical Reports Server (NTRS)
Carder, K. L.; Reinersman, P.; Chen, R. F.
1993-01-01
More than 90 percent of the signal at an ocean-viewing, satellite sensor is due to the atmosphere, so a 5 percent sensor-calibration error viewing a target that contributes but 10 percent of the signal received at the sensor may result in a target-reflectance error of more than 50 percent. Since prelaunch calibration accuracies of 5 percent are typical of space-sensor requirements, recalibration of the sensor using ground-base methods is required for low-signal target. Known target reflectance or water-leaving radiance spectra and atmospheric correction parameters are required. In this article we describe an atmospheric-correction method that uses cloud shadowed pixels in combination with pixels in a neighborhood region of similar optical properties to remove atmospheric effects from ocean scenes. These neighboring pixels can then be used as known reflectance targets for validation of the sensor calibration and atmospheric correction. The method uses the difference between water-leaving radiance values for these two regions. This allows nearly identical optical contributions to the two signals (e.g., path radiance and Fresnel-reflected skylight) to be removed, leaving mostly solar photons backscattered from beneath the sea to dominate the residual signal. Normalization by incident solar irradiance reaching the sea surface provides the remote-sensing reflectance of the ocean at the location of the neighbor region.
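In rough terms (a schematic proportion, not the paper's formal error budget), if the target contributes a fraction $f_t$ of the total signal and the calibration error is a fraction $\epsilon$ of the total signal, the relative error in the retrieved target reflectance is on the order of
\[ \frac{\Delta \rho_t}{\rho_t} \approx \frac{\epsilon}{f_t} = \frac{0.05}{0.10} = 0.5 , \]
that is, roughly 50 percent for the 5 percent calibration error and 10 percent target contribution cited above.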
Oates, R P; Mcmanus, Michelle; Subbiah, Seenivasan; Klein, David M; Kobelski, Robert
2017-07-14
Internal standards are essential in electrospray ionization liquid chromatography-mass spectrometry (ESI-LC-MS) to correct for systematic error associated with ionization suppression and/or enhancement. A wide array of instrument setups and interfaces has created difficulty in comparing the quantitation of absolute analyte response across laboratories. This communication demonstrates the use of primary standards as operational qualification standards for LC-MS instruments and their comparison with commonly accepted internal standards. In monitoring the performance of internal standards for perfluorinated compounds, potassium hydrogen phthalate (KHP) presented lower inter-day variability in instrument response than a commonly accepted deuterated perfluorinated internal standard (d3-PFOS), with percent relative standard deviations less than or equal to 6%. The inter-day precision of KHP was greater than d3-PFOS over a 28-day monitoring of perfluorooctanesulfonic acid (PFOS), across concentrations ranging from 0 to 100μg/L. The primary standard trometamol (Trizma) performed as well as known internal standards simeton and tris (2-chloroisopropyl) phosphate (TCPP), with intra-day precision of Trizma response as low as 7% RSD on day 28. The inter-day precision of Trizma response was found to be greater than simeton and TCPP, across concentrations of neonicotinoids ranging from 1 to 100μg/L. This study explores the potential of primary standards to be incorporated into LC-MS/MS methodology to improve the quantitative accuracy in environmental contaminant analysis. Copyright © 2017 Elsevier B.V. All rights reserved.
24 CFR 982.604 - SRO: Voucher housing assistance payment.
Code of Federal Regulations, 2010 CFR
2010-04-01
... URBAN DEVELOPMENT SECTION 8 TENANT BASED ASSISTANCE: HOUSING CHOICE VOUCHER PROGRAM Special Housing... residing in SRO housing, the payment standard is 75 percent of the zero-bedroom payment standard amount on... payment standard is 75 percent of the HUD-approved zero-bedroom exception payment standard amount. (b) The...
24 CFR 982.604 - SRO: Voucher housing assistance payment.
Code of Federal Regulations, 2011 CFR
2011-04-01
... URBAN DEVELOPMENT SECTION 8 TENANT BASED ASSISTANCE: HOUSING CHOICE VOUCHER PROGRAM Special Housing... residing in SRO housing, the payment standard is 75 percent of the zero-bedroom payment standard amount on... payment standard is 75 percent of the HUD-approved zero-bedroom exception payment standard amount. (b) The...
Acceptance sampling for attributes via hypothesis testing and the hypergeometric distribution
NASA Astrophysics Data System (ADS)
Samohyl, Robert Wayne
2017-10-01
This paper questions some aspects of attribute acceptance sampling in light of the original concepts of hypothesis testing from Neyman and Pearson (NP). Attribute acceptance sampling in industry, as developed by Dodge and Romig (DR), generally follows the international standards of ISO 2859, and similarly the Brazilian standards NBR 5425 to NBR 5427 and the United States standards ANSI/ASQC Z1.4. The paper evaluates and extends the area of acceptance sampling in two directions. First, by suggesting the use of the hypergeometric distribution to calculate the parameters of sampling plans, avoiding the unnecessary use of approximations such as the binomial or Poisson distributions. We show that, under usual conditions, discrepancies can be large. The conclusion is that the hypergeometric distribution, ubiquitously available in commonly used software, is more appropriate than other distributions for acceptance sampling. Second, and more importantly, we elaborate the theory of acceptance sampling in terms of hypothesis testing, rigorously following the original concepts of NP. By offering a common theoretical structure, hypothesis testing from NP can produce a better understanding of applications even beyond the usual areas of industry and commerce, such as public health and political polling. With the new procedures, both sample size and sampling error can be reduced. What is unclear in traditional acceptance sampling is the necessity of linking the acceptable quality limit (AQL) exclusively to the producer and the lot tolerance percent defective (LTPD) exclusively to the consumer. In reality, the consumer should also be preoccupied with a value of AQL, as should the producer with LTPD. Furthermore, we can also question why type I error is always uniquely associated with the producer as producer risk, and likewise the same question arises with consumer risk, which is necessarily associated with type II error. The resolution of these questions is new to the literature. The article presents R code throughout.
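The article itself presents R code; as an illustration of its first point in Python with assumed plan parameters, the lot-acceptance probability of a c-or-fewer-defectives plan can be computed exactly with the hypergeometric distribution and compared with the binomial approximation:

```python
from scipy.stats import binom, hypergeom

# Assumed single-sampling plan: lot size N, sample size n, acceptance number c
N, n, c = 500, 50, 2
for defectives_in_lot in (10, 25, 50):                          # lot quality levels to compare
    p = defectives_in_lot / N
    p_accept_hyper = hypergeom.cdf(c, N, defectives_in_lot, n)  # exact, finite lot
    p_accept_binom = binom.cdf(c, n, p)                         # common approximation
    print(defectives_in_lot, round(p_accept_hyper, 4), round(p_accept_binom, 4))
```

For small lots relative to the sample size, the two acceptance probabilities can differ appreciably, which is the discrepancy the paper highlights.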
Helicopter sling load accident/incident survey: 1968 - 1974
NASA Technical Reports Server (NTRS)
Shaughnessy, J. D.; Pardue, M. D.
1977-01-01
During the period considered a mean of eleven accidents per year occurred and a mean of eleven persons were killed or seriously injured per year. Forty-one percent of the accidents occurred during hover, and 63 percent of the accidents had pilot error listed as a cause/factor. Many accidents involved pilots losing control of the helicopter or allowing a collision with obstructions to occur. There was a mean of 58 incidents each year and 51 percent of these occurred during cruise.
Taylor, Diane M; Chow, Fotini K; Delkash, Madjid; Imhoff, Paul T
2016-10-01
Landfills are a significant contributor to anthropogenic methane emissions, but measuring these emissions can be challenging. This work uses numerical simulations to assess the accuracy of the tracer dilution method, which is used to estimate landfill emissions. Atmospheric dispersion simulations with the Weather Research and Forecast model (WRF) are run over Sandtown Landfill in Delaware, USA, using observation data to validate the meteorological model output. A steady landfill methane emissions rate is used in the model, and methane and tracer gas concentrations are collected along various transects downwind from the landfill for use in the tracer dilution method. The calculated methane emissions are compared to the methane emissions rate used in the model to find the percent error of the tracer dilution method for each simulation. The roles of different factors are examined: measurement distance from the landfill, transect angle relative to the wind direction, speed of the transect vehicle, tracer placement relative to the hot spot of methane emissions, complexity of topography, and wind direction. Results show that percent error generally decreases with distance from the landfill, where the tracer and methane plumes become well mixed. Tracer placement has the largest effect on percent error, and topography and wind direction both have significant effects, with measurement errors ranging from -12% to 42% over all simulations. Transect angle and transect speed have small to negligible effects on the accuracy of the tracer dilution method. These tracer dilution method simulations provide insight into measurement errors that might occur in the field, enhance understanding of the method's limitations, and aid interpretation of field data. Copyright © 2016 Elsevier Ltd. All rights reserved.
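A minimal sketch of the tracer dilution calculation being evaluated, assuming a known tracer release rate (SF6 is used here as the assumed tracer) and background-corrected plume-transect concentrations; all values are hypothetical:

```python
import numpy as np

def tracer_dilution_emission(q_tracer_kg_h, c_ch4_ppb, c_tracer_ppb,
                             mw_ch4=16.04, mw_tracer=146.06):
    """Methane emission estimate from the tracer dilution method.

    q_tracer_kg_h : known tracer release rate (assumed SF6), kg/h
    c_ch4_ppb, c_tracer_ppb : background-corrected mole fractions along a downwind transect
    The molecular-weight ratio converts the integrated mole ratio to a mass ratio.
    """
    ratio = np.sum(c_ch4_ppb) / np.sum(c_tracer_ppb)   # integrated plume ratio
    return q_tracer_kg_h * ratio * (mw_ch4 / mw_tracer)

# Hypothetical transect values (background already subtracted)
c_ch4 = np.array([0.0, 40.0, 120.0, 90.0, 30.0, 0.0])   # ppb above background
c_sf6 = np.array([0.0, 0.8, 2.3, 1.9, 0.6, 0.0])        # ppb above background
q_ch4 = tracer_dilution_emission(2.0, c_ch4, c_sf6)

true_rate = 10.0                                         # emission rate used in a model run, kg/h
percent_error = 100.0 * (q_ch4 - true_rate) / true_rate
print(round(q_ch4, 2), round(percent_error, 1))
```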
Impact of device level faults in a digital avionic processor
NASA Technical Reports Server (NTRS)
Suk, Ho Kim
1989-01-01
This study describes an experimental analysis of the impact of gate and device-level faults in the processor of a Bendix BDX-930 flight control system. Via mixed-mode simulation, faults were injected at the gate (stuck-at) and at the transistor level, and their propagation through the chip to the output pins was measured. The results show that there is little correspondence between a stuck-at and a device-level fault model, as far as error activity or detection within a functional unit is concerned. Insofar as error activity outside the injected unit and at the output pins is concerned, the stuck-at and device models track each other. The stuck-at model, however, overestimates, by over 100 percent, the probability of fault propagation to the output pins. An evaluation of the Mean Error Durations and the Mean Time Between Errors at the output pins shows that the stuck-at model significantly underestimates (by 62 percent) the impact of an internal chip fault on the output pins. Finally, the study also quantifies the impact of device faults by location, both internally and at the output pins.
7 CFR 29.2662 - Heavy Leaf (B Group).
Code of Federal Regulations, 2011 CFR
2011-01-01
... Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER... percent uniform, and 40 percent injury tolerance. B3M Good Mixed Color or Variegated Heavy Leaf. Medium to...
7 CFR 29.2662 - Heavy Leaf (B Group).
Code of Federal Regulations, 2010 CFR
2010-01-01
... Agriculture Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE COMMODITY STANDARDS AND STANDARD CONTAINER... percent uniform, and 40 percent injury tolerance. B3M Good Mixed Color or Variegated Heavy Leaf. Medium to...
Willem W.S. van Hees
2002-01-01
Comparisons of estimated standard error for a ratio-of-means (ROM) estimator are presented for forest resource inventories conducted in southeast Alaska between 1995 and 2000. Estimated standard errors for the ROM were generated by using a traditional variance estimator and also approximated by bootstrap methods. Estimates of standard error generated by both...
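A generic sketch of the comparison described above, contrasting a Taylor-linearized standard error for a ratio-of-means estimator with a bootstrap approximation; the plot values are made up and the estimator details are simplified relative to the actual inventory design:

```python
import numpy as np

rng = np.random.default_rng(7)

def rom_estimate(y, x):
    """Ratio-of-means estimator: sum(y) / sum(x), i.e., ybar / xbar."""
    return np.sum(y) / np.sum(x)

def rom_se_linearized(y, x):
    """Textbook Taylor-linearized SE of the ratio estimator for simple random sampling."""
    n, r, xbar = y.size, rom_estimate(y, x), np.mean(x)
    resid = y - r * x
    return np.sqrt(np.sum(resid ** 2) / (n * (n - 1))) / xbar

def rom_se_bootstrap(y, x, reps=2000):
    """Bootstrap SE: resample plots with replacement and recompute the ratio."""
    idx = rng.integers(0, y.size, size=(reps, y.size))
    boot = np.sum(y[idx], axis=1) / np.sum(x[idx], axis=1)
    return np.std(boot, ddof=1)

# Hypothetical plot data: x = plot area, y = plot timber volume
x = rng.gamma(4.0, 25.0, size=60)
y = 1.1 * x + rng.normal(0, 12.0, size=60)
print(rom_estimate(y, x), rom_se_linearized(y, x), rom_se_bootstrap(y, x))
```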
SU-E-T-88: Comprehensive Automated Daily QA for Hypo- Fractionated Treatments
DOE Office of Scientific and Technical Information (OSTI.GOV)
McGuinness, C; Morin, O
2014-06-01
Purpose: The trend towards more SBRT treatments with fewer high dose fractions places increased importance on daily QA. Patient plan specific QA with 3%/3mm gamma analysis and daily output constancy checks may not be enough to guarantee the level of accuracy required for SBRT treatments. But increasing the already extensive amount of QA procedures that are required is a daunting proposition. We performed a feasibility study for more comprehensive automated daily QA that could improve the diagnostic capabilities of QA without increasing workload. Methods: We performed the study on a Siemens Artiste linear accelerator using the integrated flat panel EPID. We included square fields, a picket fence, overlap and representative IMRT fields to measure output, flatness, symmetry, beam center, and percent difference from the standard. We also imposed a set of machine errors: MLC leaf position, machine output, and beam steering to compare with the standard. Results: Daily output was consistent within ±1%. Change in steering current by 1.4% and 2.4% resulted in a 3.2% and 6.3% change in flatness. 1- and 2-mm MLC leaf offset errors were visibly obvious in difference plots, but passed a 3%/3mm gamma analysis. A simple test of transmission in a picket fence can catch an offset error of a single leaf by 1 mm. The entire morning QA sequence is performed in less than 30 minutes and images are automatically analyzed. Conclusion: Automated QA procedures could be used to provide more comprehensive information about the machine with less time and human involvement. We have also shown that other simple tests are better able to catch MLC leaf position errors than a 3%/3mm gamma analysis commonly used for IMRT and modulated arc treatments. Finally, this information could be used to watch trends of the machine and predict problems before they lead to costly machine downtime.
Vinciarelli, Alessandro
2005-12-01
This work presents categorization experiments performed over noisy texts. By noisy, we mean any text obtained through an extraction process (affected by errors) from media other than digital texts (e.g., transcriptions of speech recordings extracted with a recognition system). The performance of a categorization system over the clean and noisy (Word Error Rate between approximately 10 and approximately 50 percent) versions of the same documents is compared. The noisy texts are obtained through handwriting recognition and simulation of optical character recognition. The results show that the performance loss is acceptable for Recall values up to 60-70 percent depending on the noise sources. New measures of the extraction process performance, allowing a better explanation of the categorization results, are proposed.
NASA Technical Reports Server (NTRS)
Srivastava, R. C.; Coen, J. L.
1992-01-01
The traditional explicit growth equation has been widely used to calculate the growth and evaporation of hydrometeors by the diffusion of water vapor. This paper reexamines the assumptions underlying the traditional equation and shows that large errors (10-30 percent in some cases) result if it is used carelessly. More accurate explicit equations are derived by approximating the saturation vapor-density difference as a quadratic rather than a linear function of the temperature difference between the particle and ambient air. These new equations, which reduce the error to less than a few percent, merit inclusion in a broad range of atmospheric models.
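Interpreted in general terms (this is the standard Taylor-expansion view, not a transcription of the paper's equations), the traditional explicit equation keeps only the linear term in the expansion of the saturation vapor density about the ambient temperature, while the more accurate explicit equations also retain the quadratic term:
\[ \rho_s(T_p) - \rho_s(T_a) \approx \left.\frac{d\rho_s}{dT}\right|_{T_a}\,(T_p - T_a) + \tfrac{1}{2}\left.\frac{d^{2}\rho_s}{dT^{2}}\right|_{T_a}\,(T_p - T_a)^{2} , \]
where T_p is the particle temperature and T_a the ambient temperature.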
Toward Joint Hypothesis-Tests Seismic Event Screening Analysis: Ms|mb and Event Depth
DOE Office of Scientific and Technical Information (OSTI.GOV)
Anderson, Dale; Selby, Neil
2012-08-14
Well-established theory can be used to combine single-phenomenology hypothesis tests into a multi-phenomenology event-screening hypothesis test (Fisher's and Tippett's tests). The commonly used standard error in the Ms:mb event-screening hypothesis test is not fully consistent with its physical basis. An improved standard error gives better agreement with the physical basis: it correctly partitions error to include model error as a component of variance, and it correctly reduces station noise variance through network averaging. For the 2009 DPRK test, the commonly used standard error 'rejects' H0 even with the better scaling slope (β = 1; Selby et al.), whereas the improved standard error 'fails to reject' H0.
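A small sketch of the combination step named above, forming a multi-phenomenology screening p-value from single-phenomenology p-values by Fisher's and Tippett's methods; the p-values are placeholders:

```python
import numpy as np
from scipy.stats import chi2

def fisher_combined_p(pvalues):
    """Fisher's method: -2*sum(ln p) follows chi-square with 2k degrees of freedom under H0."""
    p = np.asarray(pvalues, float)
    stat = -2.0 * np.sum(np.log(p))
    return chi2.sf(stat, df=2 * p.size)

def tippett_combined_p(pvalues):
    """Tippett's method: combined p-value based on the minimum, P = 1 - (1 - min p)^k."""
    p = np.asarray(pvalues, float)
    return 1.0 - (1.0 - p.min()) ** p.size

# Placeholder single-phenomenology p-values, e.g., from an Ms:mb test and a depth test
p_single = [0.20, 0.04]
print(fisher_combined_p(p_single), tippett_combined_p(p_single))
```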
Maassen, Gerard H
2010-08-01
In this Journal, Lewis and colleagues introduced a new Reliable Change Index (RCI(WSD)), which incorporated the within-subject standard deviation (WSD) of a repeated measurement design as the standard error. In this note, two opposite errors in using WSD this way are demonstrated. First, being the standard error of measurement of only a single assessment makes WSD too small when practice effects are absent. Then, too many individuals will be designated reliably changed. Second, WSD can grow unlimitedly to the extent that differential practice effects occur. This can even make RCI(WSD) unable to detect any reliable change.
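For orientation only (a sketch of the two definitions being contrasted, not the note's derivation), the classical Jacobson-Truax RCI divides the observed change by a standard error of the difference built from the baseline SD and test-retest reliability, whereas RCI(WSD) substitutes the within-subject SD of the repeated design as the standard error:

```python
import numpy as np

def rci_jacobson_truax(x1, x2, sd_baseline, reliability):
    """Classic RCI: change divided by the standard error of the difference score."""
    sem = sd_baseline * np.sqrt(1.0 - reliability)   # standard error of measurement
    se_diff = np.sqrt(2.0) * sem                     # standard error of the difference
    return (x2 - x1) / se_diff

def rci_wsd(x1, x2, wsd):
    """Schematic RCI(WSD): within-subject SD used directly as the standard error."""
    return (x2 - x1) / wsd

# Hypothetical scores and measurement properties
print(rci_jacobson_truax(x1=24, x2=30, sd_baseline=8.0, reliability=0.85),
      rci_wsd(x1=24, x2=30, wsd=2.2))
```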
Hydrograph simulation models of the Hillsborough and Alafia Rivers, Florida: a preliminary report
Turner, James F.
1972-01-01
Mathematical (digital) models that simulate flood hydrographs from rainfall records have been developed for the following gaging stations in the Hillsborough and Alafia River basins of west-central Florida: Hillsborough River near Tampa, Alafia River at Lithia, and North Prong Alafia River near Keysville. These models, which were developed from historical streamflow and rainfall records, are based on rainfall-runoff and unit-hydrograph procedures involving an arbitrary separation of the flood hydrograph. The models assume the flood hydrograph to be composed of only two flow components: direct (storm) runoff and base flow. Expressions describing these two flow components are derived from streamflow and rainfall records and are combined analytically to form algorithms (models), which are programmed for processing on a digital computing system. Most Hillsborough and Alafia River flood discharges can be simulated with expected relative errors less than or equal to 30 percent, and flood peaks can be simulated with average relative errors less than 15 percent. Because of the inadequate rainfall network used to obtain input data for the North Prong Alafia River model, simulated peaks are frequently in error by more than 40 percent, particularly for storms having highly variable areal rainfall distribution. Simulation errors are the result of rainfall sampling errors and, to a lesser extent, model inadequacy. Data errors associated with the determination of mean basin precipitation are the result of the small number and poor areal distribution of rainfall stations available for use in the study. Model inadequacy, however, is attributed to the basic underlying theory, particularly the rainfall-runoff relation. These models broaden and enhance existing water-management capabilities within these basins by allowing the establishment and implementation of programs providing for continued development in these areas. Specifically, the models serve not only as a basis for forecasting floods, but also for simulating hydrologic information needed in flood-plain mapping and in delineating and evaluating alternative flood-control and abatement plans.
Patient safety education at Japanese medical schools: results of a nationwide survey.
Maeda, Shoichi; Kamishiraki, Etsuko; Starkey, Jay
2012-05-10
Patient safety education, including error prevention strategies and management of adverse events, has become a topic of worldwide concern. The importance of patient safety is also recognized in Japan following two serious medical accidents in 1999. Furthermore, the 2008 revision of the educational curriculum guidelines by the Ministry of Education includes patient safety as part of the core medical curriculum. However, little is known about patient safety education in Japanese medical schools, partly because a comprehensive study has not yet been conducted in this field. Therefore, we conducted a nationwide survey to clarify the current status of patient safety education at medical schools in Japan. The response rate was 60.0% (n = 48/80). Ninety-eight percent of respondents (n = 47/48) reported integration of patient safety education into their curricula. Thirty-nine percent reported devoting less than five hours to the topic. All schools that teach patient safety reported use of lecture-based teaching methods, while few used alternative methods such as role-playing or in-hospital training. Topics related to medical error theory and the legal ramifications of error are widely taught, while practical topics related to error analysis, such as root cause analysis, are less often covered. Based on responses to our survey, most Japanese medical schools have incorporated the topic of patient safety into their curricula. However, the number of hours devoted to patient safety education falls well short of a sufficient level, with forty percent of medical schools devoting five hours or less to it. In addition, most medical schools rely only on lecture-based learning and lack diversity in teaching methods. Although most medical schools cover basic error theory, error analysis is taught at fewer schools. We still need to make improvements to our medical safety curricula. We believe that this study has implications for the rest of the world as a model of what is possible and a sounding board for what topics might be important.
Akazawa, Manabu; Stearns, Sally C; Biddle, Andrea K
2008-01-01
Objective: To assess costs, effectiveness, and cost-effectiveness of inhaled corticosteroids (ICS) augmenting bronchodilator treatment for chronic obstructive pulmonary disease (COPD). Data Sources: Claims between 1997 and 2005 from a large managed care database. Study Design: Individual-level, fixed-effects regression models estimated the effects of initiating ICS on medical expenses and likelihood of severe exacerbation. Bootstrapping provided estimates of the incremental cost per severe exacerbation avoided. Data Extraction Methods: COPD patients aged 40 or older with ≥15 months of continuous eligibility were identified. Monthly observations for 1 year before and up to 2 years following initiation of bronchodilators were constructed. Principal Findings: ICS treatment reduced monthly risk of severe exacerbation by 25 percent. Total costs with ICS increased for 16 months, but declined thereafter. ICS use was cost saving 46 percent of the time, with an incremental cost-effectiveness ratio of $2,973 per exacerbation avoided; for patients ≥50 years old, ICS was cost saving 57 percent of the time. Conclusions: ICS treatment reduces exacerbations, with an increase in total costs initially for the full sample. Compared with younger patients with COPD, patients aged 50 or older have reduced costs and improved outcomes. The estimated cost per severe exacerbation avoided, however, may be high for either group because of uncertainty as reflected by the large standard errors of the parameter estimates. PMID:18671750
Semenova, Vera A.; Steward-Clark, Evelene; Maniatis, Panagiotis; Epperson, Monica; Sabnis, Amit; Schiffer, Jarad
2017-01-01
To improve surge testing capability for a response to a release of Bacillus anthracis, the CDC anti-Protective Antigen (PA) IgG Enzyme-Linked Immunosorbent Assay (ELISA) was re-designed into a high throughput screening format. The following assay performance parameters were evaluated: goodness of fit (measured as the mean reference standard r2), accuracy (measured as percent error), precision (measured as coefficient of variance (CV)), lower limit of detection (LLOD), lower limit of quantification (LLOQ), dilutional linearity, diagnostic sensitivity (DSN) and diagnostic specificity (DSP). The paired sets of data for each sample were evaluated by Concordance Correlation Coefficient (CCC) analysis. The goodness of fit was 0.999; percent error between the expected and observed concentration for each sample ranged from −4.6% to 14.4%. The coefficient of variance ranged from 9.0% to 21.2%. The assay LLOQ was 2.6 μg/mL. The regression analysis results for dilutional linearity data were r2 = 0.952, slope = 1.02 and intercept = −0.03. CCC between assays was 0.974 for the median concentration of serum samples. The accuracy and precision components of CCC were 0.997 and 0.977, respectively. This high throughput screening assay is precise, accurate, sensitive and specific. Anti-PA IgG concentrations determined using two different assays proved high levels of agreement. The method will improve surge testing capability 18-fold from 4 to 72 sera per assay plate. PMID:27814939
40 CFR 80.1405 - What are the Renewable Fuel Standards?
Code of Federal Regulations, 2010 CFR
2010-07-01
... standard for 2010 shall be 0.004 percent. (2) The value of the biomass-based diesel standard for 2010 shall... = The biomass-based diesel standard for year i, in percent. StdAB,i = The advanced biofuel standard for... volume of biomass-based diesel required by section 211(o)(2)(B) of the Clean Air Act for year i, in...
Sapkota, K; Pirouzian, A; Matta, N S
2013-01-01
Refractive error is a common cause of amblyopia. The aim was to determine the prevalence of amblyopia and the pattern and types of refractive error in children with amblyopia in a tertiary eye hospital of Nepal. A retrospective chart review of children diagnosed with amblyopia in the Nepal Eye Hospital (NEH) from July 2006 to June 2011 was conducted. Children aged 13 years or older, or who had any ocular pathology, were excluded. Cycloplegic refraction and an ophthalmological examination were performed for all children. The pattern of refractive error and the association between types of refractive error and types of amblyopia were determined. Amblyopia was found in 0.7% (440) of 62,633 children examined in NEH during this period. All the amblyopic eyes of the subjects had refractive error. Fifty-six percent (248) of the patients were male, and the mean age was 7.74 ± 2.97 years. Anisometropia was the most common cause of amblyopia (p < 0.001). Nearly one third (29%) of the subjects had bilateral amblyopia due to high ametropia. Forty percent of eyes had severe amblyopia with visual acuity of 20/120 or worse. About two-thirds (59.2%) of the eyes had astigmatism. The prevalence of amblyopia in the Nepal Eye Hospital is 0.7%. Anisometropia is the most common cause of amblyopia. Astigmatism is the most common type of refractive error in amblyopic eyes. © NEPjOPH.
Bakken, Suzanne; Cimino, James J.; Haskell, Robert; Kukafka, Rita; Matsumoto, Cindi; Chan, Garrett K.; Huff, Stanley M.
2000-01-01
Objective: The purpose of this study was to test the adequacy of the Clinical LOINC (Logical Observation Identifiers, Names, and Codes) semantic structure as a terminology model for standardized assessment measures. Methods: After extension of the definitions, 1,096 items from 35 standardized assessment instruments were dissected into the elements of the Clinical LOINC semantic structure. An additional coder dissected at least one randomly selected item from each instrument. When multiple scale types occurred in a single instrument, a second coder dissected one randomly selected item representative of each scale type. Results: The results support the adequacy of the Clinical LOINC semantic structure as a terminology model for standardized assessments. Using the revised definitions, the coders were able to dissect into the elements of Clinical LOINC all the standardized assessment items in the sample instruments. Percentage agreement for each element was as follows: component, 100 percent; property, 87.8 percent; timing, 82.9 percent; system/sample, 100 percent; scale, 92.6 percent; and method, 97.6 percent. Discussion: This evaluation was an initial step toward the representation of standardized assessment items in a manner that facilitates data sharing and re-use. Further clarification of the definitions, especially those related to time and property, is required to improve inter-rater reliability and to harmonize the representations with similar items already in LOINC. PMID:11062226
Design of a Pneumatic Tool for Manual Drilling Operations in Confined Spaces
NASA Astrophysics Data System (ADS)
Janicki, Benjamin
This master's thesis describes the design process and testing results for a pneumatically actuated, manually operated tool for confined-space drilling operations. The purpose of this device is to back-drill pilot holes inside a commercial airplane wing. It is lightweight, and a "locator pin" enables the operator to align the drill over a pilot hole. A suction pad stabilizes the system, and an air motor and flexible drive shaft power the drill. Two testing procedures were performed to determine the practicality of this prototype. The first was the "offset drill test", which quantified the exit-hole position error due to an initial position error relative to the original pilot hole. The results displayed a linear relationship, and it was determined that position errors of less than .060" would prevent the need for rework, with errors of up to .030" considered acceptable. For the second test, a series of holes were drilled with the pneumatic tool and analyzed for position error, diameter range, and cycle time. The position errors and hole diameter range were within the allowed tolerances. The average cycle time was 45 seconds, 73 percent of which was for drilling the hole and 27 percent of which was for positioning the device. Recommended improvements are discussed in the conclusion and include a more durable flexible drive shaft, a damper for drill feed control, and a more stable locator pin.
Waltemeyer, Scott D.
2008-01-01
Estimates of the magnitude and frequency of peak discharges are necessary for the reliable design of bridges and culverts, for open-channel hydraulic analysis, and for flood-hazard mapping in New Mexico and surrounding areas. The U.S. Geological Survey, in cooperation with the New Mexico Department of Transportation, updated estimates of peak-discharge magnitude for gaging stations in the region and updated regional equations for estimation of peak discharge and frequency at ungaged sites. Equations were developed for estimating the magnitude of peak discharges for recurrence intervals of 2, 5, 10, 25, 50, 100, and 500 years at ungaged sites by use of data collected through 2004 for 293 gaging stations on unregulated streams that have 10 or more years of record. Peak discharges for selected recurrence intervals were determined at gaging stations by fitting observed data to a log-Pearson Type III distribution with adjustments for a low-discharge threshold and a zero skew coefficient. A low-discharge threshold was applied to the frequency analysis of 140 of the 293 gaging stations. This application provides an improved fit of the log-Pearson Type III frequency distribution. Use of the low-discharge threshold generally eliminated from the probability-density function peak discharges having a recurrence interval of less than 1.4 years. Within each of the nine regions, logarithms of the maximum peak discharges for selected recurrence intervals were related to logarithms of basin and climatic characteristics by using stepwise ordinary least-squares regression techniques for exploratory data analysis. Generalized least-squares regression techniques, an improved regression procedure that accounts for time and spatial sampling errors, then were applied to the same data used in the ordinary least-squares regression analyses. The average standard error of prediction, which includes average sampling error and average standard error of regression, ranged from 38 to 93 percent (mean value 62, median value 59) for the 100-year flood. In the 1996 investigation, the standard error of prediction for the flood regions ranged from 41 to 96 percent (mean value 67, median value 68) for the 100-year flood analyzed by using generalized least-squares regression. Overall, the equations based on generalized least-squares regression techniques are more reliable than those in the 1996 report because of the increased length of record and an improved geographic information system (GIS) method to determine basin and climatic characteristics. Flood-frequency estimates can be made for ungaged sites upstream or downstream from gaging stations by using a method that transfers flood-frequency data at the gaging station to the ungaged site by means of a drainage-area-ratio adjustment equation. The peak discharge for a given recurrence interval at the gaging station, the drainage-area ratio, and the drainage-area exponent from the regional regression equation of the respective region are used to transfer the peak discharge for that recurrence interval to the ungaged site. Maximum observed peak discharge as related to drainage area was determined for New Mexico. Extreme events are commonly used in the design and appraisal of bridge crossings and other structures. Bridge-scour evaluations are commonly made by using the 500-year peak discharge for these appraisals.
Peak-discharge data collected at 293 gaging stations and 367 miscellaneous sites were used to develop a maximum peak-discharge relation as an alternative method of estimating peak discharge of an extreme event such as a maximum probable flood.
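The drainage-area-ratio transfer described above can be written compactly; a sketch with placeholder numbers, where the exponent is the drainage-area exponent from the relevant regional regression equation:

```python
def transfer_peak_discharge(q_gaged, area_gaged, area_ungaged, exponent):
    """Drainage-area-ratio adjustment: Q_ungaged = Q_gaged * (A_ungaged / A_gaged) ** b."""
    return q_gaged * (area_ungaged / area_gaged) ** exponent

# Placeholder values: 100-year peak discharge at the gage (ft^3/s), drainage areas (mi^2),
# and a drainage-area exponent taken from the region's regression equation
q_100_ungaged = transfer_peak_discharge(q_gaged=5200.0, area_gaged=145.0,
                                        area_ungaged=98.0, exponent=0.55)
print(round(q_100_ungaged, 0))
```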
Lesnik, Keaton Larson; Liu, Hong
2017-09-19
The complex interactions that occur in mixed-species bioelectrochemical reactors, like microbial fuel cells (MFCs), make accurate predictions of performance outcomes under untested conditions difficult. While direct correlations between any individual waste-stream characteristic or microbial community structure and reactor performance have not been established, the increase in sequencing data and readily available computational power enables the development of alternative approaches. In the current study, 33 MFCs were evaluated under a range of conditions including eight separate substrates and three different wastewaters. Artificial Neural Networks (ANNs) were used to establish mathematical relationships between wastewater/solution characteristics, biofilm communities, and reactor performance. ANN models that incorporated biotic interactions predicted reactor performance outcomes more accurately than those that did not. The average percent error of power density predictions was 16.01 ± 4.35%, while the average percent errors of Coulombic efficiency and COD removal rate predictions were 1.77 ± 0.57% and 4.07 ± 1.06%, respectively. Predictions of power density improved to within 5.76 ± 3.16 percent error when taxonomic data were classified at the family rather than the class level. Results suggest that the microbial communities and performance of bioelectrochemical systems can be accurately predicted using data-mining, machine-learning techniques.
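A minimal sketch of the general modeling approach described (not the study's network architecture, features, or data): a small feed-forward ANN regressor trained on hypothetical reactor and wastewater descriptors and scored by mean absolute percent error:

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Hypothetical predictors (e.g., COD, conductivity, relative abundances of a few
# taxonomic families) and a hypothetical target (power density, mW/m^2)
X = rng.random((120, 6))
y = 400 + 800 * X[:, 0] + 300 * X[:, 3] - 150 * X[:, 5] + rng.normal(0, 30, 120)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                   random_state=0))
model.fit(X_tr, y_tr)

pred = model.predict(X_te)
mape = 100 * np.mean(np.abs((pred - y_te) / y_te))   # mean absolute percent error
print(round(mape, 2))
```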
Simmons, B R; Chukwumerije, O; Stewart, J T
1997-11-01
13-Cis retinoic acid (Accutane) was extracted from cream, gel, capsule and beadlet dosage forms using supercritical carbon dioxide modified with 5% methanol as the mobile phase. The pump pressure and the extraction chamber and restrictor temperature were experimentally optimized at 325 atm and 45 degrees C, respectively. A 2.5-min static and 5-min dynamic extraction time were used. The supercritical fluid extraction (SFE) eluent was trapped in methanol, injected into the high-performance liquid chromatographic (HPLC) system, and quantitated by ultraviolet detection at 360 nm. Application of the SFE method to spiked placebo dosage forms gave 13-cis retinoic acid recoveries of 98.8, 98.9, 98.8 and 100% for the cream, gel, capsule and beadlet, respectively, with R.S.D.s in the range 0.6-0.9% (n = 4). Inter-day percent error and precision of the extraction were 1.1-2.0 and 0.2-2.4% (n = 3), respectively, and intra-day percent error and precision were 1.0-3.0 and 0.3-2.1% (n = 8), respectively. Percent error and precision data for spiked celite samples in the 0.05-1.0 microgram ml-1 range were 0.59-4.75 and 1.8-2.1% (n = 3), respectively. The extraction method was applied to commercial 13-cis retinoic acid dosage forms and the results compared to unextracted samples. Linear regression analysis of concentration versus peak height gave a correlation coefficient of 0.9991 with a slope of 7.468 and a y-intercept of 0.1923. The percent error and precision data were 1.3-5.3 and 0.2-1.5% (n = 4), respectively. The photoisomers of 13-cis retinoic acid were also extracted with the method, and recoveries of 90.4-92.4% with R.S.D.s of 1.5-3.4% were obtained (n = 4).
McCleskey, R. Blaine; Nordstrom, D. Kirk; Naus, Cheryl A.
2004-01-01
The Questa baseline and pre-mining ground-water quality investigation has the main objective of inferring the ground-water chemistry at an active mine site. Hence, existing ground-water chemistry and its quality assurance and quality control is of crucial importance to this study and a substantial effort was spent on this activity. Analyses of seventy-two blanks demonstrated that contamination from processing, handling, and analyses were minimal. Blanks collected using water deionized with anion and cation exchange resins contained elevated concentrations of boron (0.17 milligrams per liter (mg/L)) and silica (3.90 mg/L), whereas double-distilled water did not. Boron and silica were not completely retained by the resins because they can exist as uncharged species in water. Chloride was detected in ten blanks, the highest being 3.9 mg/L, probably as the result of washing bottles, filter apparatuses, and tubing with hydrochloric acid. Sulfate was detected in seven blanks; the highest value was 3.0 mg/L, most likely because of carryover from the high sulfate waters sampled. With only a few exceptions, the remaining blank analyses were near or below method detection limits. Analyses of standard reference water samples by cold-vapor atomic fluorescence spectrometry, ion chromatography, inductively coupled plasma-optical emission spectrometry, inductively coupled plasma-mass spectrometry, FerroZine, graphite furnace atomic absorption spectrometry, hydride generation atomic spectrometry, and titration provided an accuracy check. For constituents greater than 10 times the detection limit, 95 percent of the samples had a percent error of less than 8.5. For constituents within 10 percent of the detection limit, the percent error often increased as a result of measurement imprecision. Charge imbalance was calculated using WATEQ4F and 251 out of 257 samples had a charge imbalance less than 11.8 percent. The charge imbalance for all samples ranged from -16 to 16 percent. Spike recoveries were performed by spiking ground-water samples from SC2B, SC3A, SC3B, CC2A, and Hottentot with a mixed-element standard and then analyzing them by ICP-OES. The mean recovery for all the constituents by ICP-OES was 103 percent with a standard deviation of 16 percent. Fifteen surface- and ground-water sequential duplicates were collected from Straight Creek, Hottentot, and the Red River from 2002 to 2003. Except for chloride from well SC5B and low concentrations of iron (<0.05 mg/L) and aluminum (<0.01 mg/L), constituents of sequential duplicates are generally within 10 percent of each other. Analytical results from different methods and different laboratories, with rare exceptions, were within 10 percent. Chromium analyses were in poor agreement when comparing analyses from the USGS and a contract laboratory, but USGS analyses by ICP-OES and ICP-MS were usually within 10 percent for chromium concentrations above 0.03 mg/L and analyses by ICP-OES and GFAAS were usually within 15 percent for chromium concentrations as much as 0.1 mg/L.Filtration studies also were performed to study the effects of filtration apparatuses (Minitan, plate, capsule, and syringe), pore sizes, and timing on dissolved metal concentrations. Except for iron and aluminum, constituents with concentrations greater than about 0.05 mg/L were generally not affected by the filtration apparatus, membrane pore-size, and filtration delays. 
Iron, aluminum, and some dissolved metals concentrations less than about 0.05 mg/L, especially copper, were generally lowest in filtrates from the tangential flow Minitan system containing a filter membrane with a pore size of 10,000 Daltons. As part of a filtration timing study, grab samples were collected from two sites along the Red River and were processed immediately and then again 1 to 3 hours later. Aluminum and iron colloids formed during the delay in the sample collected at the USGS gaging station and, after the delay, 0.1-µm filtrate aluminum and iron concentrations approached the ultrafiltrate (Minitan) concentrations. In the upstream site below Fawn Lakes, aluminum in the 0.1-µm filtrate decreased but did not decrease in the 0.45-µm filtrate, signifying that the colloids formed during the delay are between 0.1 and 0.45 µm. Dissolved nickel and pH also decreased in both samples during the delay. Except for ferrous iron and barium, a sequential filtration study demonstrated that water collected from the Red River at the gage did not affect dissolved metal concentrations with increasing sample volume passing through a plate filter with 0.45- or 0.1-µm membranes. Barium and ferrous iron both slightly decreased in the filtrate from the 0.45-µm filter.
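The charge-imbalance screening mentioned above (computed in the study with WATEQ4F) amounts to comparing total cation and anion charge in milliequivalents per liter; a generic sketch using one common convention and assumed concentrations:

```python
# Charge-balance error (percent) from major-ion analyses.
# Concentrations in mg/L; charge and molar mass convert to milliequivalents per liter.
ions = {                     # (mg/L, charge, g/mol) -- assumed example water
    "Ca":   (42.0, +2, 40.08),
    "Mg":   (10.5, +2, 24.31),
    "Na":   (15.0, +1, 22.99),
    "K":    ( 2.1, +1, 39.10),
    "HCO3": (130.0, -1, 61.02),
    "SO4":  (48.0, -2, 96.06),
    "Cl":   ( 9.5, -1, 35.45),
}

meq = {name: conc / mass * abs(z) for name, (conc, z, mass) in ions.items()}
cations = sum(meq[n] for n, (_, z, _) in ions.items() if z > 0)
anions = sum(meq[n] for n, (_, z, _) in ions.items() if z < 0)
cbe = 100.0 * (cations - anions) / (cations + anions)   # percent charge imbalance
print(round(cbe, 1))
```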
Evaluation of a voice recognition system for the MOTAS pseudo pilot station function
NASA Technical Reports Server (NTRS)
Houck, J. A.
1982-01-01
The Langley Research Center has undertaken a technology development activity to provide a capability, the mission oriented terminal area simulation (MOTAS), wherein terminal area and aircraft systems studies can be performed. An experiment was conducted to evaluate state-of-the-art voice recognition technology and specifically, the Threshold 600 voice recognition system to serve as an aircraft control input device for the MOTAS pseudo pilot station function. The results of the experiment using ten subjects showed a recognition error of 3.67 percent for a 48-word vocabulary tested against a programmed vocabulary of 103 words. After the ten subjects retrained the Threshold 600 system for the words which were misrecognized or rejected, the recognition error decreased to 1.96 percent. The rejection rates for both cases were less than 0.70 percent. Based on the results of the experiment, voice recognition technology and specifically the Threshold 600 voice recognition system were chosen to fulfill this MOTAS function.
Estimating the Magnitude and Frequency of Floods in Small Urban Streams in South Carolina, 2001
Feaster, Toby D.; Guimaraes, Wladimir B.
2004-01-01
The magnitude and frequency of floods at 20 streamflow-gaging stations on small, unregulated urban streams in or near South Carolina were estimated by fitting the measured water-year peak flows to a log-Pearson Type-III distribution. The period of record (through September 30, 2001) for the measured water-year peak flows ranged from 11 to 25 years with a mean and median length of 16 years. The drainage areas of the streamflow-gaging stations ranged from 0.18 to 41 square miles. Based on the flood-frequency estimates from the 20 streamflow-gaging stations (13 in South Carolina; 4 in North Carolina; and 3 in Georgia), generalized least-squares regression was used to develop regional regression equations. These equations can be used to estimate the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows for small urban streams in the Piedmont, upper Coastal Plain, and lower Coastal Plain physiographic provinces of South Carolina. The most significant explanatory variables from this analysis were main-channel length, percent impervious area, and basin development factor. Mean standard errors of prediction for the regression equations ranged from -25 to 33 percent for the 10-year recurrence-interval flows and from -35 to 54 percent for the 100-year recurrence-interval flows. The U.S. Geological Survey has developed a Geographic Information System application called StreamStats that makes the process of computing streamflow statistics at ungaged sites faster and more consistent than manual methods. This application was developed in the Massachusetts District and ongoing work is being done in other districts to develop a similar application using streamflow statistics relative to those respective States. Considering the future possibility of implementing StreamStats in South Carolina, an alternative set of regional regression equations was developed using only main-channel length and impervious area. This was done because no digital coverages are currently available for basin development factor and, therefore, it could not be included in the StreamStats application. The average mean standard error of prediction for the alternative equations was 2 to 5 percent larger than the standard errors for the equations that contained basin development factor. For the urban streamflow-gaging stations in South Carolina, measured water-year peak flows were compared with those from an earlier urban flood-frequency investigation. The peak flows from the earlier investigation were computed using a rainfall-runoff model. At many of the sites, graphical comparisons indicated that the variance of the measured data was much less than the variance of the simulated data. Several statistical tests were applied to compare the variances and the means of the measured and simulated data for each site. The results indicated that the variances were significantly different for 11 of the 13 South Carolina streamflow-gaging stations. For one streamflow-gaging station, the test for normality, which is one of the assumptions of the data when comparing variances, indicated that neither the measured data nor the simulated data were distributed normally; therefore, the test for differences in the variances was not used for that streamflow-gaging station. Another statistical test was used to test for statistically significant differences in the means of the measured and simulated data. 
The results indicated that for 5 of the 13 urban streamflow-gaging stations in South Carolina there was a statistically significant difference in the means of the two data sets. For comparison purposes, and to test the hypothesis that there may have been climatic differences between the period in which the measured peak-flow data were collected and the period for which historic rainfall data were used to compute the simulated peak flows, 16 rural streamflow-gaging stations with long-term records were reviewed using similar techniques as those used for the measured an
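As an illustration of the distribution-fitting step described above, the following minimal Python sketch fits a log-Pearson Type III distribution to a hypothetical record of annual peak flows by the method of moments on the log-transformed peaks. The peak-flow values are invented, and the sketch omits the low-outlier screening, historical-peak adjustments, and regional skew weighting that a full Bulletin 17B analysis would include.

```python
import numpy as np
from scipy import stats

def lp3_quantiles(peaks_cfs, recurrence_intervals=(2, 5, 10, 25, 50, 100, 200, 500)):
    """Fit a log-Pearson Type III distribution to annual peak flows by the
    method of moments on log10-transformed peaks and return T-year flood
    estimates.  Simplified sketch: no low-outlier screening, historical-peak
    adjustment, or regional skew weighting."""
    logs = np.log10(np.asarray(peaks_cfs, dtype=float))
    mean, std = logs.mean(), logs.std(ddof=1)
    skew = stats.skew(logs, bias=False)           # station skew only
    quantiles = {}
    for T in recurrence_intervals:
        p = 1.0 - 1.0 / T                         # non-exceedance probability
        k = stats.pearson3.ppf(p, skew)           # frequency factor K(T, skew)
        quantiles[T] = 10 ** (mean + k * std)
    return quantiles

# Example with a short synthetic record of annual peaks (cfs)
peaks = [820, 1450, 610, 990, 2300, 1750, 1320, 880, 1980, 1100, 760, 1540]
print(lp3_quantiles(peaks))
```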
Topping, David J.; Rubin, David M.; Wright, Scott A.; Melis, Theodore S.
2011-01-01
Several common methods for measuring suspended-sediment concentration in rivers in the United States use depth-integrating samplers to collect a velocity-weighted suspended-sediment sample in a subsample of a river cross section. Because depth-integrating samplers are always moving through the water column as they collect a sample, and can collect only a limited volume of water and suspended sediment, they collect only minimally time-averaged data. Four sources of error exist in the field use of these samplers: (1) bed contamination, (2) pressure-driven inrush, (3) inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration, and (4) inadequate time averaging. The first two of these errors arise from misuse of suspended-sediment samplers, and the third has been the subject of previous study using data collected in the sand-bedded Middle Loup River in Nebraska. Of these four sources of error, the least understood source of error arises from the fact that depth-integrating samplers collect only minimally time-averaged data. To evaluate this fourth source of error, we collected suspended-sediment data between 1995 and 2007 at four sites on the Colorado River in Utah and Arizona, using a P-61 suspended-sediment sampler deployed in both point- and one-way depth-integrating modes, and D-96-A1 and D-77 bag-type depth-integrating suspended-sediment samplers. These data indicate that the minimal duration of time averaging during standard field operation of depth-integrating samplers leads to an error that is comparable in magnitude to that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. This random error arising from inadequate time averaging is positively correlated with grain size and does not largely depend on flow conditions or, for a given size class of suspended sediment, on elevation above the bed. Averaging over time scales >1 minute is the likely minimum duration required to result in substantial decreases in this error. During standard two-way depth integration, a depth-integrating suspended-sediment sampler collects a sample of the water-sediment mixture during two transits at each vertical in a cross section: one transit while moving from the water surface to the bed, and another transit while moving from the bed to the water surface. As the number of transits is doubled at an individual vertical, this error is reduced by ~30 percent in each size class of suspended sediment. For a given size class of suspended sediment, the error arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration depends only on the number of verticals collected, whereas the error arising from inadequate time averaging depends on both the number of verticals collected and the number of transits collected at each vertical. Summing these two errors in quadrature yields a total uncertainty in an equal-discharge-increment (EDI) or equal-width-increment (EWI) measurement of the time-averaged velocity-weighted suspended-sediment concentration in a river cross section (exclusive of any laboratory-processing errors). By virtue of how the number of verticals and transits influences the two individual errors within this total uncertainty, the error arising from inadequate time averaging slightly dominates that arising from inadequate sampling of the cross-stream spatial structure in suspended-sediment concentration. 
Adding verticals to an EDI or EWI measurement is slightly more effective in reducing the total uncertainty than adding transits only at each vertical, because a new vertical contributes both temporal and spatial information. However, because collection of depth-integrated samples at more transits at each vertical is generally easier and faster than at more verticals, addition of a combination of verticals and transits is likely a more practical approach to reducing the total uncertainty in most field situations.
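A minimal sketch of the quadrature combination described above, with hypothetical single-vertical and single-transit relative errors. The 1/sqrt(n) scaling applied to added verticals and transits is an illustrative assumption (it is close to the ~30 percent reduction per doubling of transits noted above), not the report's fitted relations.

```python
import math

def edi_total_uncertainty(spatial_err_1v, temporal_err_1v1t, n_verticals, n_transits):
    """Combine the cross-stream (spatial) and time-averaging (temporal) sampling
    errors of an EDI/EWI suspended-sediment measurement in quadrature.
    spatial_err_1v    : relative error for a single vertical (spatial structure)
    temporal_err_1v1t : relative error for one transit at one vertical (time averaging)
    Assumes both errors are random and shrink as 1/sqrt(number of samples)."""
    spatial = spatial_err_1v / math.sqrt(n_verticals)
    temporal = temporal_err_1v1t / math.sqrt(n_verticals * n_transits)
    return math.sqrt(spatial**2 + temporal**2)

# e.g. 5 verticals, 2 transits each, 20 % and 30 % single-sample errors
print(round(edi_total_uncertainty(0.20, 0.30, 5, 2), 3))
```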
Estimating the magnitude and frequency of floods for streams in west-central Florida, 2001
Hammett, Kathleen M.; DelCharco, Michael J.
2005-01-01
Flood discharges were estimated for recurrence intervals of 2, 5, 10, 25, 50, 100, 200, and 500 years for 94 streamflow stations in west-central Florida. Most of the stations are located within the 10,000 square-mile, 16-county area that forms the Southwest Florida Water Management District. All stations had at least 10 years of homogeneous record, and none have flood discharges that are significantly affected by regulation or urbanization. Guidelines established by the U.S. Water Resources Council in Bulletin 17B were used to estimate flood discharges from gaging station records. Multiple linear regression analysis was then used to mathematically relate estimates of flood discharge for selected recurrence intervals to explanatory basin characteristics. Contributing drainage area, channel slope, and the percent of total drainage area covered by lakes (percent lake area) were the basin characteristics that provided the best regression estimates. The study area was subdivided into four geographic regions to further refine the regression equations. Region 1 at the northern end of the study area includes large rivers that are characteristic of the rolling karst terrain of northern Florida. Only a small part of Region 1 lies within the boundaries of the Southwest Florida Water Management District. Contributing drainage area and percent lake area were the most statistically significant basin characteristics in Region 1; the prediction error of the regression equations varied with the recurrence interval and ranged from 57 to 69 percent. In the three other regions of the study area, contributing drainage area, channel slope, and percent lake area were the most statistically significant basin characteristics, and are the three characteristics that can be used to best estimate the magnitude and frequency of floods on most streams within the Southwest Florida Water Management District. The Withlacoochee River Basin dominates Region 2; the prediction error of the regression models in the region ranged from 65 to 68 percent. The basins that drain into the northern part of Tampa Bay and the upper reaches of the Peace River Basin are in Region 3, which had prediction errors ranging from 54 to 74 percent. Region 4, at the southern end of the study area, had prediction errors that ranged from 40 to 56 percent. Estimates of flood discharge become more accurate as longer periods of record are used for analyses; results of this study should be used in lieu of results from earlier U.S. Geological Survey studies of flood magnitude and frequency in west-central Florida. A comparison of current results with earlier studies indicates that use of a longer period of record with additional high-water events produces substantially higher flood-discharge estimates for many gaging stations. Another comparison indicates that the use of a computed, generalized skew in a previous study in 1979 tended to overestimate flood discharges.
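The regression step described above can be sketched as an ordinary log-log multiple linear regression of a T-year discharge on contributing drainage area, channel slope, and percent lake area. The station values below are invented, the log10(lake area + 1) transform is an assumption to handle basins with no lakes, and the log-to-percent conversion of the standard error follows a commonly used approximation rather than anything specific to this report.

```python
import numpy as np

# Hypothetical station data: Q100 (cfs), drainage area (mi^2),
# channel slope (ft/mi), percent lake area.
q100  = np.array([1200., 5400., 860., 2300., 9800., 430., 3100., 1500.])
area  = np.array([ 14.,  120.,   9.,   38.,  260.,   5.,   75.,   22.])
slope = np.array([ 12.,    4.,  18.,    7.,    2.5, 25.,    5.,    9.])
lake  = np.array([  1.,    6.,   0.,    3.,   10.,   0.,    4.,    2.])

# Fit log10(Q) = b0 + b1*log10(A) + b2*log10(S) + b3*log10(L + 1)
X = np.column_stack([np.ones_like(area),
                     np.log10(area),
                     np.log10(slope),
                     np.log10(lake + 1.0)])
y = np.log10(q100)
coef, residuals, rank, _ = np.linalg.lstsq(X, y, rcond=None)

# Standard error of estimate in log10 units, then as an approximate percent
n, k = X.shape
see_log = np.sqrt(np.sum((y - X @ coef) ** 2) / (n - k))
see_pct = 100.0 * np.sqrt(np.exp((np.log(10) * see_log) ** 2) - 1.0)
print(coef, round(see_pct, 1))
```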
All-Sky Spectrally Matched UBVRI–ZY and u′g′r′i′z′ Magnitudes for Stars in the Tycho2 Catalog
NASA Astrophysics Data System (ADS)
Pickles, A.; Depagne, É.
2010-12-01
We present fitted UBVRI–ZY and u′g′r′i′z′ magnitudes, spectral types, and distances for 2.4 million stars, derived from synthetic photometry of a library spectrum that best matches the Tycho2 BT VT, NOMAD RN, and 2MASS JHKs catalog magnitudes. We present similarly synthesized multifilter magnitudes, types, and distances for 4.8 million stars with 2MASS and SDSS photometry to g < 16 within the Sloan survey region, for Landolt and Sloan primary standards, and for Sloan northern (photometric telescope) and southern secondary standards. The synthetic magnitude zero points for BT VT, UBVRI, ZV YV, JHKs, JHK(MKO), Stromgren uvby, Sloan u′g′r′i′z′, and ugriz are calibrated on 20 CALSPEC spectrophotometric standards. The UBVRI and ugriz zero points have dispersions of 1-3% for standards covering a range of color from -0.3 < V - I < 4.6; those for other filters are in the range of 2-5%. The spectrally matched fits to Tycho2 stars provide estimated 1σ errors per star of ~0.2, 0.15, 0.12, 0.10, and 0.08 mag, respectively, in either UBVRI or u′g′r′i′z′; those for at least 70% of the SDSS survey region to g < 16 have estimated 1σ errors per star of ~0.2, 0.06, 0.04, 0.04, and 0.05 in u′g′r′i′z′ or UBVRI. The density of Tycho2 stars, averaging about 60 stars per square degree, provides sufficient stars to enable automatic flux calibrations for most digital images with fields of view of 0.5° or more. Using several such standards per field, automatic flux calibration can be achieved to a few percent in any filter, at any air mass, in most workable observing conditions, to facilitate intercomparison of data from different sites, telescopes, and instruments.
Telemetry location error in a forested habitat
Chu, D.S.; Hoover, B.A.; Fuller, M.R.; Geissler, P.H.; Amlaner, Charles J.
1989-01-01
The error associated with locations estimated by radio-telemetry triangulation can be large and variable in a hardwood forest. We assessed the magnitude and cause of telemetry location errors in a mature hardwood forest by using a 4-element Yagi antenna and compass bearings toward four transmitters, from 21 receiving sites. The distance error from the azimuth intersection to known transmitter locations ranged from 0 to 9251 meters. Ninety-five percent of the estimated locations were within 16 to 1963 meters, and 50% were within 99 to 416 meters of actual locations. Angles within 20° of parallel had larger distance errors than other angles. While angle appeared most important, greater distances and the amount of vegetation between receivers and transmitters also contributed to distance error.
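For readers unfamiliar with the triangulation step, the following hedged sketch computes an azimuth-intersection location estimate from two compass bearings on a local planar grid. The site coordinates and bearings are hypothetical, and the near-parallel check reflects the study's finding that bearings within about 20 degrees of parallel give the largest distance errors.

```python
import math
import numpy as np

def bearing_intersection(site1, az1_deg, site2, az2_deg):
    """Estimate a transmitter location from two azimuth bearings (degrees
    clockwise from north) taken at two receiver sites on a local planar
    x (east) / y (north) grid.  Returns None for near-parallel bearings,
    which produce the largest distance errors."""
    d1 = np.array([math.sin(math.radians(az1_deg)), math.cos(math.radians(az1_deg))])
    d2 = np.array([math.sin(math.radians(az2_deg)), math.cos(math.radians(az2_deg))])
    A = np.column_stack([d1, -d2])
    b = np.asarray(site2, float) - np.asarray(site1, float)
    if abs(np.linalg.det(A)) < 1e-6:              # bearings nearly parallel
        return None
    t, _ = np.linalg.solve(A, b)
    return np.asarray(site1, float) + t * d1

# Receivers 500 m apart; transmitter actually at (300, 400)
print(bearing_intersection((0, 0), 36.87, (500, 0), 360 - 26.57))
```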
ERIC Educational Resources Information Center
Lord, Frederic M.; Stocking, Martha
A general computer program is described that will compute asymptotic standard errors and carry out significance tests for an endless variety of (standard and) nonstandard large-sample statistical problems, without requiring the statistician to derive asymptotic standard error formulas. The program assumes that the observations have a multinormal…
7 CFR 801.6 - Tolerances for moisture meters.
Code of Federal Regulations, 2010 CFR
2010-01-01
... moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat Mid ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat High ±0.05 percent moisture, mean deviation from National standard moisture meter using Hard Red Winter wheat...
Martin, Jeffrey D.
2002-01-01
Correlation analysis indicates that for most pesticides and concentrations, pooled estimates of relative standard deviation rather than pooled estimates of standard deviation should be used to estimate variability because pooled estimates of relative standard deviation are less affected by heteroscedasticity. The median pooled relative standard deviation was calculated for all pesticides to summarize the typical variability for pesticide data collected for the NAWQA Program. The median pooled relative standard deviation was 15 percent at concentrations less than 0.01 micrograms per liter (µg/L), 13 percent at concentrations near 0.01 µg/L, 12 percent at concentrations near 0.1 µg/L, 7.9 percent at concentrations near 1 µg/L, and 2.7 percent at concentrations greater than 5 µg/L. Pooled estimates of standard deviation or relative standard deviation presented in this report are larger than estimates based on averages, medians, smooths, or regression of the individual measurements of standard deviation or relative standard deviation from field replicates. Pooled estimates, however, are the preferred method for characterizing variability because they provide unbiased estimates of the variability of the population. Assessments of variability based on standard deviation (rather than variance) underestimate the true variability of the population. Because pooled estimates of variability are larger than estimates based on other approaches, users of estimates of variability must be cognizant of the approach used to obtain the estimate and must use caution in the comparison of estimates based on different approaches.
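A minimal sketch of pooling relative standard deviations across field-replicate sets, assuming each set is a short list of concentrations from splits of one sample. The report bins replicates by concentration before pooling, whereas this sketch returns a single pooled value.

```python
import numpy as np

def pooled_relative_std(replicate_sets):
    """Pool the relative standard deviation (RSD) across replicate sets.
    Each set is a list of concentrations from splits of one environmental
    sample.  Squared RSDs are pooled weighted by their degrees of freedom,
    which keeps the estimate unbiased for the population variability."""
    num, den = 0.0, 0
    for reps in replicate_sets:
        reps = np.asarray(reps, dtype=float)
        if len(reps) < 2 or reps.mean() == 0:
            continue
        rsd = reps.std(ddof=1) / reps.mean()
        dof = len(reps) - 1
        num += dof * rsd**2
        den += dof
    return np.sqrt(num / den) if den else float("nan")

# Hypothetical replicate pairs (micrograms per liter)
sets = [(0.011, 0.013), (0.095, 0.102), (1.04, 0.98), (5.6, 5.5)]
print(round(100 * pooled_relative_std(sets), 1), "percent")
```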
Ozone measurement system for NASA global air sampling program
NASA Technical Reports Server (NTRS)
Tiefermann, M. W.
1979-01-01
The ozone measurement system used in the NASA Global Air Sampling Program is described. The system uses a commercially available ozone concentration monitor that was modified and repackaged so as to operate unattended in an aircraft environment. The modifications required for aircraft use are described along with the calibration techniques, the measurement of ozone loss in the sample lines, and the operating procedures that were developed for use in the program. Based on calibrations with JPL's 5-meter ultraviolet photometer, all previously published GASP ozone data are biased high by 9 percent. A system error analysis showed that the total system measurement random error is from 3 to 8 percent of reading (depending on the pump diaphragm material) or 3 ppbv, whichever is greater.
Mathematical morphology for automated analysis of remotely sensed objects in radar images
NASA Technical Reports Server (NTRS)
Daida, Jason M.; Vesecky, John F.
1991-01-01
A symbiosis of pyramidal segmentation and morphological transformation is described. The pyramidal segmentation portion of the symbiosis has resulted in a low (2.6 percent) misclassification error rate for a one-look simulation. Other simulations indicate lower error rates (1.8 percent for a four-look image). The morphological transformation portion has resulted in meaningful partitions with a minimal loss of fractal boundary information. An unpublished version of Thicken, suitable for watershed transformations of fractal objects, is also presented. It is demonstrated that the proposed symbiosis works with SAR (synthetic aperture radar) images: in this case, a four-look Seasat image of sea ice. It is concluded that the symbiotic forms of both segmentation and morphological transformation seem well suited for unsupervised geophysical analysis.
SAGE 2-Umkehr case study of ozone differences and aerosol effects from October 1984 to April 1989
NASA Technical Reports Server (NTRS)
Newchurch, M. J.; Cunnold, D. M.
1994-01-01
A comparison of 1262 cases of coincident ozone profiles derived from 666 Umkehrs at 17 different stations and 901 SAGE 2 profiles within 1000 km and 12 hours between October 1984 and April 1989 indicates the following layer percentage differences with 2-sigma error bars: layer three 14.6 plus/minus 3.3 percent, layer four 17.6 plus/minus 1.1 percent, layer five -1.3 plus/minus 0.5 percent, layer six -5.7 plus/minus 0.7 percent, layer seven -1.0 plus/minus 0.7 percent, layer eight 4.2 plus/minus 0.7 percent, and layer nine 6.8 plus/minus 1.2 percent. Comparing SAGE 2-Umkehr differences to SAGE 1 version 5.5-Umkehr differences shows SAGE 2 higher than or equal to SAGE 1 relative to Umkehr in all layers except layer three. Adjustment for this bias would produce trends derived from SAGE 2-SAGE 1 differences and Umkehr observations in the 1980s more nearly equal to each other in layers six, seven, and eight. A possible explanation of these differences is a systematic shift in the reference altitude between SAGE 1 and SAGE 2, but there is no independent evidence of this. While the shape of the vertical profile of differences at 17 individual Umkehr stations (mostly in mid-latitudes) is generally consistent at all stations except at Poker Flat, Seoul, and Lauder, significant variation does exist among the stations. The profile of mean difference is similar to previously observed differences between Umkehr and both SAGE 2 and SBUV and also to an eigenvector analysis, but with site-dependent amplitude discrepancies. Because of the close correspondence of stratospheric aerosol optical depth at the SAGE 2-measured 0.525 micron wavelength and the extrapolated 0.32 micron Umkehr wavelength determined in this study, we use the 0.525 micron data to determine the aerosol effect on Umkehr profiles. The aerosol errors in the Umkehr ozone amounts, in percent ozone amount per 0.01 stratospheric aerosol optical depth, range from plus 2 percent in layer six to minus 3 percent in layer nine. These results agree with previous theoretical and empirical studies within their respective error bounds in layers nine, eight, and five. The result in layer six differs significantly from previous works. In view of the fact that SAGE 2 and Umkehr produce different ozone retrievals in layers eight and nine, and because the intra-layer correlation of SAGE 2 ozone and aerosol in layers eight and nine is non-zero, one must exercise some caution in attributing the entire SAGE 2-Umkehr differences in the upper layers to an aerosol effect.
Rainfall-Runoff and Water-Balance Models for Management of the Fena Valley Reservoir, Guam
Yeung, Chiu W.
2005-01-01
The U.S. Geological Survey's Precipitation-Runoff Modeling System (PRMS) and a generalized water-balance model were calibrated and verified for use in estimating future availability of water in the Fena Valley Reservoir in response to various combinations of water withdrawal rates and rainfall conditions. Application of PRMS provides a physically based method for estimating runoff from the Fena Valley Watershed during the annual dry season, which extends from January through May. Runoff estimates from the PRMS are used as input to the water-balance model to estimate change in water levels and storage in the reservoir. A previously published model was calibrated for the Maulap and Imong River watersheds using rainfall data collected outside of the watershed. That model was applied to the Almagosa River watershed by transferring calibrated parameters and coefficients because information on daily diversions at the Almagosa Springs upstream of the gaging station was not available at the time. Runoff from the ungaged land area was not modeled. For this study, the availability of Almagosa Springs diversion data allowed the calibration of PRMS for the Almagosa River watershed. Rainfall data collected at the Almagosa rain gage since 1992 also provided better estimates of rainfall distribution in the watershed. In addition, the discontinuation of pan-evaporation data collection in 1998 required a change in the evapotranspiration estimation method used in the PRMS model. These reasons prompted the update of the PRMS for the Fena Valley Watershed. Simulated runoff volume from the PRMS compared reasonably with measured values for gaging stations on Maulap, Almagosa, and Imong Rivers, tributaries to the Fena Valley Reservoir. On the basis of monthly runoff simulation for the dry seasons included in the entire simulation period (1992-2001), the total volume of runoff can be predicted within -3.66 percent at Maulap River, within 5.37 percent at Almagosa River, and within 10.74 percent at Imong River. Month-end reservoir volumes simulated by the reservoir water-balance model for both calibration and verification periods compared closely with measured reservoir volumes. Errors for the calibration periods ranged from 4.51 percent [208.7 acre-feet (acre-ft) or 68.0 million gallons (Mgal)] to -5.90 percent (-317.8 acre-ft or -103.6 Mgal). For the verification periods, errors ranged from 1.69 percent (103.5 acre-ft or 33.7 Mgal) to -4.60 percent (-178.7 acre-ft or -58.2 Mgal). Monthly simulation bias ranged from -0.19 percent for the calibration period to -0.98 percent for the verification period; relative error ranged from -0.37 to -1.12 percent, respectively. Relatively small bias indicated that the model did not consistently overestimate or underestimate reservoir volume.
Optical surface pressure measurements: Accuracy and application field evaluation
NASA Astrophysics Data System (ADS)
Bukov, A.; Mosharov, V.; Orlov, A.; Pesetsky, V.; Radchenko, V.; Phonov, S.; Matyash, S.; Kuzmin, M.; Sadovskii, N.
1994-07-01
Optical pressure measurement (OPM) is a new pressure measurement method being rapidly developed at several aerodynamic research centers: TsAGI (Russia), Boeing, NASA, McDonnell Douglas (all USA), and DLR (Germany). The current level of development of the OPM method allows its use as a standard experimental method for aerodynamic investigations in certain application fields. These applications are determined mainly by the method's accuracy. The accuracy of the OPM method is determined by errors of the following three groups: (1) errors of the luminescent pressure sensor (LPS) itself, such as uncompensated temperature influence, photodegradation, temperature and pressure hysteresis, variation of the LPS parameters from point to point on the model surface, etc.; (2) errors of the measurement system, such as noise of the photodetector, nonlinearity and nonuniformity of the photodetector, time and temperature offsets, etc.; and (3) methodological errors, owing to displacement and deformation of the model in an airflow, contamination of the model surface, scattering of the excitation and luminescent light from the model surface and test-section walls, etc. The OPM method currently yields a total error in measured pressure of no less than about 1 percent. This accuracy is enough to visualize the pressure field and allows determining total and distributed aerodynamic loads and solving some problems of local aerodynamic investigation at transonic and supersonic velocities. OPM is less effective at low subsonic velocities (M less than 0.4) and for precise measurements, for example, airfoil optimization. Current limitations of the OPM method are discussed using the example of surface pressure measurements and calculations of the integral loads on the wings of a canard-aircraft model. The pressure measurement system and data reduction methods used in these tests are also described.
a Climatology of Global Precipitation.
NASA Astrophysics Data System (ADS)
Legates, David Russell
A global climatology of mean monthly precipitation has been developed using traditional land-based gage measurements as well as derived oceanic data. These data have been screened for coding errors, and redundant entries have been removed. Oceanic precipitation estimates are most often extrapolated from coastal and island observations because few gage estimates of oceanic precipitation exist. One such procedure, developed by Dorman and Bourke and used here, employs a derived relationship between observed rainfall totals and the "current weather" at coastal stations. The combined data base contains 24,635 independent terrestrial station records and 2,223 oceanic grid-point records. Raingage catches are known to underestimate actual precipitation. Errors in the gage catch result from wind-field deformation, wetting losses, and evaporation from the gage and can amount to nearly 8, 2, and 1 percent of the global catch, respectively. A procedure has been developed to correct many of these errors and has been used to adjust the gage estimates of global precipitation. Space-time variations in gage type, air temperature, wind speed, and natural vegetation were incorporated into the correction procedure. Corrected data were then interpolated to the nodes of a 0.5° of latitude by 0.5° of longitude lattice using a spherically based interpolation algorithm. Interpolation errors are largest in areas of low station density, rugged topography, and heavy precipitation. Interpolated estimates also were compared with a digital filtering technique to assess the aliasing of high-frequency "noise" into the lower frequency signals. Isohyetal maps displaying the mean annual, seasonal, and monthly precipitation are presented. Gage corrections and the standard error of the corrected estimates also are mapped. Results indicate that mean annual global precipitation is 1,123 mm, with 1,251 mm falling over the oceans and 820 mm over land. Spatial distributions of monthly precipitation generally are consistent with existing precipitation climatologies.
Eisenberg, Dan T A; Kuzawa, Christopher W; Hayes, M Geoffrey
2015-01-01
Telomere length (TL) is commonly measured using quantitative PCR (qPCR). Although easier than the Southern blot of terminal restriction fragments (TRF) TL measurement method, one drawback of qPCR is that it introduces greater measurement error and thus reduces the statistical power of analyses. To address a potential source of measurement error, we consider the effect of well position on qPCR TL measurements. qPCR TL data from 3,638 people run on a Bio-Rad iCycler iQ are reanalyzed here. To evaluate measurement validity, correspondence with TRF, with age, and between mother and offspring is examined. First, we present evidence for systematic variation in qPCR TL measurements in relation to thermocycler well position. Controlling for these well-position effects consistently improves measurement validity and yields estimated improvements in statistical power equivalent to increasing sample sizes by 16%. We additionally evaluated the linearity of the relationships between telomere and single-copy-gene control amplicons and between qPCR and TRF measures. We find that, unlike some previous reports, our data exhibit linear relationships. We introduce the standard error in percent, a superior method for quantifying measurement error as compared to the commonly used coefficient of variation. Using this measure, we find that excluding samples with high measurement error does not improve measurement validity in our study. Future studies using block-based thermocyclers should consider well-position effects. Since additional information can be gleaned from well-position corrections, rerunning analyses of previous results with well-position correction could serve as an independent test of the validity of these results. © 2015 Wiley Periodicals, Inc.
46 CFR 42.20-8 - Flooding standard: Type “B” vessel, 100 percent reduction.
Code of Federal Regulations, 2013 CFR
2013-10-01
... 46 Shipping 2 2013-10-01 2013-10-01 false Flooding standard: Type “B” vessel, 100 percent... LINES DOMESTIC AND FOREIGN VOYAGES BY SEA Freeboards § 42.20-8 Flooding standard: Type “B” vessel, 100...-11 as applied to the following flooding standards: (1) If the vessel is 225 meters (738 feet) or less...
46 CFR 42.20-8 - Flooding standard: Type “B” vessel, 100 percent reduction.
Code of Federal Regulations, 2014 CFR
2014-10-01
... 46 Shipping 2 2014-10-01 2014-10-01 false Flooding standard: Type “B” vessel, 100 percent... LINES DOMESTIC AND FOREIGN VOYAGES BY SEA Freeboards § 42.20-8 Flooding standard: Type “B” vessel, 100...-11 as applied to the following flooding standards: (1) If the vessel is 225 meters (738 feet) or less...
46 CFR 42.20-8 - Flooding standard: Type “B” vessel, 100 percent reduction.
Code of Federal Regulations, 2012 CFR
2012-10-01
... 46 Shipping 2 2012-10-01 2012-10-01 false Flooding standard: Type “B” vessel, 100 percent... LINES DOMESTIC AND FOREIGN VOYAGES BY SEA Freeboards § 42.20-8 Flooding standard: Type “B” vessel, 100...-11 as applied to the following flooding standards: (1) If the vessel is 225 meters (738 feet) or less...
46 CFR 42.20-8 - Flooding standard: Type “B” vessel, 100 percent reduction.
Code of Federal Regulations, 2011 CFR
2011-10-01
... 46 Shipping 2 2011-10-01 2011-10-01 false Flooding standard: Type “B” vessel, 100 percent...-11 as applied to the following flooding standards: (1) If the vessel is 225 meters (738 feet) or less... standard of paragraph (a)(1) of this section must be applied, treating the machinery space, taken alone, as...
46 CFR 42.20-8 - Flooding standard: Type “B” vessel, 100 percent reduction.
Code of Federal Regulations, 2010 CFR
2010-10-01
... 46 Shipping 2 2010-10-01 2010-10-01 false Flooding standard: Type “B” vessel, 100 percent...-11 as applied to the following flooding standards: (1) If the vessel is 225 meters (738 feet) or less... standard of paragraph (a)(1) of this section must be applied, treating the machinery space, taken alone, as...
Wiley, Jeffrey B.; Curran, Janet H.
2003-01-01
Methods for estimating daily mean flow-duration statistics for seven regions in Alaska and low-flow frequencies for one region, southeastern Alaska, were developed from daily mean discharges for streamflow-gaging stations in Alaska and conterminous basins in Canada. The 15-, 10-, 9-, 8-, 7-, 6-, 5-, 4-, 3-, 2-, and 1-percent duration flows were computed for the October-through-September water year for 222 stations in Alaska and conterminous basins in Canada. The 98-, 95-, 90-, 85-, 80-, 70-, 60-, and 50-percent duration flows were computed for the individual months of July, August, and September for 226 stations in Alaska and conterminous basins in Canada. The 98-, 95-, 90-, 85-, 80-, 70-, 60-, and 50-percent duration flows were computed for the season July-through-September for 65 stations in southeastern Alaska. The 7-day, 10-year and 7-day, 2-year low-flow frequencies for the season July-through-September were computed for 65 stations for most of southeastern Alaska. Low-flow analyses were limited to particular months or seasons in order to omit winter low flows, when ice effects reduce the quality of the records and validity of statistical assumptions. Regression equations for estimating the selected high-flow and low-flow statistics for the selected months and seasons for ungaged sites were developed from an ordinary-least-squares regression model using basin characteristics as independent variables. Drainage area and precipitation were significant explanatory variables for high flows, and drainage area, precipitation, mean basin elevation, and area of glaciers were significant explanatory variables for low flows. The estimating equations can be used at ungaged sites in Alaska and conterminous basins in Canada where streamflow regulation, streamflow diversion, urbanization, and natural damming and releasing of water do not affect the streamflow data for the given month or season. Standard errors of estimate ranged from 15 to 56 percent for high-duration flow statistics, 25 to greater than 500 percent for monthly low-duration flow statistics, 32 to 66 percent for seasonal low-duration flow statistics, and 53 to 64 percent for low-flow frequency statistics.
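A minimal sketch of the percent-exceedance (flow-duration) computation underlying the statistics listed above, using an empirical percentile of a daily mean discharge series. The synthetic record and the specific percentile convention are assumptions, since agencies differ slightly in how they interpolate duration statistics.

```python
import numpy as np

def duration_flows(daily_q, exceedance_percents=(98, 95, 90, 85, 80, 70, 60, 50)):
    """Return the flow equaled or exceeded P percent of the time for each
    requested exceedance level, from a series of daily mean discharges.
    The P-percent exceedance flow is the (100 - P)th percentile of the data."""
    q = np.asarray(daily_q, dtype=float)
    return {p: float(np.percentile(q, 100 - p)) for p in exceedance_percents}

# Example with a synthetic July-September daily record (cfs)
rng = np.random.default_rng(0)
daily = rng.lognormal(mean=3.0, sigma=0.6, size=92)
print(duration_flows(daily))
```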
A colinear backscattering Mueller matrix microscope for reflection Mueller matrix imaging
NASA Astrophysics Data System (ADS)
Chen, Zhenhua; Yao, Yue; Zhu, Yuanhuan; Ma, Hui
2018-02-01
In a recent attempt, we developed a colinear backscattering Mueller matrix microscope by adding a polarization state generator (PSG) and a polarization state analyzer (PSA) into the illumination and detection optical paths of a commercial metallurgical microscope. It is found that specific efforts have to be made to reduce the artifacts due to the intrinsic residual polarizations of the optical system, particularly the dichroism due to the 45-degree beam splitter. In this paper, we present a new calibration method based on numerical reconstruction of the instrument matrix to remove the artifacts introduced by the beam splitter. Preliminary tests using a mirror as a standard sample show that the maximum Mueller matrix element error of the colinear backscattering Mueller matrix microscope can be reduced to a few percent.
NASA Astrophysics Data System (ADS)
Battista, L.; Scorza, A.; Botta, F.; Sciuto, S. A.
2016-02-01
Published standards for the performance evaluation of pulmonary ventilators are mainly directed to manufacturers rather than to end-users and are often considered inadequate or not comprehensive. In order to contribute to overcoming the problems above, a novel measurement system was proposed and tested with waveforms of mechanical ventilation by means of experimental trials carried out with infant ventilators typically used in neonatal intensive care units: the main quantities of mechanical ventilation in newborns are monitored, i.e., air flow rate, differential pressure, and volume from the infant ventilator are measured by means of two novel fiber-optic sensors (OFSs) developed and characterized by the authors, while temperature and relative humidity of the air mass are obtained by two commercial transducers. The proposed fiber-optic sensors (flow sensor Q-OFS, pressure sensor P-OFS) covered the measurement ranges of air flow and pressure typically encountered in neonatal mechanical ventilation, i.e., the air flow rate Q ranged from 3 l min⁻¹ to 18 l min⁻¹ (inspiratory) and from -3 l min⁻¹ to -18 l min⁻¹ (expiratory), and the differential pressure ΔP ranged from -15 cmH2O to 15 cmH2O. In each experimental trial carried out with different settings of the ventilator, outputs of the OFSs are compared with data from two reference sensors (reference flow sensor RF, reference pressure sensor RP) and the results are found consistent: flow rate Q showed a maximum error between Q-OFS and RF of up to 13 percent, with an output ratio Q_RF/Q_OFS of not more than 1.06 ± 0.09 (least-squares estimation, 95 percent confidence level, R² between 0.9822 and 0.9931). On the other hand, the maximum error between P-OFS and RP on differential pressure ΔP was lower than 10 percent, with an output ratio ΔP_RP/ΔP_OFS between 0.977 ± 0.022 and 1.0 ± 0.8 (least-squares estimation, 95 percent confidence level, R² between 0.9864 and 0.9876). Despite the possible improvements, the results were encouraging and suggest that the proposed measurement system is suitable for performance evaluation of neonatal ventilators and useful for both end-users and manufacturers.
Pedometer accuracy in slow walking older adults.
Martin, Jessica B; Krč, Katarina M; Mitchell, Emily A; Eng, Janice J; Noble, Jeremy W
2012-07-03
The purpose of this study was to determine pedometer accuracy during slow overground walking in older adults (mean age = 63.6 years). A total of 18 participants (6 males, 12 females) wore 5 different brands of pedometers over 3 pre-set cadences that elicited walking speeds between 0.3 and 0.9 m/s and one self-selected cadence over 80 meters of indoor track. Pedometer accuracy decreased with slower walking speeds, with mean percent errors across all devices combined of 56%, 40%, 19%, and 9% at cadences of 50, 66, and 80 steps/min and the self-selected cadence, respectively. Percent error ranged from 45.3% for the Omron HJ105 to 66.9% for the Yamax Digiwalker 200. Due to the high level of error across the slowest cadences of all 5 devices, the use of pedometers to monitor step counts in healthy older adults with slower gait speeds is problematic. Further research is required to develop pedometer mechanisms that accurately measure steps at slower walking speeds.
Wing Shape Sensing from Measured Strain
NASA Technical Reports Server (NTRS)
Pak, Chan-Gi
2015-01-01
A new two-step theory is investigated for predicting the deflection and slope of an entire structure using strain measurements at discrete locations. In the first step, the measured strain is fitted using a piecewise least-squares curve-fitting method together with the cubic spline technique. These fitted strains are integrated twice to obtain deflection data along the fibers. In the second step, the computed deflections along the fibers are combined with a finite element model of the structure in order to interpolate and extrapolate the deflection and slope of the entire structure through the use of the System Equivalent Reduction and Expansion Process. The theory is first validated on a computational model, a cantilevered rectangular plate wing. The theory is then applied to test data from a cantilevered swept-plate wing model. Computed results are compared with finite element results, results using another strain-based method, and photogrammetry data. For the computational model under an aeroelastic load, maximum deflection errors in the fore-and-aft, lateral, and vertical directions are -3.2 percent, 0.28 percent, and 0.09 percent, respectively; and maximum slope errors in the roll and pitch directions are 0.28 percent and -3.2 percent, respectively. For the experimental model, deflection results at the tip are shown to be accurate to within 3.8 percent of the photogrammetry data and are accurate to within 2.2 percent in most cases. In general, excellent matching between target and computed values is accomplished in this study. Future refinement of this theory will allow it to monitor the deflection and health of an entire aircraft in real time, allowing for aerodynamic load computation, active flexible motion control, and active induced-drag reduction.
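A hedged sketch of the first step of the theory, for a single cantilevered fiber: the measured strain is smoothed with a cubic spline, converted to curvature using an assumed constant distance from the neutral axis to the surface fiber, and integrated twice with zero slope and deflection at the root. The second (SEREP expansion) step and the piecewise least-squares fitting are omitted, and all values are hypothetical.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def deflection_from_strain(x, strain, half_depth):
    """Estimate slope and deflection along a cantilevered fiber from surface
    strain measurements at discrete stations: smooth the strain with a cubic
    spline, convert to curvature (strain / distance from neutral axis), and
    integrate twice with zero slope and deflection at the root."""
    curvature = CubicSpline(x, np.asarray(strain) / half_depth)
    slope = curvature.antiderivative()        # first integration  -> slope
    deflection = slope.antiderivative()       # second integration -> deflection
    return slope(x), deflection(x)

# Hypothetical 1 m cantilever with linearly decreasing strain (root to tip)
x = np.linspace(0.0, 1.0, 11)                 # sensor stations, meters
strain = 800e-6 * (1.0 - x)                   # strain profile
slope, defl = deflection_from_strain(x, strain, half_depth=0.005)
print(round(defl[-1] * 1000, 2), "mm tip deflection")
```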
Cooper, Lisa A; Ghods Dinoso, Bri K; Ford, Daniel E; Roter, Debra L; Primm, Annelle B; Larson, Susan M; Gill, James M; Noronha, Gary J; Shaya, Elias K; Wang, Nae-Yuh
2013-01-01
Objective To compare the effectiveness of standard and patient-centered, culturally tailored collaborative care (CC) interventions for African American patients with major depressive disorder (MDD) over 12 months of follow-up. Data Sources/Study Setting Twenty-seven primary care clinicians and 132 African American patients with MDD in urban community-based practices in Maryland and Delaware. Study Design Cluster randomized trial with patient-level, intent-to-treat analyses. Data Collection/Extraction Methods Patients completed screener and baseline, 6-, 12-, and 18-month interviews to assess depression severity, mental health functioning, health service utilization, and patient ratings of care. Principal Findings Patients in both interventions showed statistically significant improvements over 12 months. Compared with standard CC patients, patient-centered CC patients had similar reductions in depression symptom levels (−2.41 points; 95 percent confidence interval (CI), −7.7, 2.9), improvement in mental health functioning scores (+3.0 points; 95 percent CI, −2.2, 8.3), and odds of rating their clinician as participatory (OR, 1.48; 95 percent CI, 0.53, 4.17). Treatment rates increased among standard (OR = 1.8, 95 percent CI 1.0, 3.2), but not patient-centered (OR = 1.0, 95 percent CI 0.6, 1.8) CC patients. However, patient-centered CC patients rated their care manager as more helpful at identifying their concerns (OR, 3.00; 95 percent CI, 1.23, 7.30) and helping them adhere to treatment (OR, 2.60; 95 percent CI, 1.11, 6.08). Conclusions Patient-centered and standard CC approaches to depression care showed similar improvements in clinical outcomes for African Americans with depression; standard CC resulted in higher rates of treatment, and patient-centered CC resulted in better ratings of care. PMID:22716199
Computation of Standard Errors
Dowd, Bryan E; Greene, William H; Norton, Edward C
2014-01-01
Objectives We discuss the problem of computing the standard errors of functions involving estimated parameters and provide the relevant computer code for three different computational approaches using two popular computer packages. Study Design We show how to compute the standard errors of several functions of interest: the predicted value of the dependent variable for a particular subject, and the effect of a change in an explanatory variable on the predicted value of the dependent variable for an individual subject and average effect for a sample of subjects. Empirical Application Using a publicly available dataset, we explain three different methods of computing standard errors: the delta method, Krinsky–Robb, and bootstrapping. We provide computer code for Stata 12 and LIMDEP 10/NLOGIT 5. Conclusions In most applications, choice of the computational method for standard errors of functions of estimated parameters is a matter of convenience. However, when computing standard errors of the sample average of functions that involve both estimated parameters and nonstochastic explanatory variables, it is important to consider the sources of variation in the function's values. PMID:24800304
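The record above names three computational approaches; the following minimal Python sketch shows two of them, the delta method and Krinsky-Robb simulation, for the standard error of a simple nonlinear function of two estimated parameters. The coefficients, covariance matrix, and function are hypothetical and are not taken from the article or its Stata/LIMDEP code.

```python
import numpy as np

# Hypothetical estimates: a two-parameter model with estimated coefficients
# and their covariance matrix (e.g., from maximum likelihood output).
beta = np.array([0.50, 0.80])
cov  = np.array([[0.040, -0.010],
                 [-0.010, 0.025]])
x0 = 1.5                                   # evaluate the function at x = 1.5

g = lambda b: np.exp(b[0] + b[1] * x0)     # function of the parameters

# Delta method: se = sqrt(grad' * Cov * grad), gradient taken analytically
grad = g(beta) * np.array([1.0, x0])
se_delta = np.sqrt(grad @ cov @ grad)

# Krinsky-Robb: simulate parameter draws from N(beta, Cov) and take the
# standard deviation of the function over the draws
rng = np.random.default_rng(1)
draws = rng.multivariate_normal(beta, cov, size=20_000)
se_kr = np.exp(draws[:, 0] + draws[:, 1] * x0).std(ddof=1)

print(round(se_delta, 3), round(se_kr, 3))
```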
Federal Register 2010, 2011, 2012, 2013, 2014
2012-03-14
... error on page 12215 in row 16 of the third column. The 2012 A season directed fishing allowance for the... [table of 2012 and 2013 A season and B season allocations by area and sector]... allocated as a DFA as follows: inshore sector, 50 percent; catcher/processor sector (C/P), 40 percent; and...
Technology, design, simulation, and evaluation for SEP-hardened circuits
NASA Technical Reports Server (NTRS)
Adams, J. R.; Allred, D.; Barry, M.; Rudeck, P.; Woodruff, R.; Hoekstra, J.; Gardner, H.
1991-01-01
This paper describes the technology, design, simulation, and evaluation for improvement of the Single Event Phenomena (SEP) hardness of gate-array and SRAM cells. Through the use of design and processing techniques, it is possible to achieve an SEP error rate of less than 1.0 x 10^-10 errors/bit-day for a 90 percent worst-case geosynchronous orbit environment.
Estimating actual evapotranspiration for forested sites: modifications to the Thornthwaite Model
Randall K. Kolka; Ann T. Wolf
1998-01-01
A previously coded version of the Thornthwaite water balance model was used to estimate annual actual evapotranspiration (AET) for 29 forested sites between 1900 and 1993 in the Upper Great Lakes area. Approximately 8 percent of the data sets calculated AET in error. Errors were detected in months when estimated AET was greater than potential evapotranspiration. Annual...
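A minimal sketch of the unadjusted monthly Thornthwaite potential evapotranspiration (PET) calculation that the water-balance model builds on, plus the kind of consistency check implied above (estimated AET should never exceed PET). The temperature series and placeholder AET are invented, and the day-length and month-length corrections of the full Thornthwaite procedure are omitted.

```python
def thornthwaite_pet(monthly_temps_c):
    """Unadjusted monthly potential evapotranspiration (mm) by the Thornthwaite
    method: PET = 16 * (10*T/I)**a, with the annual heat index I summed over
    months with T > 0.  Day-length and month-length corrections are omitted."""
    heat_index = sum((t / 5.0) ** 1.514 for t in monthly_temps_c if t > 0)
    a = (6.75e-7 * heat_index**3 - 7.71e-5 * heat_index**2
         + 1.792e-2 * heat_index + 0.49239)
    return [16.0 * (10.0 * t / heat_index) ** a if t > 0 else 0.0
            for t in monthly_temps_c]

# Hypothetical monthly mean temperatures (deg C) for an Upper Great Lakes site
temps = [-10, -8, -2, 5, 12, 17, 20, 19, 14, 8, 0, -7]
pet = thornthwaite_pet(temps)

# Consistency check of the kind used above to flag erroneous runs:
# actual ET from the water balance should never exceed PET.
aet = [0.9 * p for p in pet]                  # placeholder AET series
assert all(a <= p for a, p in zip(aet, pet)), "AET exceeds PET: flag as error"
print([round(p, 1) for p in pet])
```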
Science support for the Earth radiation budget experiment
NASA Technical Reports Server (NTRS)
Coakley, James A., Jr.
1994-01-01
The work undertaken as part of the Earth Radiation Budget Experiment (ERBE) included the following major components: The development and application of a new cloud retrieval scheme to assess errors in the radiative fluxes arising from errors in the ERBE identification of cloud conditions. The comparison of the anisotropy of reflected sunlight and emitted thermal radiation with the anisotropy predicted by the Angular Dependence Models (ADM's) used to obtain the radiative fluxes. Additional studies included the comparison of calculated longwave cloud-free radiances with those observed by the ERBE scanner and the use of ERBE scanner data to track the calibration of the shortwave channels of the Advanced Very High Resolution Radiometer (AVHRR). Major findings included: the misidentification of cloud conditions by the ERBE scene identification algorithm could cause 15 percent errors in the shortwave flux reflected by certain scene types. For regions containing mixtures of scene types, the errors were typically less than 5 percent, and the anisotropies of the shortwave and longwave radiances exhibited a spatial scale dependence which, because of the growth of the scanner field of view from nadir to limb, gave rise to a view zenith angle dependent bias in the radiative fluxes.
Wind-Tunnel Tests of Seven Static-Pressure Probes at Transonic Speeds
NASA Technical Reports Server (NTRS)
Capone, Francis J.
1961-01-01
Wind-tunnel tests have been conducted to determine the errors of seven static-pressure probes mounted very close to the nose of a body of revolution simulating a missile forebody. The tests were conducted at Mach numbers from 0.80 to 1.08 and at angles of attack from -1.7 deg to 8.4 deg. The test Reynolds number per foot varied from 3.35 x 10^6 to 4.05 x 10^6. For three 4-vane, gimbaled probes, the static-pressure errors remained constant throughout the test angle-of-attack range for all Mach numbers except 1.02. For two single-vane, self-rotating probes having two orifices at +/-37.5 deg from the plane of symmetry on the lower surface of the probe body, the static-pressure error varied as much as 1.5 percent of free-stream static pressure through the test angle-of-attack range for all Mach numbers. For two fixed, cone-cylinder probes of short length and large diameter, the static-pressure error varied over the test angle-of-attack range at constant Mach numbers by as much as 8 to 10 percent of free-stream static pressure.
The Infinitesimal Jackknife with Exploratory Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.; Jennrich, Robert I.
2012-01-01
The infinitesimal jackknife, a nonparametric method for estimating standard errors, has been used to obtain standard error estimates in covariance structure analysis. In this article, we adapt it for obtaining standard errors for rotated factor loadings and factor correlations in exploratory factor analysis with sample correlation matrices. Both…
ERIC Educational Resources Information Center
Woodruff, David; Traynor, Anne; Cui, Zhongmin; Fang, Yu
2013-01-01
Professional standards for educational testing recommend that both the overall standard error of measurement and the conditional standard error of measurement (CSEM) be computed on the score scale used to report scores to examinees. Several methods have been developed to compute scale score CSEMs. This paper compares three methods, based on…
NASA Technical Reports Server (NTRS)
Gracey, William; Jewel, Joseph W., Jr.; Carpenter, Gene T.
1960-01-01
The overall errors of the service altimeter installations of a variety of civil transport, military, and general-aviation airplanes have been experimentally determined during normal landing-approach and take-off operations. The average height above the runway at which the data were obtained was about 280 feet for the landings and about 440 feet for the take-offs. An analysis of the data obtained from 196 airplanes during 415 landing approaches and from 70 airplanes during 152 take-offs showed that: 1. The overall error of the altimeter installations in the landing- approach condition had a probable value (50 percent probability) of +/- 36 feet and a maximum probable value (99.7 percent probability) of +/- 159 feet with a bias of +10 feet. 2. The overall error in the take-off condition had a probable value of +/- 47 feet and a maximum probable value of +/- 207 feet with a bias of -33 feet. 3. The overall errors of the military airplanes were generally larger than those of the civil transports in both the landing-approach and take-off conditions. In the landing-approach condition the probable error and the maximum probable error of the military airplanes were +/- 43 and +/- 189 feet, respectively, with a bias of +15 feet, whereas those for the civil transports were +/- 22 and +/- 96 feet, respectively, with a bias of +1 foot. 4. The bias values of the error distributions (+10 feet for the landings and -33 feet for the take-offs) appear to represent a measure of the hysteresis characteristics (after effect and recovery) and friction of the instrument and the pressure lag of the tubing-instrument system.
Cost implications of organizing nursing home workforce in teams.
Mukamel, Dana B; Cai, Shubing; Temkin-Greener, Helena
2009-08-01
To estimate the costs associated with formal and self-managed daily practice teams in nursing homes. Medicaid cost reports for 135 nursing homes in New York State in 2006 and survey data for 6,137 direct care workers. A retrospective statistical analysis: We estimated hybrid cost functions that include team penetration variables. Inference was based on robust standard errors. Formal and self-managed team penetration (i.e., percent of staff working in a team) were calculated from survey responses. Annual variable costs, beds, case mix-adjusted days, admissions, home care visits, outpatient clinic visits, day care days, wages, and ownership were calculated from the cost reports. Formal team penetration was significantly associated with costs, while self-managed teams penetration was not. Costs declined with increasing penetration up to 13 percent of formal teams, and increased above this level. Formal teams in nursing homes in the upward sloping range of the curve were more diverse, with a larger number of participating disciplines and more likely to include physicians. Organization of workforce in formal teams may offer nursing homes a cost-saving strategy. More research is required to understand the relationship between team composition and costs.
A Final Approach Trajectory Model for Current Operations
NASA Technical Reports Server (NTRS)
Gong, Chester; Sadovsky, Alexander
2010-01-01
Predicting accurate trajectories with limited intent information is a challenge faced by air traffic management decision support tools in operation today. One such tool is the FAA's Terminal Proximity Alert system which is intended to assist controllers in maintaining safe separation of arrival aircraft during final approach. In an effort to improve the performance of such tools, two final approach trajectory models are proposed; one based on polynomial interpolation, the other on the Fourier transform. These models were tested against actual traffic data and used to study effects of the key final approach trajectory modeling parameters of wind, aircraft type, and weight class, on trajectory prediction accuracy. Using only the limited intent data available to today's ATM system, both the polynomial interpolation and Fourier transform models showed improved trajectory prediction accuracy over a baseline dead reckoning model. Analysis of actual arrival traffic showed that this improved trajectory prediction accuracy leads to improved inter-arrival separation prediction accuracy for longer look ahead times. The difference in mean inter-arrival separation prediction error between the Fourier transform and dead reckoning models was 0.2 nmi for a look ahead time of 120 sec, a 33 percent improvement, with a corresponding 32 percent improvement in standard deviation.
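A hedged sketch contrasting the dead-reckoning baseline with a simple fit-and-extrapolate polynomial predictor for a one-dimensional along-track position history; the track values, sampling interval, and polynomial degree are assumptions, and this is not the paper's interpolation or Fourier formulation.

```python
import numpy as np

def predict_position(times, positions, lookahead_s, degree=2):
    """Compare two simple final-approach predictors for a 1-D along-track
    position history: (1) dead reckoning from the last observed ground speed,
    and (2) a low-order polynomial fitted to the recent track and evaluated
    ahead in time.  Returns (dead_reckoning, polynomial) predictions."""
    t = np.asarray(times, dtype=float)
    x = np.asarray(positions, dtype=float)
    speed = (x[-1] - x[-2]) / (t[-1] - t[-2])          # last observed speed
    dead_reckoning = x[-1] + speed * lookahead_s
    poly = np.polynomial.Polynomial.fit(t, x, deg=degree)
    return dead_reckoning, float(poly(t[-1] + lookahead_s))

# Hypothetical decelerating approach: positions (nmi to go) sampled every 10 s
t = np.arange(0, 60, 10)
x = 8.0 - 0.040 * t + 0.00010 * t**2                   # slowing toward threshold
print(predict_position(t, x, lookahead_s=120))
```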
NASA Technical Reports Server (NTRS)
Dardner, B. R.; Blad, B. L.; Thompson, D. R.; Henderson, K. E.
1985-01-01
Reflectance and agronomic Thematic Mapper (TM) data were analyzed to determine possible data transformations for evaluating several plant parameters of corn. Three transformation forms were used: the ratio of two TM bands, logarithms of two-band ratios, and normalized differences of two bands. Normalized differences and logarithms of two-band ratios responded similarly in the equations for estimating the plant growth parameters evaluated in this study. Two-term equations were required to obtain the maximum predictability of percent ground cover, canopy moisture content, and total wet phytomass. Standard error of estimate values were 15-26 percent lower for two-term estimates of these parameters than for one-term estimates. The terms log(TM4/TM2) and (TM4/TM5) produced the maximum predictability for leaf area and dry green leaf weight, respectively. The middle-infrared bands TM5 and TM7 are essential for maximizing predictability for all measured plant parameters except leaf area index. The estimating models were evaluated over bare soil to discriminate between equations which are statistically similar. Qualitative interpretations of the resulting prediction equations are consistent with general agronomic and remote sensing theory.
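A minimal sketch of the three two-band transformation forms named above, applied to hypothetical TM4 (near-infrared) and TM2 (green) reflectances; the natural-log base used for the log ratio is an assumption.

```python
import numpy as np

def band_transforms(b_a, b_b):
    """Compute the three two-band transformations evaluated in the study:
    simple ratio, log of the ratio, and normalized difference."""
    b_a, b_b = np.asarray(b_a, float), np.asarray(b_b, float)
    ratio = b_a / b_b
    return {"ratio": ratio,
            "log_ratio": np.log(ratio),               # natural log assumed here
            "normalized_difference": (b_a - b_b) / (b_a + b_b)}

# Hypothetical TM4 and TM2 reflectances for three pixels
tm4 = [0.42, 0.35, 0.28]
tm2 = [0.08, 0.10, 0.12]
print(band_transforms(tm4, tm2))
```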
Reliability and concurrent validity of Futrex and bioelectrical impedance.
Vehrs, P; Morrow, J R; Butte, N
1998-11-01
Thirty Caucasian males (aged 19-32 yr) participated in this study designed to investigate the reliability of multiple bioelectrical impedance analysis (BIA) and near-infrared spectroscopy (Futrex, FTX) measurements and the validity of BIA and FTX estimations of hydrostatically (UW) determined percent body fat (%BF). Two BIA and two FTX instruments were used to make 6 measurements each of resistance (R) and optical density (OD), respectively, over a 30-min period on two consecutive days. Repeated-measures ANOVA indicated that FTX and BIA, using the manufacturers' equations, significantly (p<0.01) underpredicted UW by 2.4 and 3.8 %BF, respectively. Standard error of estimate (SEE) and total error (TE) terms provided by regression analysis for FTX (4.6 and 5.31 %BF, respectively) and BIA (5.65 and 6.95 %BF, respectively) were high. Dependent t-tests revealed no significant differences in either FTX or BIA predictions of %BF using two machines. Intraclass reliabilities for BIA and FTX estimates of UW %BF across trials, days, and machines all exceeded 0.97. A significant random error term associated with FTX and a significant subject-by-day interaction associated with BIA were revealed using the generalizability model. Although FTX and BIA estimates of UW %BF were reliable, due to the significant underestimation of UW %BF and high SEE and TE, neither FTX nor BIA was considered a valid estimate of hydrostatically determined %BF.
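A minimal sketch of the two agreement statistics reported above: the standard error of estimate (SEE) from regressing the criterion on the predicted values, and the total error (TE) about the line of identity. The %BF values are invented, and the exact formulas follow common body-composition validation practice rather than anything stated in the abstract.

```python
import numpy as np

def agreement_stats(predicted, criterion):
    """Standard error of estimate (SEE) and total error (TE) for a predictor
    of percent body fat against a criterion measure (e.g., hydrostatic
    weighing).  SEE is the residual error from regressing the criterion on
    the predicted values; TE measures deviation about the line of identity."""
    x = np.asarray(predicted, float)
    y = np.asarray(criterion, float)
    slope, intercept = np.polyfit(x, y, 1)
    resid = y - (slope * x + intercept)
    see = np.sqrt(np.sum(resid**2) / (len(x) - 2))
    te = np.sqrt(np.mean((x - y) ** 2))
    return see, te

# Hypothetical %BF values: device estimate vs. hydrostatic criterion
bia = [12.1, 18.4, 22.0, 15.3, 26.7, 10.5, 20.2, 17.8]
uw  = [15.0, 21.2, 25.9, 18.1, 30.3, 13.8, 23.5, 20.0]
print(tuple(round(v, 2) for v in agreement_stats(bia, uw)))
```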
Artificial intelligence techniques for automatic screening of amblyogenic factors.
Van Eenwyk, Jonathan; Agah, Arvin; Giangiacomo, Joseph; Cibis, Gerhard
2008-01-01
To develop a low-cost automated video system to effectively screen children aged 6 months to 6 years for amblyogenic factors. In 1994 one of the authors (G.C.) described video vision development assessment, a digitizable analog video-based system combining Brückner pupil red reflex imaging and eccentric photorefraction to screen young children for amblyogenic factors. The images were analyzed manually with this system. We automated the capture of digital video frames and pupil images and applied computer vision and artificial intelligence to analyze and interpret results. The artificial intelligence systems were evaluated by a tenfold testing method. The best system was the decision tree learning approach, which had an accuracy of 77%, compared to the "gold standard" specialist examination with a "refer/do not refer" decision. Criteria for referral were strabismus, including microtropia, and refractive errors and anisometropia considered to be amblyogenic. Eighty-two percent of strabismic individuals were correctly identified. High refractive errors were also correctly identified and referred 90% of the time, as was significant anisometropia. The program was less correct in identifying more moderate refractive errors, below +5 and less than -7. Although we are pursuing a variety of avenues to improve the accuracy of the automated analysis, the program in its present form provides acceptable cost benefits for detecting amblyogenic factors in children aged 6 months to 6 years.
21 CFR 352.70 - Standard sunscreen.
Code of Federal Regulations, 2013 CFR
2013-04-01
... test product to be considered valid, the SPF of the standard sunscreen must fall within the standard... Percent by weight Preparation A Lanolin 5.00 Homosalate 8.00 White petrolatum 2.50 Stearic acid 4.00... volume with the assay solvent and mix well to make a 1-percent solution. (3) Preparation of the test...
21 CFR 352.70 - Standard sunscreen.
Code of Federal Regulations, 2014 CFR
2014-04-01
... test product to be considered valid, the SPF of the standard sunscreen must fall within the standard... Percent by weight Preparation A Lanolin 5.00 Homosalate 8.00 White petrolatum 2.50 Stearic acid 4.00... volume with the assay solvent and mix well to make a 1-percent solution. (3) Preparation of the test...
21 CFR 352.70 - Standard sunscreen.
Code of Federal Regulations, 2012 CFR
2012-04-01
... test product to be considered valid, the SPF of the standard sunscreen must fall within the standard... Percent by weight Preparation A Lanolin 5.00 Homosalate 8.00 White petrolatum 2.50 Stearic acid 4.00... volume with the assay solvent and mix well to make a 1-percent solution. (3) Preparation of the test...
40 CFR 60.42c - Standard for sulfur dioxide (SO2).
Code of Federal Regulations, 2014 CFR
2014-07-01
... 40 Protection of Environment 7 2014-07-01 2014-07-01 false Standard for sulfur dioxide (SO2). 60...-Commercial-Institutional Steam Generating Units § 60.42c Standard for sulfur dioxide (SO2). (a) Except as... percent sulfur. The percent reduction requirements are not applicable to affected facilities under this...
40 CFR 60.42c - Standard for sulfur dioxide (SO2).
Code of Federal Regulations, 2012 CFR
2012-07-01
... 40 Protection of Environment 7 2012-07-01 2012-07-01 false Standard for sulfur dioxide (SO2). 60...-Commercial-Institutional Steam Generating Units § 60.42c Standard for sulfur dioxide (SO2). (a) Except as... percent sulfur. The percent reduction requirements are not applicable to affected facilities under this...
40 CFR 60.42c - Standard for sulfur dioxide (SO2).
Code of Federal Regulations, 2013 CFR
2013-07-01
... 40 Protection of Environment 7 2013-07-01 2013-07-01 false Standard for sulfur dioxide (SO2). 60...-Commercial-Institutional Steam Generating Units § 60.42c Standard for sulfur dioxide (SO2). (a) Except as... percent sulfur. The percent reduction requirements are not applicable to affected facilities under this...
Code of Federal Regulations, 2010 CFR
2010-07-01
... standards for pultrusion operations subject to the 60 weight percent organic HAP emissions reductions... National Emissions Standards for Hazardous Air Pollutants: Reinforced Plastic Composites Production Options... operations subject to the 60 weight percent organic HAP emissions reductions requirement? You must use one or...
Statistical models for estimating daily streamflow in Michigan
Holtschlag, D.J.; Salehi, Habib
1992-01-01
Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at a maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
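The OLSR component of the comparison above can be sketched in a few lines: regress log-transformed flows at the station being estimated on concurrent log flows at a nearby index station and report the standard deviation of the estimation errors. The flows below are synthetic stand-ins, and the ARIMA/TFN forecast and composite-weighting steps are not implemented here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical daily flows (cfs) at an index station and at a target station
# whose record is to be estimated; a log-linear relation plus noise is assumed.
q_index = rng.lognormal(mean=4.0, sigma=0.8, size=365)
q_target = np.exp(0.9 * np.log(q_index) + 0.5 + rng.normal(0, 0.15, size=365))

# Ordinary least squares on log-transformed flows (the OLSR approach).
b, a = np.polyfit(np.log(q_index), np.log(q_target), 1)
log_est = a + b * np.log(q_index)

# Precision metric used in the study: standard deviation of the estimation
# errors of the log-transformed flows.
errors = np.log(q_target) - log_est
print(f"slope = {b:.3f}, intercept = {a:.3f}, sd of log errors = {errors.std(ddof=2):.3f}")
```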
Nonparametric Estimation of Standard Errors in Covariance Analysis Using the Infinitesimal Jackknife
ERIC Educational Resources Information Center
Jennrich, Robert I.
2008-01-01
The infinitesimal jackknife provides a simple general method for estimating standard errors in covariance structure analysis. Beyond its simplicity and generality what makes the infinitesimal jackknife method attractive is that essentially no assumptions are required to produce consistent standard error estimates, not even the requirement that the…
Factor Rotation and Standard Errors in Exploratory Factor Analysis
ERIC Educational Resources Information Center
Zhang, Guangjian; Preacher, Kristopher J.
2015-01-01
In this article, we report a surprising phenomenon: Oblique CF-varimax and oblique CF-quartimax rotation produced similar point estimates for rotated factor loadings and factor correlations but different standard error estimates in an empirical example. Influences of factor rotation on asymptotic standard errors are investigated using a numerical…
A Regional CO2 Observing System Simulation Experiment for the ASCENDS Satellite Mission
NASA Technical Reports Server (NTRS)
Wang, J. S.; Kawa, S. R.; Eluszkiewicz, J.; Baker, D. F.; Mountain, M.; Henderson, J.; Nehrkorn, T.; Zaccheo, T. S.
2014-01-01
Top-down estimates of the spatiotemporal variations in emissions and uptake of CO2 will benefit from the increasing measurement density brought by recent and future additions to the suite of in situ and remote CO2 measurement platforms. In particular, the planned NASA Active Sensing of CO2 Emissions over Nights, Days, and Seasons (ASCENDS) satellite mission will provide greater coverage in cloudy regions, at high latitudes, and at night than passive satellite systems, as well as high precision and accuracy. In a novel approach to quantifying the ability of satellite column measurements to constrain CO2 fluxes, we use a portable library of footprints (surface influence functions) generated by the WRF-STILT Lagrangian transport model in a regional Bayesian synthesis inversion. The regional Lagrangian framework is well suited to make use of ASCENDS observations to constrain fluxes at high resolution, in this case at 1 degree latitude x 1 degree longitude and weekly for North America. We consider random measurement errors only, modeled as a function of mission and instrument design specifications along with realistic atmospheric and surface conditions. We find that the ASCENDS observations could potentially reduce flux uncertainties substantially at biome and finer scales. At the 1 degree x 1 degree, weekly scale, the largest uncertainty reductions, on the order of 50 percent, occur where and when there is good coverage by observations with low measurement errors and the a priori uncertainties are large. Uncertainty reductions are smaller for a 1.57 micron candidate wavelength than for a 2.05 micron wavelength, and are smaller for the higher of the two measurement error levels that we consider (1.0 ppm vs. 0.5 ppm clear-sky error at Railroad Valley, Nevada). Uncertainty reductions at the annual, biome scale range from 40 percent to 75 percent across our four instrument design cases, and from 65 percent to 85 percent for the continent as a whole. Our uncertainty reductions at various scales are substantially smaller than those from a global ASCENDS inversion on a coarser grid, demonstrating how quantitative results can depend on inversion methodology. The a posteriori flux uncertainties we obtain, ranging from 0.01 to 0.06 Pg C yr-1 across the biomes, would meet requirements for improved understanding of long-term carbon sinks suggested by a previous study.
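The uncertainty-reduction metric described above follows directly from the standard Bayesian synthesis inversion: the posterior flux error covariance is A = (H^T R^-1 H + B^-1)^-1, and the reduction for each flux element is one minus the ratio of posterior to prior standard deviation. The sketch below uses small random matrices as stand-ins for the footprints, prior covariance, and measurement errors; the 0.5 ppm value echoes the clear-sky error level mentioned above but everything else is hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)

n_flux, n_obs = 20, 200                    # hypothetical problem sizes
H = rng.normal(0, 1.0, (n_obs, n_flux))    # footprints (Jacobian) from a transport model
B = np.diag(np.full(n_flux, 1.0))          # prior flux error covariance
R = np.diag(np.full(n_obs, 0.5 ** 2))      # measurement error covariance (0.5 ppm)

# Posterior covariance of the Bayesian synthesis inversion.
A = np.linalg.inv(H.T @ np.linalg.inv(R) @ H + np.linalg.inv(B))

# Uncertainty reduction per flux element: 1 - sigma_posterior / sigma_prior.
ur = 1.0 - np.sqrt(np.diag(A)) / np.sqrt(np.diag(B))
print(f"mean uncertainty reduction = {ur.mean():.2f}")
```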
Damrau, D.L.
1993-01-01
Increased awareness of the quality of water in the United States has led to the development of a method for determining low levels (0.2-5.0 microg/L) of silver in water samples. Use of graphite furnace atomic absorption spectrophotometry provides a sensitive, precise, and accurate method for determining low-level silver in samples of low ionic-strength water, precipitation water, and natural water. The minimum detection limit determined for low-level silver is 0.2 microg/L. Precision data were collected on natural-water samples and SRWS (Standard Reference Water Samples). The overall percent relative standard deviation for natural-water samples with silver concentrations more than 0.2 microg/L was less than 40 percent throughout the analytical range. For the SRWS with concentrations more than 0.2 microg/L, the overall percent relative standard deviation was less than 25 percent throughout the analytical range. The accuracy of the results was determined by spiking 6 natural-water samples with different known concentrations of the silver standard. The recoveries ranged from 61 to 119 percent at the 0.5-microg/L spike level. At the 1.25-microg/L spike level, the recoveries ranged from 92 to 106 percent. For the high spike level at 3.0 microg/L, the recoveries ranged from 65 to 113 percent. The measured concentrations of silver obtained from known samples were within the Branch of Quality Assurance accepted limits of 1 1/2 standard deviations on the basis of the SRWS program for Inter-Laboratory studies.
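The precision and accuracy figures quoted above (percent relative standard deviation and spike recovery) are simple to reproduce from replicate and spiked determinations. The concentrations in this sketch are hypothetical; only the formulas are standard.

```python
import numpy as np

# Hypothetical replicate determinations of silver (micrograms per liter).
replicates = np.array([0.52, 0.47, 0.55, 0.49, 0.58, 0.44, 0.51])
rsd = 100 * replicates.std(ddof=1) / replicates.mean()
print(f"percent relative standard deviation = {rsd:.1f}%")

# Spike recovery: (measured spiked - measured unspiked) / amount added.
unspiked, spiked, added = 0.30, 1.48, 1.25   # hypothetical values, ug/L
recovery = 100 * (spiked - unspiked) / added
print(f"spike recovery = {recovery:.0f}%")
```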
Realtime mitigation of GPS SA errors using Loran-C
NASA Technical Reports Server (NTRS)
Braasch, Soo Y.
1994-01-01
The hybrid use of Loran-C with the Global Positioning System (GPS) was shown capable of providing a sole means of en route air radionavigation. By allowing pilots to fly directly to their destinations, use of this system is resulting in significant time savings and therefore fuel savings as well. However, a major error source limiting the accuracy of GPS is the intentional degradation of the GPS signal known as Selective Availability (SA). SA-induced position errors are highly correlated and far exceed all other error sources (horizontal position error: 100 meters, 95 percent). Realtime mitigation of SA errors from the position solution is highly desirable. How that can be achieved is discussed. The stability of Loran-C signals is exploited to reduce SA errors. The theory behind this technique is discussed, and results using bench and flight data are given.
Test and evaluation of the 2.4-micron photorefractor ocular screening system
NASA Technical Reports Server (NTRS)
Richardson, J. R.
1985-01-01
An improved 2.4-m photorefractor ocular screening system was tested and evaluated. The photorefractor system works on the principle of obtaining a colored photograph of both human eyes; by analysis of the retinal reflex images, certain ocular defects such as refractive error, strabismus, and lens obstructions can be detected. The 2.4-m photorefractor system uses a 35-mm camera with a telephoto lens and an electronic flash attachment. Retinal reflex images obtained from the new 2.4-m system are significantly improved over earlier systems in image quality. Other features were also improved, notably portability and reduction in mass. A total of 706 school-age children were photorefracted: 211 learning-disabled and 495 middle school students. In all, 156 students (22 percent) had abnormal retinal reflexes, and 133 of those (85 percent) had indications of refractive error. Ophthalmological examination was performed on 60 of these students, and refractive error was verified in 57 (95 percent) of those examined. The new 2.4-m system has a NASA patent pending and is authorized by the FDA. It provides a reliable means of rapidly screening the eyes of children and young adults for vision problems. It is especially useful for infants and other non-communicative children who cannot be screened by the more conventional methods such as the familiar E chart.
Mandell, Jacob C; Rhodes, Jeffrey A; Shah, Nehal; Gaviola, Glenn C; Gomoll, Andreas H; Smith, Stacy E
2017-11-01
Accurate assessment of knee articular cartilage is clinically important. Although 3.0 Tesla (T) MRI is reported to offer improved diagnostic performance, literature regarding the clinical impact of MRI field strength is lacking. The purpose of this study is to compare the diagnostic performance of clinical MRI reports for assessment of cartilage at 1.5 and 3.0 T in comparison to arthroscopy. This IRB-approved retrospective study consisted of 300 consecutive knees in 297 patients who had routine clinical MRI and arthroscopy. Descriptions of cartilage from MRI reports of 165 knees at 1.5 T and 135 at 3.0 T were compared with arthroscopy. The sensitivity, specificity, percent of articular surfaces graded concordantly, and percent of articular surfaces graded within one grade of the arthroscopic grading were calculated for each articular surface at 1.5 and 3.0 T. Agreement between MRI and arthroscopy was calculated with the weighted-kappa statistic. Significance testing was performed utilizing the z-test after bootstrapping to obtain the standard error. The sensitivity, specificity, percent of articular surfaces graded concordantly, and percent of articular surfaces graded within one grade were 61.4%, 82.7%, 62.2%, and 77.5% at 1.5 T and 61.8%, 80.6%, 59.5%, and 75.6% at 3.0 T, respectively. The weighted kappa statistic was 0.56 at 1.5 T and 0.55 at 3.0 T. There was no statistically significant difference in any of these parameters between 1.5 and 3.0 T. Factors potentially contributing to the lack of diagnostic advantage of 3.0 T MRI are discussed.
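The agreement and significance-testing workflow described above can be sketched as follows: compute a weighted kappa between the imaging grades and the arthroscopic grades for each field strength, bootstrap the standard error of the difference, and form a z statistic. The grade data are simulated, the group sizes are arbitrary, and the choice of linear weights is an assumption (the study does not state which weighting was used).

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(2)

# Hypothetical cartilage grades (0-4) from MRI reports and arthroscopy for
# articular surfaces imaged at 1.5 T and 3.0 T.
def make_grades(n, agree=0.6):
    truth = rng.integers(0, 5, n)
    noise = rng.integers(-1, 2, n)
    mri = np.where(rng.random(n) < agree, truth, np.clip(truth + noise, 0, 4))
    return mri, truth

mri15, scope15 = make_grades(600)
mri30, scope30 = make_grades(500)

k15 = cohen_kappa_score(mri15, scope15, weights="linear")
k30 = cohen_kappa_score(mri30, scope30, weights="linear")

# Bootstrap the standard error of the difference in weighted kappa.
diffs = []
for _ in range(1000):
    i = rng.integers(0, len(mri15), len(mri15))
    j = rng.integers(0, len(mri30), len(mri30))
    diffs.append(cohen_kappa_score(mri15[i], scope15[i], weights="linear")
                 - cohen_kappa_score(mri30[j], scope30[j], weights="linear"))
se = np.std(diffs, ddof=1)
z = (k15 - k30) / se
print(f"kappa 1.5T = {k15:.2f}, 3.0T = {k30:.2f}, z = {z:.2f}")
```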
NASA Astrophysics Data System (ADS)
Siirila, E. R.; Fernandez-Garcia, D.; Sanchez-Vila, X.
2014-12-01
Particle tracking (PT) techniques, often considered favorable over Eulerian techniques due to artificial smoothing in breakthrough curves (BTCs), are evaluated in a risk-driven framework. Recent work has shown that given relatively few particles (np), PT methods can yield well-constructed BTCs with kernel density estimators (KDEs). This work compares KDE and non-KDE BTCs simulated as a function of np (10^2-10^8) and averaged as a function of the exposure duration, ED. Results show that regardless of BTC shape complexity, un-averaged PT BTCs show a large bias over several orders of magnitude in concentration (C) when compared to the KDE results, remarkably even when np is as low as 10^2. With the KDE, several orders of magnitude fewer np are required to obtain the same global error in BTC shape as the PT technique. PT and KDE BTCs are averaged as a function of the ED with standard and new methods incorporating the optimal h (ANA). The lowest error curve is obtained through the ANA method, especially for smaller EDs. Percent error of the peak of averaged BTCs, important in a risk framework, is approximately zero for all scenarios and all methods for np ≥ 10^5, but varies between the ANA and PT methods when np is lower. For fewer np, the ANA solution provides a lower error fit except when C oscillations are present during a short time frame. We show that obtaining a representative average exposure concentration is reliant on an accurate representation of the BTC, especially when data are scarce.
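The contrast above between a binned (non-KDE) breakthrough curve and a kernel-density-estimated one can be sketched with a small particle-tracking output. The arrival-time distribution, particle count, and exposure duration below are hypothetical, and the bandwidth is chosen by the default Scott rule rather than the optimal-h (ANA) method discussed in the abstract.

```python
import numpy as np
from scipy.stats import gaussian_kde

rng = np.random.default_rng(3)

# Hypothetical particle arrival times (days) at a control plane from a
# particle-tracking run; equal mass per particle is assumed.
n_particles = 10**3
arrivals = rng.lognormal(mean=3.0, sigma=0.5, size=n_particles)

t = np.linspace(0, 120, 600)

# Naive binned BTC (relative concentration per unit time).
hist, edges = np.histogram(arrivals, bins=60, range=(0, 120), density=True)

# Kernel-density-estimated BTC: smooth even for small particle numbers.
kde = gaussian_kde(arrivals)        # bandwidth chosen by Scott's rule here
btc_kde = kde(t)

# Average relative concentration over an exposure duration ED (days).
ED = 30.0
c_avg = np.trapz(btc_kde[t <= ED], t[t <= ED]) / ED
print(f"average relative concentration over first {ED:.0f} days: {c_avg:.4f}")
```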
Lincoln, Tricia A.; Horan-Ross, Debra A.; McHale, Michael R.; Lawrence, Gregory B.
2009-01-01
The laboratory for analysis of low-ionic-strength water at the U.S. Geological Survey (USGS) Water Science Center in Troy, N.Y., analyzes samples collected by USGS projects throughout the Northeast. The laboratory's quality-assurance program is based on internal and interlaboratory quality-assurance samples and quality-control procedures that were developed to ensure proper sample collection, processing, and analysis. The quality-assurance and quality-control data were stored in the laboratory's Lab Master data-management system, which provides efficient review, compilation, and plotting of data. This report presents and discusses results of quality-assurance and quality control samples analyzed from July 2003 through June 2005. Results for the quality-control samples for 20 analytical procedures were evaluated for bias and precision. Control charts indicate that data for five of the analytical procedures were occasionally biased for either high-concentration or low-concentration samples but were within control limits; these procedures were: acid-neutralizing capacity, total monomeric aluminum, pH, silicon, and sodium. Seven of the analytical procedures were biased throughout the analysis period for the high-concentration sample, but were within control limits; these procedures were: dissolved organic carbon, chloride, nitrate (ion chromatograph), nitrite, silicon, sodium, and sulfate. The calcium and magnesium procedures were biased throughout the analysis period for the low-concentration sample, but were within control limits. The total aluminum and specific conductance procedures were biased for the high-concentration and low-concentration samples, but were within control limits. Results from the filter-blank and analytical-blank analyses indicate that the procedures for 17 of 18 analytes were within control limits, although the concentrations for blanks were occasionally outside the control limits. The data-quality objective was not met for dissolved organic carbon. Sampling and analysis precision are evaluated herein in terms of the coefficient of variation obtained for triplicate samples in the procedures for 18 of the 22 analytes. At least 85 percent of the samples met data-quality objectives for all analytes except total monomeric aluminum (82 percent of samples met objectives), total aluminum (77 percent of samples met objectives), chloride (80 percent of samples met objectives), fluoride (76 percent of samples met objectives), and nitrate (ion chromatograph) (79 percent of samples met objectives). The ammonium and total dissolved nitrogen did not meet the data-quality objectives. Results of the USGS interlaboratory Standard Reference Sample (SRS) Project indicated good data quality over the time period, with ratings for each sample in the satisfactory, good, and excellent ranges or less than 10 percent error. The P-sample (low-ionic-strength constituents) analysis had one marginal and two unsatisfactory ratings for the chloride procedure. The T-sample (trace constituents)analysis had two unsatisfactory ratings and one high range percent error for the aluminum procedure. The N-sample (nutrient constituents) analysis had one marginal rating for the nitrate procedure. Results of Environment Canada's National Water Research Institute (NWRI) program indicated that at least 84 percent of the samples met data-quality objectives for 11 of the 14 analytes; the exceptions were ammonium, total aluminum, and acid-neutralizing capacity. 
The ammonium procedure did not meet data-quality objectives in all studies. Data-quality objectives were not met in 23 percent of samples analyzed for total aluminum and 45 percent of samples analyzed for acid-neutralizing capacity. Results from blind reference-sample analyses indicated that data-quality objectives were met by at least 86 percent of the samples analyzed for calcium, chloride, fluoride, magnesium, pH, potassium, sodium, and sulfate. Data-quality objectives were not met by samples analyzed for fluoride.
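The triplicate-precision check used above reduces to a coefficient of variation compared against a data-quality objective. The concentrations and the 10 percent threshold in this sketch are assumptions for illustration only.

```python
import numpy as np

# Hypothetical triplicate results (mg/L) for one analyte and a data-quality
# objective expressed as a maximum coefficient of variation.
triplicate = np.array([2.41, 2.52, 2.47])
cv = 100 * triplicate.std(ddof=1) / triplicate.mean()
objective_cv = 10.0   # assumed threshold, percent
print(f"CV = {cv:.1f}% -> {'meets' if cv <= objective_cv else 'fails'} objective")
```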
The effects of changing exercise levels on weight and age-related weight gain
DOE Office of Scientific and Technical Information (OSTI.GOV)
Williams, Paul T.; Wood, Peter D.
2004-06-01
To determine prospectively whether physical activity can prevent age-related weight gain and whether changing levels of activity affect body weight. DESIGN/SUBJECTS: The study consisted of 8,080 male and 4,871 female runners who completed two questionnaires an average (+/- standard deviation (s.d.)) of 3.20 +/- 2.30 and 2.59 +/- 2.17 years apart, respectively, as part of the National Runners' Health Study. RESULTS: Changes in running distance were inversely related to changes in men's and women's body mass indices (BMIs) (slope +/- standard error (s.e.): -0.015 +/- 0.001 and -0.009 +/- 0.001 kg/m(2) per Delta km/week, respectively), waist circumferences (-0.030 +/- 0.002 and -0.022 +/- 0.005 cm per Delta km/week, respectively), and percent changes in body weight (-0.062 +/- 0.003 and -0.041 +/- 0.003 percent per Delta km/week, respectively; all P<0.0001). The regression slopes were significantly steeper (more negative) in men than women for Delta BMI and Delta percent body weight (P<0.0001). A longer history of running diminished the impact of changing running distance on men's weights. When adjusted for Delta km/week, years of aging in men and years of aging in women were associated with increases of 0.066 +/- 0.005 and 0.056 +/- 0.006 kg/m(2) in BMI, respectively, increases of 0.294 +/- 0.019 and 0.279 +/- 0.028 percent in body weight, respectively, and increases of 0.203 +/- 0.016 and 0.271 +/- 0.033 cm in waist circumference, respectively (all P<0.0001). These regression slopes suggest that vigorous exercise may need to increase 4.4 km/week annually in men and 6.2 km/week annually in women to compensate for the expected gain in weight associated with aging (2.7 and 3.9 km/week annually when corrected for the attenuation due to measurement error). CONCLUSIONS: Age-related weight gain occurs even among the most active individuals when exercise is constant. Theoretically, vigorous exercise must increase significantly with age to compensate for the expected gain in weight associated with aging.
Clark, R R; Kuta, J M; Sullivan, J C
1993-04-01
The purpose of this study was to compare the prediction of percent body fat (%FAT) by dual energy x-ray absorptiometry (DXA), skinfolds (SF), and hydrostatic weighing (HW) in adult males. Subjects were 35 adult male Caucasians (mean +/- SD; age: 39.1 +/- 14.0 yr, height: 180.6 +/- 5.3 cm, weight: 81.0 +/- 11.1 kg). %FAT, determined by HW with residual volume determined via O2 dilution, served as the criterion. DXA %FAT was determined by the Norland XR-26 (XR-26) bone densitometer, and SF %FAT by the equations of Jackson and Pollock (JP) (1978) and Lohman (LOH) (1981). Criterion-referenced validation included analyzing mean (+/- SD) %FAT values using a one-way ANOVA for significance, comparison of mean differences (MD), correlations (r), standard errors of estimate (SEE), and total errors (TE). Significant differences were found between means of each method. The r (0.91) and SEE (3.0 %FAT) for DXA compare favorably with the established SF methods of JP and LOH for predicting %FAT; however, DXA demonstrated the largest MD (3.9 %FAT) and TE (5.2 %FAT). Regression analysis yields HW = 0.79 x DXA + 0.56. The results do not support earlier research that found no significant difference between HW and DXA %FAT in males. The study suggests the density of the fat-free body (DFFB) is not constant, and that the variation in bone mineral content affects the DFFB, which contributes to the differences between DXA and HW %FAT. We recommend further research to identify inconsistencies between manufacturers of DXA equipment in prediction of %FAT in males.
NASA Technical Reports Server (NTRS)
Ford, Holland C.; Ciardullo, Robin
1988-01-01
Nova shells are characteristically prolate with equatorial bands and polar caps. Failure to account for the geometry can lead to large errors in expansion parallaxes for individual novae. When simple prescriptions are used for deriving expansion parallaxes from an ensemble of randomly oriented prolate spheroids, the average distance will be too small by 10 to 15 percent. The absolute magnitudes of the novae will be underestimated and the resulting distance scale will be too small by the same factors. If observations of partially resolved nova shells select for large inclinations, the systematic error in the resulting distance scale could easily be 20 to 30 percent. Extinction by dust in the bulge of M31 may broaden and shift the intrinsic distribution of maximum nova magnitudes versus decay rates. We investigated this possibility by projecting Arp's and Rosino's novae onto a composite B - 6200 Å color map of M31's bulge. Thirty-two of the 86 novae projected onto a smooth background with no underlying structure due to the presence of a dust cloud along the line of sight. The distribution of maximum magnitudes versus fade rates for these unreddened novae is indistinguishable from the distribution for the entire set of novae. It is concluded that novae suffer very little extinction from the filamentary and patchy distribution of dust seen in the bulge of M31. Time-averaged B and H-alpha nova luminosity functions are potentially powerful new ways to use novae as standard candles. Modern CCD observations and the photographic light curves of M31 novae found during the last 60 years were analyzed to show that these functions are power laws. Consequently, unless the eruption times for novae are known, the data cannot be used to obtain distances.
NASA Technical Reports Server (NTRS)
Jutte, Christine V.; Ko, William L.; Stephens, Craig A.; Bakalyar, John A.; Richards, W. Lance
2011-01-01
A ground loads test of a full-scale wing (175-ft span) was conducted using a fiber optic strain-sensing system to obtain distributed surface strain data. These data were input into previously developed deformed shape equations to calculate the wing's bending and twist deformation. A photogrammetry system measured actual shape deformation. The wing deflections reached 100 percent of the positive design limit load (equivalent to 3 g) and 97 percent of the negative design limit load (equivalent to -1 g). The calculated wing bending results were in excellent agreement with the actual bending; tip deflections were within +/- 2.7 in. (out of 155-in. max deflection) for 91 percent of the load steps. Experimental testing revealed valuable opportunities for improving the deformed shape equations' robustness to real-world (not perfect) strain data, which previous analytical testing did not detect. These improvements, which include filtering methods developed in this work, minimize errors due to numerical anomalies discovered in the remaining 9 percent of the load steps. As a result, all load steps attained +/- 2.7 in. accuracy. Wing twist results were very sensitive to errors in bending and require further development. A sensitivity analysis and recommendations for fiber implementation practices, along with effective filtering methods, are included.
Research on Standard Errors of Equating Differences. Research Report. ETS RR-10-25
ERIC Educational Resources Information Center
Moses, Tim; Zhang, Wenmin
2010-01-01
In this paper, the "standard error of equating difference" (SEED) is described in terms of originally proposed kernel equating functions (von Davier, Holland, & Thayer, 2004) and extended to incorporate traditional linear and equipercentile functions. These derivations expand on prior developments of SEEDs and standard errors of equating and…
ERIC Educational Resources Information Center
Guo, Ling-Yu; Schneider, Phyllis
2016-01-01
Purpose: To determine the diagnostic accuracy of the finite verb morphology composite (FVMC), number of errors per C-unit (Errors/CU), and percent grammatical C-units (PGCUs) in differentiating school-aged children with language impairment (LI) and those with typical language development (TL). Method: Participants were 61 six-year-olds (50 TL, 11…
Patient safety education at Japanese medical schools: results of a nationwide survey
2012-01-01
Background Patient safety education, including error prevention strategies and management of adverse events, has become a topic of worldwide concern. The importance of patient safety is also recognized in Japan following two serious medical accidents in 1999. Furthermore, curriculum guideline revisions in 2008 by the relevant Ministry of Education include patient safety as part of the core medical curriculum. However, little is known about patient safety education in Japanese medical schools, partly because a comprehensive study has not yet been conducted in this field. Therefore, we conducted a nationwide survey in order to clarify the current status of patient safety education at medical schools in Japan. Results The response rate was 60.0% (n = 48/80). Ninety-eight percent of respondents (n = 47/48) reported integration of patient safety education into their curricula. Thirty-nine percent reported devoting less than five hours to the topic. All schools that teach patient safety reported use of lecture-based teaching methods, while few used alternative methods, such as role-playing or in-hospital training. Topics related to medical error theory and legal ramifications of error are widely taught, while practical topics related to error analysis, such as root cause analysis, are less often covered. Conclusions Based on responses to our survey, most Japanese medical schools have incorporated the topic of patient safety into their curricula. However, the number of hours devoted to patient safety education is far from sufficient, with forty percent of medical schools devoting five hours or less to it. In addition, most medical schools employ only lecture-based learning, lacking diversity in teaching methods. Although most medical schools cover basic error theory, error analysis is taught at fewer schools. We still need to make improvements to our medical safety curricula. We believe that this study has implications for the rest of the world as a model of what is possible and a sounding board for what topics might be important. PMID:22574712
Code of Federal Regulations, 2010 CFR
2010-07-01
... from a designated facility is 400 micrograms per dry standard cubic meter, corrected to 7 percent... discharged to the atmosphere from a designated facility is 27 milligrams per dry standard cubic meter... standard cubic meter, corrected to 7 percent oxygen. (ii) [Reserved] (iii) The emission limit for opacity...
Suomi-NPP VIIRS Solar Diffuser Stability Monitor Performance
NASA Technical Reports Server (NTRS)
Fulbright, Jon; Lei, Ning; Efremova, Boryana; Xiong, Xiaoxiong
2015-01-01
When illuminated by the Sun, the onboard solar diffuser (SD) panel provides a known spectral radiance source to calibrate the reflective solar bands of the Visible Infrared Imaging Radiometer Suite on the Suomi-NPP satellite. The SD bidirectional reflectance distribution function (BRDF) degrades over time due to solar exposure, and this degradation is measured using the SD stability monitor (SDSM). The SDSM acts as a ratioing radiometer, comparing solar irradiance measurements off the SD panel to those from a direct Sun view. We discuss the design and operations of the SDSM, the SDSM data analysis, including improvements incorporated since launch, and present the results through 1000 days after launch. After 1000 days, the band-dependent H-factors, a quantity describing the relative degradation of the BRDF of the SD panel since launch, range from 0.716 at 412 nanometers to 0.989 at 926 nanometers. The random uncertainty of these H-factors is about 0.1 percent, which is confirmed by the similar standard deviation values computed from the residuals of quadratic exponential fits to the H-factor time trends. The SDSM detector gains have temperature sensitivity of up to about 0.36 percent per kelvin, but this does not affect the derived H-factors. An initial error in the solar vector caused a seasonal bias to the H-factors of up to 0.5 percent. The total exposure of the SD panel to UV light after 1000 orbits is equivalent to about 100 hours of direct sunlight illumination perpendicular to the SD panel surface.
Sea ice classification using fast learning neural networks
NASA Technical Reports Server (NTRS)
Dawson, M. S.; Fung, A. K.; Manry, M. T.
1992-01-01
A fast learning neural network approach to the classification of sea ice is presented. The fast learning (FL) neural network and a multilayer perceptron (MLP) trained with backpropagation learning (BP network) were tested on simulated data sets based on the known dominant scattering characteristics of the target class. Four classes were used in the data simulation: open water, thick lossy saline ice, thin saline ice, and multiyear ice. The BP network was unable to consistently converge to less than 25 percent error, while the FL method yielded an average error of approximately 1 percent on the first iteration of training. The fast learning method presented can significantly reduce the CPU time necessary to train a neural network as well as consistently yield higher classification accuracy than BP networks.
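The fast-learning algorithm itself is not a standard library component, so the sketch below only illustrates the baseline side of the comparison: an ordinary backpropagation-trained MLP classifying simulated four-class backscatter data. The feature values and class means are assumptions, not the scattering model used in the paper.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(4)

# Simulated two-feature backscatter data for four classes (open water, thick
# lossy saline ice, thin saline ice, multiyear ice); class means are assumptions.
means = np.array([[-22, -20], [-14, -12], [-18, -15], [-8, -10]])
X = np.vstack([m + rng.normal(0, 1.5, (250, 2)) for m in means])
y = np.repeat(np.arange(4), 250)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

# Standard backpropagation-trained multilayer perceptron; the fast-learning
# algorithm of the paper would replace this training step.
clf = MLPClassifier(hidden_layer_sizes=(10,), max_iter=2000, random_state=0)
clf.fit(X_tr, y_tr)
print(f"classification error = {100 * (1 - clf.score(X_te, y_te)):.1f}%")
```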
Multiphase computer-generated holograms for full-color image generation
NASA Astrophysics Data System (ADS)
Choi, Kyong S.; Choi, Byong S.; Choi, Yoon S.; Kim, Sun I.; Kim, Jong Man; Kim, Nam; Gil, Sang K.
2002-06-01
Multi-phase and binary-phase computer-generated holograms were designed and demonstrated for full-color image generation. To optimize the phase profile of the hologram that produces each color image, we employed a simulated annealing method. The designed binary-phase hologram had a diffraction efficiency of 33.23 percent and a reconstruction error of 0.367 x 10^-2, and the eight-phase hologram had a diffraction efficiency of 67.92 percent and a reconstruction error of 0.273 x 10^-2. The designed binary-phase hologram was fabricated by a microphotolithographic technique with a minimum pixel width of 5 micrometers, and it was reconstructed using two Ar-ion lasers and a He-Ne laser. In addition, the color dispersion characteristic of the fabricated grating and the scaling problem of the reconstructed image are discussed.
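A generic version of the simulated-annealing design loop for a binary-phase hologram is sketched below: flip one phase pixel at a time, accept uphill moves with a Boltzmann probability, and score the far-field reconstruction against a target pattern. The target pattern, grid size, cost function, and cooling schedule are all assumptions for illustration; the paper's exact parameters are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 32

# Hypothetical target: a bright off-axis square in the reconstruction plane.
target = np.zeros((N, N)); target[8:16, 20:28] = 1.0
target /= target.sum()

def recon(phase):
    field = np.fft.fft2(np.exp(1j * phase)) / N
    return np.abs(field) ** 2

def error(I):
    I_n = I / I.sum()
    return np.sum((I_n - target) ** 2)

# Random binary phase (0 or pi) start, then simulated annealing by flipping
# single pixels and accepting worse states with Boltzmann probability.
phase = np.pi * rng.integers(0, 2, (N, N)).astype(float)
E = error(recon(phase))
T = 1e-4
for step in range(20000):
    i, j = rng.integers(0, N, 2)
    phase[i, j] = np.pi - phase[i, j]            # flip 0 <-> pi
    E_new = error(recon(phase))
    if E_new < E or rng.random() < np.exp(-(E_new - E) / T):
        E = E_new
    else:
        phase[i, j] = np.pi - phase[i, j]        # undo the flip
    T *= 0.9997                                   # cooling schedule

I = recon(phase)
efficiency = I[target > 0].sum() / I.sum()        # energy landing in the signal window
print(f"diffraction efficiency = {100 * efficiency:.1f}%, error = {E:.2e}")
```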
Beck, James D; Youngblood, Marston; Atkinson, Jane C; Mauriello, Sally; Kaste, Linda M; Badner, Victor M; Beaver, Shirley; Becerra, Karen; Singer, Richard
2014-06-01
The Hispanic and Latino population is projected to increase from 16.7 percent to 30.0 percent by 2050. Previous U.S. national surveys had minimal representation of Hispanic and Latino participants other than Mexicans, despite evidence suggesting that Hispanic or Latino country of origin and degree of acculturation influence health outcomes in this population. In this article, the authors describe the prevalence and mean number of cavitated, decayed and filled surfaces, missing teeth and edentulism among Hispanics and Latinos of different national origins. Investigators in the Hispanic Community Health Study/Study of Latinos (HCHS/SOL)-a multicenter epidemiologic study funded by the National Heart, Lung, and Blood Institute with funds transferred from six other institutes, including the National Institute of Dental and Craniofacial Research-conducted in-person examinations and interviews with more than 16,000 participants aged 18 to 74 years in four U.S. cities between March 2008 and June 2011. The investigators identified missing, filled and decayed teeth according to a modified version of methods used in the National Health and Nutrition Examination Survey. The authors computed prevalence estimates (weighted percentages), weighted means and standard errors for measures. The prevalence of decayed surfaces ranged from 20.2 percent to 35.5 percent, depending on Hispanic or Latino background, whereas the prevalence of decayed and filled surfaces ranged from 82.7 percent to 87.0 percent, indicating substantial amounts of dental treatment. The prevalence of missing teeth ranged from 49.8 percent to 63.8 percent and differed according to Hispanic or Latino background. Significant differences in the mean number of decayed surfaces, decayed or filled surfaces and missing teeth according to Hispanic and Latino background existed within each of the age groups and between women and men. Oral health status differs according to Hispanic or Latino background, even with adjustment for age, sex and other characteristics. These data indicate that Hispanics and Latinos in the United States receive restorative dental treatment and that practitioners should consider the association between Hispanic or Latino origin and oral health status. This could mean that dental practices in areas dominated by patients from a single Hispanic or Latino background can anticipate a practice based on a specific pattern of treatment needs.
The Calibration of Gloss Reference Standards
NASA Astrophysics Data System (ADS)
Budde, W.
1980-04-01
In present international and national standards for the measurement of specular gloss, the primary and secondary reference standards are defined for monochromatic radiation. However, the glossmeter specified uses polychromatic radiation (CIE Standard Illuminant C) and the CIE Standard Photometric Observer. This produces errors in practical gloss measurements of up to 0.5%. Although this may be considered small compared to the accuracy of most practical gloss measurements, such an error should not be tolerated in the calibration of secondary standards. Corrections for such errors are presented, and various alternatives for amendments of the existing documentary standards are discussed.
Lin, Hsin-Hon; Peng, Shin-Lei; Wu, Jay; Shih, Tian-Yu; Chuang, Keh-Shih; Shih, Cheng-Ting
2017-05-01
Osteoporosis is a disease characterized by a degradation of bone structures. Various methods have been developed to diagnose osteoporosis by measuring bone mineral density (BMD) of patients. However, BMDs from these methods were not equivalent and were incomparable. In addition, partial volume effect introduces errors in estimating bone volume from computed tomography (CT) images using image segmentation. In this study, a two-compartment model (TCM) was proposed to calculate bone volume fraction (BV/TV) and BMD from CT images. The TCM considers bones to be composed of two sub-materials. Various equivalent BV/TV and BMD can be calculated by applying corresponding sub-material pairs in the TCM. In contrast to image segmentation, the TCM prevented the influence of the partial volume effect by calculating the volume percentage of sub-material in each image voxel. Validations of the TCM were performed using bone-equivalent uniform phantoms, a 3D-printed trabecular-structural phantom, a temporal bone flap, and abdominal CT images. By using the TCM, the calculated BV/TVs of the uniform phantoms were within percent errors of ±2%; the percent errors of the structural volumes with various CT slice thickness were below 9%; the volume of the temporal bone flap was close to that from micro-CT images with a percent error of 4.1%. No significant difference (p >0.01) was found between the areal BMD of lumbar vertebrae calculated using the TCM and measured using dual-energy X-ray absorptiometry. In conclusion, the proposed TCM could be applied to diagnose osteoporosis, while providing a basis for comparing various measurement methods.
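The two-compartment idea described above reduces, for each voxel, to a linear mixture of two sub-material CT numbers, which sidesteps the all-or-nothing assignment that causes partial-volume errors in segmentation. The sketch below is a minimal linear-mixture version; the HU values of the sub-materials and the bone-compartment density are assumptions, not values from the paper.

```python
import numpy as np

# Hypothetical CT numbers (HU) of the two sub-materials; actual values depend
# on the scanner calibration and the material pair chosen.
HU_BONE, HU_MARROW = 1200.0, 50.0
RHO_BONE = 1.2   # assumed density of the bone compartment, g/cm^3

def two_compartment(voxels_hu):
    """Per-voxel bone volume fraction from a linear two-material mixture."""
    frac = (voxels_hu - HU_MARROW) / (HU_BONE - HU_MARROW)
    frac = np.clip(frac, 0.0, 1.0)        # partial-volume voxels fall in (0, 1)
    bv_tv = frac.mean()                   # bone volume / total volume
    vbmd = RHO_BONE * bv_tv               # equivalent volumetric BMD, g/cm^3
    return bv_tv, vbmd

region = np.array([900.0, 300.0, 75.0, 1150.0, 640.0, 120.0])  # hypothetical ROI
bv_tv, vbmd = two_compartment(region)
print(f"BV/TV = {100 * bv_tv:.1f}%, equivalent vBMD = {vbmd:.2f} g/cm^3")
```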
Simplified Approach Charts Improve Data Retrieval Performance
Stewart, Michael; Laraway, Sean; Jordan, Kevin; Feary, Michael S.
2016-01-01
The effectiveness of different instrument approach charts to deliver minimum visibility and altitude information during airport equipment outages was investigated. Eighteen pilots flew simulated instrument approaches in three conditions: (a) normal operations using a standard approach chart (standard-normal), (b) equipment outage conditions using a standard approach chart (standard-outage), and (c) equipment outage conditions using a prototype decluttered approach chart (prototype-outage). Errors and retrieval times in identifying minimum altitudes and visibilities were measured. The standard-outage condition produced significantly more errors and longer retrieval times versus the standard-normal condition. The prototype-outage condition had significantly fewer errors and shorter retrieval times than did the standard-outage condition. The prototype-outage condition produced significantly fewer errors but similar retrieval times when compared with the standard-normal condition. Thus, changing the presentation of minima may reduce risk and increase safety in instrument approaches, specifically with airport equipment outages. PMID:28491009
Benchmark data on the separability among crops in the southern San Joaquin Valley of California
NASA Technical Reports Server (NTRS)
Morse, A.; Card, D. H.
1984-01-01
Landsat MSS data were input to a discriminant analysis of 21 crops on each of eight dates in 1979 using a total of 4,142 fields in southern Fresno County, California. The 21 crops, which together account for over 70 percent of the agricultural acreage in the southern San Joaquin Valley, were analyzed to quantify the spectral separability, defined as omission error, between all pairs of crops. On each date the fields were segregated into six groups based on the mean value of the MSS7/MSS5 ratio, which is correlated with green biomass. Discriminant analysis was run on each group on each date. The resulting contingency tables offer information that can be profitably used in conjunction with crop calendars to pick the best dates for a classification. The tables show expected percent correct classification and error rates for all the crops. The patterns in the contingency tables show that the percent correct classification for crops generally increases with the amount of greenness in the fields being classified. However, there are exceptions to this general rule, notably grain.
A five-year experience with throat cultures.
Shank, J C; Powell, T A
1984-06-01
This study addresses the usefulness of the throat culture in a family practice residency setting and explores the following questions: (1) Do faculty physicians clinically identify streptococcal pharyngitis better than residents? (2) With time, will residents and faculty physicians improve in their diagnostic accuracy? (3) Should the throat culture be used always, selectively, or never? A total of 3,982 throat cultures were obtained over a five-year study period with 16 percent positive for beta-hemolytic streptococci. The results were compared with the physician's clinical diagnosis of either "nonstreptococcal" (category A) or "streptococcal" (category B). Within category A, 363 of 3,023 patients had positive cultures (12 percent clinical diagnostic error rate). Within category B, 665 of 959 patients had negative cultures (69 percent clinical diagnostic error rate). Faculty were significantly better than residents in diagnosing streptococcal pharyngitis, but not in diagnosing nonstreptococcal sore throats. Neither faculty nor residents improved their diagnostic accuracy over time. Regarding age-specific recommendations, the findings support utilizing a throat culture in all children aged 2 to 15 years with sore throat, but in adults only when the physician suspects streptococcal pharyngitis.
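The two clinical error rates quoted above follow directly from the counts given; the short check below reproduces the arithmetic.

```python
# Worked arithmetic for the clinical error rates quoted above.
cat_a_total, cat_a_positive = 3023, 363     # diagnosed "nonstreptococcal", culture positive
cat_b_total, cat_b_negative = 959, 665      # diagnosed "streptococcal", culture negative

print(f"category A error rate = {100 * cat_a_positive / cat_a_total:.0f}%")   # ~12%
print(f"category B error rate = {100 * cat_b_negative / cat_b_total:.0f}%")   # ~69%
```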
Magnitude and frequency of floods in Washington
Cummans, J.E.; Collings, Michael R.; Nasser, Edmund George
1975-01-01
Relations are provided to estimate the magnitude and frequency of floods on Washington streams. Annual-peak-flow data from stream-gaging stations on unregulated streams having 10 years or more of record were used to determine a log-Pearson Type III frequency curve for each station. Flood magnitudes having recurrence intervals of 2, 5, 10, 25, 50, and 100 years were then related to physical and climatic indices of the drainage basins by multiple-regression analysis using the Biomedical Computer Program BMDO2R. These regression relations are useful for estimating flood magnitudes of the specified recurrence intervals at ungaged or short-record sites. Separate sets of regression equations were defined for western and eastern parts of the State, and the State was further subdivided into 12 regions in which the annual floods exhibit similar flood characteristics. Peak flows are related most significantly in western Washington to drainage-area size and mean annual precipitation. In eastern Washington, they are related most significantly to drainage-area size, mean annual precipitation, and percentage of forest cover. Standard errors of estimate of the estimating relations range from 25 to 129 percent, and the smallest errors are generally associated with the more humid regions.
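The station-level step described above, fitting a log-Pearson Type III frequency curve to annual peaks, can be sketched with a method-of-moments fit. The peak flows below are synthetic, and a production analysis would also apply skew weighting and outlier adjustments that are omitted here.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(6)

# Hypothetical annual peak flows (cfs) for one gaging station.
peaks = rng.lognormal(mean=7.5, sigma=0.6, size=40)
logq = np.log10(peaks)

# Method-of-moments fit of a Pearson Type III distribution to the log peaks.
skew = stats.skew(logq, bias=False)
mean, std = logq.mean(), logq.std(ddof=1)

for T in (2, 5, 10, 25, 50, 100):
    q = 10 ** stats.pearson3.ppf(1 - 1 / T, skew, loc=mean, scale=std)
    print(f"{T:>3}-year flood ≈ {q:,.0f} cfs")
```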
Performance of the NASA Digitizing Core-Loss Instrumentation
NASA Technical Reports Server (NTRS)
Schwarze, Gene E. (Technical Monitor); Niedra, Janis M.
2003-01-01
The standard method of magnetic core loss measurement was implemented on a high frequency digitizing oscilloscope in order to explore the limits to accuracy when characterizing high-Q cores at frequencies up to 1 MHz. This method computes core loss from the cycle mean of the product of the exciting current in a primary winding and the induced voltage in a separate flux-sensing winding. It is pointed out that just 20 percent accuracy for a Q of 100 core material requires a phase angle accuracy of 0.1 degree between the voltage and current measurements. Experiment shows that at 1 MHz, even high quality, high frequency current sensing transformers can introduce phase errors of a degree or more. Because the Q of some quasilinear core materials can exceed 300 at frequencies below 100 kHz, phase angle errors can be a problem even at 50 kHz. Hence great care is necessary with current sensing and ground loops when measuring high-Q cores. Best high frequency current sensing accuracy was obtained from a fabricated 0.1-ohm coaxial resistor, differentially sensed. Sample high frequency core loss data taken with the setup for a permeability-14 MPP core are presented.
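The phase-sensitivity claim above can be checked numerically: for sinusoidal excitation the measured loss is proportional to cos(theta), where theta is the phase between voltage and current, and for a high-Q core theta sits close to 90 degrees, so a small phase error produces a large relative loss error (roughly Q times the error in radians). The values below are a worked check of that relation, not data from the report.

```python
import numpy as np

Q = 100.0                       # core quality factor
delta = np.arctan(1.0 / Q)      # loss angle; tan(delta) = 1/Q
theta = np.pi / 2 - delta       # true phase between voltage and current

for err_deg in (0.05, 0.1, 0.5):
    err = np.radians(err_deg)
    # Measured loss is proportional to cos(theta - phase error).
    rel_error = (np.cos(theta - err) - np.cos(theta)) / np.cos(theta)
    print(f"phase error {err_deg:4.2f} deg -> core-loss error {100 * rel_error:6.1f}%")
# A 0.1-degree phase error on a Q = 100 core shifts the computed loss by roughly 17 percent.
```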
Gotvald, Anthony J.; Barth, Nancy A.; Veilleux, Andrea G.; Parrett, Charles
2012-01-01
Methods for estimating the magnitude and frequency of floods in California that are not substantially affected by regulation or diversions have been updated. Annual peak-flow data through water year 2006 were analyzed for 771 streamflow-gaging stations (streamgages) in California having 10 or more years of data. Flood-frequency estimates were computed for the streamgages by using the expected moments algorithm to fit a Pearson Type III distribution to logarithms of annual peak flows for each streamgage. Low-outlier and historic information were incorporated into the flood-frequency analysis, and a generalized Grubbs-Beck test was used to detect multiple potentially influential low outliers. Special methods for fitting the distribution were developed for streamgages in the desert region in southeastern California. Additionally, basin characteristics for the streamgages were computed by using a geographical information system. Regional regression analysis, using generalized least squares regression, was used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins in California that are outside of the southeastern desert region. Flood-frequency estimates and basin characteristics for 630 streamgages were combined to form the final database used in the regional regression analysis. Five hydrologic regions were developed for the area of California outside of the desert region. The final regional regression equations are functions of drainage area and mean annual precipitation for four of the five regions. In one region, the Sierra Nevada region, the final equations are functions of drainage area, mean basin elevation, and mean annual precipitation. Average standard errors of prediction for the regression equations in all five regions range from 42.7 to 161.9 percent. For the desert region of California, an analysis of 33 streamgages was used to develop regional estimates of all three parameters (mean, standard deviation, and skew) of the log-Pearson Type III distribution. The regional estimates were then used to develop a set of equations for estimating flows with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent annual exceedance probabilities for ungaged basins. The final regional regression equations are functions of drainage area. Average standard errors of prediction for these regression equations range from 214.2 to 856.2 percent. Annual peak-flow data through water year 2006 were analyzed for eight streamgages in California having 10 or more years of data considered to be affected by urbanization. Flood-frequency estimates were computed for the urban streamgages by fitting a Pearson Type III distribution to logarithms of annual peak flows for each streamgage. Regression analysis could not be used to develop flood-frequency estimation equations for urban streams because of the limited number of sites. Flood-frequency estimates for the eight urban sites were graphically compared to flood-frequency estimates for 630 non-urban sites. The regression equations developed from this study will be incorporated into the U.S. Geological Survey (USGS) StreamStats program. The StreamStats program is a Web-based application that provides streamflow statistics and basin characteristics for USGS streamgages and ungaged sites of interest. 
StreamStats can also compute basin characteristics and provide estimates of streamflow statistics for ungaged sites when users select the location of a site along any stream in California.
ERIC Educational Resources Information Center
Wang, Tianyou
2009-01-01
Holland and colleagues derived a formula for analytical standard error of equating using the delta-method for the kernel equating method. Extending their derivation, this article derives an analytical standard error of equating procedure for the conventional percentile rank-based equipercentile equating with log-linear smoothing. This procedure is…
NASA Technical Reports Server (NTRS)
Parsons, John F
1936-01-01
Surveys of the air flow over the upper surface of four different airfoils were made in the full-scale wind tunnel to determine a satisfactory location for a fixed Pitot-static tube on a low-wing monoplane. The selection was based on small interference errors, less than 5 percent, and on a consideration of structural and ground handling problems. The most satisfactory location on the airfoils without flaps that were investigated was 10 percent of the chord aft and 25 percent of the chord above the trailing edge of a section approximately 40 percent of the semispan inboard of the wing tip. No satisfactory location was found near the wing when the flaps were deflected.
NASA Technical Reports Server (NTRS)
Chen, D. W.; Sengupta, S. K.; Welch, R. M.
1989-01-01
This paper compares the results of cloud-field classification derived from two simplified vector approaches, the Sum and Difference Histogram (SADH) and the Gray Level Difference Vector (GLDV), with the results produced by the Gray Level Cooccurrence Matrix (GLCM) approach described by Welch et al. (1988). It is shown that the SADH method produces accuracies equivalent to those obtained using the GLCM method, while the GLDV method fails to resolve error clusters. Compared to the GLCM method, the SADH method leads to a 31 percent saving in run time and a 50 percent saving in storage requirements, while the GLDV approach leads to a 40 percent saving in run time and an 87 percent saving in storage requirements.
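The storage advantage quoted above comes from replacing the full 2-D cooccurrence matrix with two 1-D histograms of pixel sums and differences. The sketch below computes a sum and difference histogram for a one-pixel horizontal displacement and a few Unser-style texture features from it; the image patch and the feature subset are arbitrary choices, not the paper's configuration.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical 8-bit image patch; a one-pixel horizontal displacement is used.
img = rng.integers(0, 256, (64, 64))
a, b = img[:, :-1], img[:, 1:]

s = (a + b).ravel()            # pixel sums, 0..510
d = (a - b).ravel()            # pixel differences, -255..255

# Normalized sum and difference histograms (the SADH feature basis).
ps = np.bincount(s, minlength=511) / s.size
pd = np.bincount(d + 255, minlength=511) / d.size

# A few texture features derived from the two 1-D histograms.
mean = np.sum(np.arange(511) * ps) / 2
contrast = np.sum((np.arange(511) - 255) ** 2 * pd)
entropy = -(np.sum(ps[ps > 0] * np.log(ps[ps > 0]))
            + np.sum(pd[pd > 0] * np.log(pd[pd > 0])))
print(f"SADH mean={mean:.1f}, contrast={contrast:.1f}, entropy={entropy:.2f}")
```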
Validation of SenseWear Armband in children, adolescents, and adults.
Lopez, G A; Brønd, J C; Andersen, L B; Dencker, M; Arvidsson, D
2018-02-01
SenseWear Armband (SW) is a multisensor monitor to assess physical activity and energy expenditure. Its prediction algorithms have been updated periodically. The aim was to validate SW in children, adolescents, and adults. The most recent SW algorithm 5.2 (SW5.2) and the previous version 2.2 (SW2.2) were evaluated for estimation of energy expenditure during semi-structured activities in 35 children, 31 adolescents, and 36 adults with indirect calorimetry as reference. Energy expenditure estimated from waist-worn ActiGraph GT3X+ data (AG) was used for comparison. Improvements in measurement errors were demonstrated with SW5.2 compared to SW2.2, especially in children and for biking. The overall mean absolute percent error with SW5.2 was 24% in children, 23% in adolescents, and 20% in adults. The error was larger for sitting and standing (23%-32%) and for basketball and biking (19%-35%), compared to walking and running (8%-20%). The overall mean absolute error with AG was 28% in children, 22% in adolescents, and 28% in adults. The absolute percent error for biking was 32%-74% with AG. In general, SW and AG underestimated energy expenditure. However, both methods demonstrated a proportional bias, with increasing underestimation for increasing energy expenditure level, in addition to the large individual error. SW provides measures of energy expenditure level with similar accuracy in children, adolescents, and adults with the improvements in the updated algorithms. Although SW captures biking better than AG, these methods share remaining measurements errors requiring further improvements for accurate measures of physical activity and energy expenditure in clinical and epidemiological research. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Comparison between Brewer spectrometer, M 124 filter ozonometer and Dobson spectrophotometer
NASA Technical Reports Server (NTRS)
Feister, U.
1994-01-01
Concurrent measurements were taken using the Brewer spectrometer no. 30, the filter ozonometer M 124 no. 200 and the Dobson spectrophotometer no. 71 from September 1987 to December 1988 at Potsdam. The performance of the instrument types and the compatibility of ozone data was checked under the conditions of a field measuring station. Total ozone values derived from Dobson AD direct sun measurements were considered as standard. The Dobson instrument had been calibrated at intercomparisons with the World Standard Dobson instrument no. 83 (Boulder) and with the Regional Standard instrument no. 64 (Potsdam), while the Brewer instrument was calibrated several times with the Travelling Standard Brewer no. 17 (Canada). The differences between individual Brewer DS (direct sun) ozone data and Dobson ADDS are within plus or minus 3 percent with half of all differences within plus or minus 1 percent. Less than 0.7 percent of the systematic difference can be due to atmospheric SO2. Due to inadequate regression coefficients Brewer ZB (zenith blue) ozone measurements are by (3...4) percent higher than Dobson ADDS ozone values. M124 DS ozone data are systematically by (1...2) percent higher than Dobson ADDS ozone with 50 percent of the differences within plus or minus 4 percent, but with extreme differences up to plus or minus (20...25) percent. M124 ZB ozone values are by (3...5) percent higher than Dobson ADDS with all the differences within plus or minus 10 percent, i.e. the scatter of differences is smaller for ZB than for M 124 DS measurements, Results for differences in the daily mean ozone values are also addressed. The differences include the uncertainties in the ozone values derived from both types of measurements. They provide an indication of the uncertainty in ozone data and the comparability of ozone values derived from different types of instruments.
Duncker, James J.; Melching, Charles S.
1998-01-01
Rainfall and streamflow data collected from July 1986 through September 1993 were utilized to calibrate and verify a continuous-simulation rainfall-runoff model for three watersheds (11.8--18.0 square miles in area) in Du Page County. Classification of land cover into three categories of pervious (grassland, forest/wetland, and agricultural land) and one category of impervious subareas was sufficient to accurately simulate the rainfall-runoff relations for the three watersheds. Regional parameter sets were obtained by calibrating jointly all parameters except fraction of ground-water inflow that goes to inactive ground water (DEEPFR), interflow recession constant (IRC), and infiltration (INFILT) for runoff from all three watersheds. DEEPFR and IRC varied among the watersheds because of physical differences among the watersheds. Two values of INFILT were obtained: one representing the rainfall-runoff process on the silty and clayey soils on the uplands and lake plains that characterize Sawmill Creek, St. Joseph Creek, and eastern Du Page County; and one representing the rainfall-runoff process on the silty soils on uplands that characterize Kress Creek and parts of western Du Page County. Regional rainfall-runoff relations, defined through joint calibration of the rainfall-runoff model and verified for independent periods, presented in this report, allow estimation of runoff for watersheds in Du Page County with an error in the total water balance less than 4.0 percent; an average absolute error in the annual-flow estimates of 17.1 percent with the error rarely exceeding 25 percent for annual flows; and correlation coefficients and coefficients of model-fit efficiency for monthly flows of at least 87 and 76 percent, respectively. Close reproduction of the runoff-volume duration curves was obtained. A frequency analysis of storm-runoff volume indicates a tendency of the model to undersimulate large storms, which may result from underestimation of the amount of impervious land cover in the watershed and errors in measuring rainfall for convective storms. Overall, the results of regional calibration and verification of the rainfall-runoff model indicate the simulated rainfall-runoff relations are adequate for stormwater-management planning and design for watersheds in Du Page County.
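The calibration statistics quoted above (percent error in the total water balance and a coefficient of model-fit efficiency) can be computed from paired observed and simulated series. The flows in this sketch are synthetic, and the efficiency is computed in the Nash-Sutcliffe form, which is assumed to be what the report means by "coefficient of model-fit efficiency."

```python
import numpy as np

rng = np.random.default_rng(8)

# Hypothetical observed and simulated monthly flows (cfs) for one watershed.
obs = rng.lognormal(3.5, 0.7, 84)
sim = obs * (1 + rng.normal(0, 0.15, 84))

# Percent error in the total water balance over the calibration period.
volume_error = 100 * (sim.sum() - obs.sum()) / obs.sum()

# Coefficient of model-fit efficiency (Nash-Sutcliffe form).
efficiency = 1 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

print(f"water-balance error = {volume_error:.1f}%, efficiency = {efficiency:.2f}")
```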
Soulard, Christopher E.; Acevedo, William; Stehman, Stephen V.
2018-01-01
Quantifying change in urban land provides important information to create empirical models examining the effects of human land use. Maps of developed land from the National Land Cover Database (NLCD) of the conterminous United States include rural roads in the developed land class and therefore overestimate the amount of urban land. To better map the urban class and understand how urban lands change over time, we removed rural roads and small patches of rural development from the NLCD developed class and created four wall-to-wall maps (1992, 2001, 2006, and 2011) of urban land. Removing rural roads from the NLCD developed class involved a multi-step filtering process, data fusion using geospatial road and developed land data, and manual editing. Reference data classified as urban or not urban from a stratified random sample was used to assess the accuracy of the 2001 and 2006 urban and NLCD maps. The newly created urban maps had higher overall accuracy (98.7 percent) than the NLCD maps (96.2 percent). More importantly, the urban maps resulted in lower commission error of the urban class (23 percent versus 57 percent for the NLCD in 2006) with the trade-off of slightly inflated omission error (20 percent for the urban map, 16 percent for NLCD in 2006). The removal of approximately 230,000 km2 of rural roads from the NLCD developed class resulted in maps that better characterize the urban footprint. These urban maps are more suited to modeling applications and policy decisions that rely on quantitative and spatially explicit information regarding urban lands.
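The accuracy figures quoted above (overall accuracy, commission error, and omission error) are standard summaries of a two-class error matrix. The sketch below shows how they are typically derived from urban/not-urban agreement counts; the counts are hypothetical, and the study's stratified-estimation weights are omitted for simplicity.

```python
# Accuracy measures from a 2x2 error matrix (urban vs. not urban).
# The counts below are hypothetical examples, not values from the study.
tp = 77   # mapped urban, reference urban
fp = 23   # mapped urban, reference not urban (commission)
fn = 19   # mapped not urban, reference urban (omission)
tn = 881  # mapped not urban, reference not urban

overall_accuracy = 100.0 * (tp + tn) / (tp + fp + fn + tn)
commission_error = 100.0 * fp / (tp + fp)   # share of mapped urban that is not urban
omission_error   = 100.0 * fn / (tp + fn)   # share of reference urban missed by the map

print(f"overall accuracy: {overall_accuracy:.1f} percent")
print(f"commission error: {commission_error:.1f} percent")
print(f"omission error:   {omission_error:.1f} percent")
```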
Senior, Lisa A.; Koerkle, Edward H.
2003-01-01
The Christina River Basin drains 565 square miles (mi2) in Pennsylvania, Maryland, and Delaware. Water from the basin is used for recreation, drinking water supply, and to support aquatic life. The Christina River Basin includes the major subbasins of Brandywine Creek, White Clay Creek, and Red Clay Creek. The White Clay Creek is the second largest of the subbasins and drains an area of 108 mi2. Water quality in some parts of the Christina River Basin is impaired and does not support designated uses of the streams. A multi-agency water-quality management strategy included a modeling component to evaluate the effects of point and nonpoint-source contributions of nutrients and suspended sediment on stream water quality. To assist in nonpoint-source evaluation, four independent models, one for each of the three major subbasins and for the Christina River, were developed and calibrated using the model code Hydrological Simulation Program—Fortran (HSPF). Water-quality data for model calibration were collected in each of the four main subbasins and in smaller subbasins predominantly covered by one land use following a nonpoint-source monitoring plan. Under this plan, stormflow and base-flow samples were collected during 1998 at two sites in the White Clay Creek subbasin and at nine sites in the other subbasins. The HSPF model for the White Clay Creek Basin simulates streamflow, suspended sediment, and the nutrients nitrogen and phosphorus. In addition, the model simulates water temperature, dissolved oxygen, biochemical oxygen demand, and plankton as secondary objectives needed to support the sediment and nutrient simulations. For the model, the basin was subdivided into 17 reaches draining areas that ranged from 1.37 to 13 mi2. Ten different pervious land uses and two impervious land uses were selected for simulation. Land-use areas were determined from 1995 land-use data. The predominant land uses in the White Clay Creek Basin are agricultural, forested, residential, and urban. The hydrologic component of the model was run at an hourly time step and primarily calibrated using streamflow data from two U.S. Geological Survey (USGS) streamflow-measurement stations for the period of October 1, 1994, through October 29, 1998. Additional calibration was done using data from two other USGS streamflow-measurement stations with periods of record shorter than the calibration period. Daily precipitation data from two National Oceanic and Atmospheric Administration (NOAA) gages and hourly precipitation and other meteorological data for one NOAA gage were used for model input. The difference between simulated and observed streamflow volume ranged from -0.9 to 1.8 percent for the 4-year period at the two calibration sites with 4-year records. Annual differences between observed and simulated streamflow generally were greater than the overall error. For example, at a site near the bottom of the basin (drainage area of 89.1 mi2), annual differences between observed and simulated streamflow ranged from -5.8 to 14.4 percent and the overall error for the 4-year period was -0.9 percent. Calibration errors for 36 storm periods at the two calibration sites for total volume, low-flow recession rate, 50-percent lowest flows, 10-percent highest flows, and storm peaks were within the recommended criteria of 20 percent or less.
Much of the error in simulating storm events on an hourly time step can be attributed to uncertainty in the hourly rainfall data. The water-quality component of the model was calibrated using data collected by the USGS and state agencies at three USGS streamflow-measurement stations with variable water-quality monitoring periods ending October 1998. Because of availability, monitoring data for suspended-solids concentrations were used as surrogates for suspended-sediment concentrations, although suspended solids may underestimate suspended sediment and affect the apparent accuracy of the suspended-sediment simulation. Comparison of observed to simulated loads for up to five storms in 1998 at each of the two nonpoint-source monitoring sites in the White Clay Creek Basin indicates that simulation error is commonly as large as an order of magnitude for suspended sediment and nutrients. The simulation error tends to be smaller for dissolved nutrients than for particulate nutrients. Errors of 40 percent or less for monthly or annual values indicate a fair to good water-quality calibration according to recommended criteria, with much larger errors possible for individual events. The accuracy of the water-quality calibration under stormflow conditions is limited by the relatively small amount of water-quality data available for the White Clay Creek Basin. Users of the White Clay Creek HSPF model should be aware of model limitations and consider the following if the model is used for predictive purposes: streamflow and water quality for individual storm events may not be well simulated, but the model performance is reasonable when evaluated over longer periods of time; the observed flow-duration curve for the simulation period is similar to the long-term flow-duration curve at White Clay Creek near Newark, Del., indicating that the calibration period is representative of all but the highest 0.1 percent and lowest 0.1 percent of flows at that site; relative errors in streamflow and water-quality simulations are greater for smaller drainage areas than for larger areas; and the calibration for water quality was based on sparse data.
A Model for Hydrogen Thermal Conductivity and Viscosity Including the Critical Point
NASA Technical Reports Server (NTRS)
Wagner, Howard A.; Tunc, Gokturk; Bayazitoglu, Yildiz
2001-01-01
In order to conduct a thermal analysis of heat transfer to liquid hydrogen near the critical point, an accurate understanding of the thermal transport properties is required. A review of the available literature on hydrogen transport properties identified a lack of useful equations to predict the thermal conductivity and viscosity of liquid hydrogen. The tables published by the National Bureau of Standards (NBS) were used to perform a series of curve fits to generate the needed correlation equations. These equations give the thermal conductivity and viscosity of hydrogen below 100 K. They agree with the published NBS tables with less than a 1.5 percent error for temperatures below 100 K and pressures from the triple point to 1,000 kPa. These equations also capture the divergence in the thermal conductivity at the critical point.
Cost-effectiveness of the US Geological Survey stream-gaging program in Arkansas
Darling, M.E.; Lamb, T.E.
1984-01-01
This report documents the results of a study of the cost-effectiveness of the stream-gaging program in Arkansas. Data uses and funding sources were identified for the daily-discharge stations. All daily-discharge stations were found to be in one or more data-use categories, and none were candidates for alternate methods that would result in discontinuation or conversion to a partial-record station. The costs of operating the daily-discharge stations, along with routing costs to partial-record stations, crest gages, pollution-control stations, and seven recording ground-water stations, were evaluated in the Kalman-Filtering Cost-Effective Resource Allocation (K-CERA) analysis. Operation under current practices requires a budget of $292,150. The average standard error of estimate of streamflow record for the Arkansas District was determined to be 33 percent.
Airborne gamma radiation soil moisture measurements over short flight lines
NASA Technical Reports Server (NTRS)
Peck, Eugene L.; Carrol, Thomas R.; Lipinski, Daniel M.
1990-01-01
Results are presented on airborne gamma radiation measurements of soil moisture condition, carried out along short flight lines as part of the First International Satellite Land Surface Climatology Project Field Experiment (FIFE). Data were collected over an area in Kansas during the summers of 1987 and 1989. The airborne surveys, together with ground measurements, provide the most comprehensive set of airborne and ground truth data available in the U.S. for calibrating and evaluating airborne gamma flight lines. Analysis showed that, using standard National Weather Service weights for the K, Tl, and Gc radiation windows, the airborne soil moisture estimates for the FIFE lines had a root mean square error of no greater than 3.0 percent soil moisture. The soil moisture estimates for sections having acquisition time of at least 15 sec were found to be reliable.
Wetherbee, Gregory A.; Latysh, Natalie E.; Burke, Kevin P.
2005-01-01
Six external quality-assurance programs were operated by the U.S. Geological Survey (USGS) External Quality-Assurance (QA) Project for the National Atmospheric Deposition Program/National Trends Network (NADP/NTN) from 2002 through 2003. Each program measured specific components of the overall error inherent in NADP/NTN wet-deposition measurements. The intersite-comparison program assessed the variability and bias of pH and specific conductance determinations made by NADP/NTN site operators twice per year with respect to accuracy goals. The percentage of site operators that met the pH accuracy goals decreased from 92.0 percent in spring 2002 to 86.3 percent in spring 2003. In these same four intersite-comparison studies, the percentage of site operators that met the accuracy goals for specific conductance ranged from 94.4 to 97.5 percent. The blind-audit program and the sample-handling evaluation (SHE) program evaluated the effects of routine sample handling, processing, and shipping on the chemistry of weekly NADP/NTN samples. The blind-audit program data indicated that the variability introduced by sample handling might be environmentally significant to data users for sodium, potassium, chloride, and hydrogen ion concentrations during 2002. In 2003, the blind-audit program was modified and replaced by the SHE program. The SHE program was designed to control the effects of laboratory-analysis variability. The 2003 SHE data had less overall variability than the 2002 blind-audit data. The SHE data indicated that sample handling buffers the pH of the precipitation samples and, in turn, results in slightly lower conductivity. Otherwise, the SHE data provided error estimates that were not environmentally significant to data users. The field-audit program was designed to evaluate the effects of onsite exposure, sample handling, and shipping on the chemistry of NADP/NTN precipitation samples. Field-audit results indicated that exposure of NADP/NTN wet-deposition samples to onsite conditions tended to neutralize the acidity of the samples by less than 1.0 microequivalent per liter. Onsite exposure of the sampling bucket appeared to slightly increase the concentration of most of the analytes but not to an extent that was environmentally significant to NADP data users. An interlaboratory-comparison program was used to estimate the analytical variability and bias of the NADP Central Analytical Laboratory (CAL) during 2002-03. Bias was identified in the CAL data for calcium, magnesium, sodium, potassium, ammonium, chloride, nitrate, sulfate, hydrogen ion, and specific conductance, but the absolute value of the bias was less than analytical minimum detection limits for all constituents except magnesium, nitrate, sulfate, and specific conductance. Control charts showed that CAL results were within statistical control approximately 90 percent of the time. Data for the analysis of ultrapure deionized-water samples indicated that CAL did not have problems with laboratory contamination. During 2002-03, the overall variability of data from the NADP/NTN precipitation-monitoring system was estimated using data from three collocated monitoring sites. Measurement differences of constituent concentration and deposition for paired samples from the collocated samplers were evaluated to compute error terms. The medians of the absolute percentage errors (MAEs) for the paired samples generally were larger for cations (approximately 8 to 50 percent) than for anions (approximately 3 to 33 percent). 
MAEs were approximately 16 to 30 percent for hydrogen-ion concentration, less than 10 percent for specific conductance, less than 5 percent for sample volume, and less than 8 percent for precipitation depth. The variability attributed to each component of the sample-collection and analysis processes, as estimated by USGS quality-assurance programs, varied among analytes. Laboratory analysis variability accounted for approximately 2 percent of the
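The collocated-sampler comparison above summarizes variability with the median of the absolute percentage errors (MAE) for paired samples. The sketch below computes that statistic for one analyte under the assumption that differences are expressed relative to the primary-sampler value (the project's exact convention may differ); the concentrations shown are hypothetical.

```python
import numpy as np

def median_absolute_percent_error(primary, collocated):
    """Median of absolute percent differences between paired measurements.

    Differences are taken relative to the primary-sampler value; the project's
    exact convention (for example, relative to the pair mean) may differ.
    """
    p = np.asarray(primary, float)
    c = np.asarray(collocated, float)
    return np.median(np.abs(c - p) / p * 100.0)

# Hypothetical paired calcium concentrations (mg/L) from a collocated site.
primary    = [0.10, 0.22, 0.15, 0.08, 0.31, 0.12]
collocated = [0.12, 0.20, 0.17, 0.09, 0.28, 0.13]

print(f"MAE: {median_absolute_percent_error(primary, collocated):.0f} percent")
```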
NASA Technical Reports Server (NTRS)
Duda, David P.; Minnis, Patrick
2009-01-01
Straightforward application of the Schmidt-Appleman contrail formation criteria to diagnose persistent contrail occurrence from numerical weather prediction data is hindered by significant bias errors in the upper tropospheric humidity. Logistic models of contrail occurrence have been proposed to overcome this problem, but basic questions remain about how random measurement error may affect their accuracy. A set of 5000 synthetic contrail observations is created to study the effects of random error in these probabilistic models. The simulated observations are based on distributions of temperature, humidity, and vertical velocity derived from Advanced Regional Prediction System (ARPS) weather analyses. The logistic models created from the simulated observations were evaluated using two common statistical measures of model accuracy, the percent correct (PC) and the Hanssen-Kuipers discriminant (HKD). To convert the probabilistic results of the logistic models into a dichotomous yes/no choice suitable for the statistical measures, two critical probability thresholds are considered. The HKD scores are higher when the climatological frequency of contrail occurrence is used as the critical threshold, while the PC scores are higher when the critical probability threshold is 0.5. For both thresholds, typical random errors in temperature, relative humidity, and vertical velocity are found to be small enough to allow for accurate logistic models of contrail occurrence. The accuracy of the models developed from synthetic data is over 85 percent for both the prediction of contrail occurrence and non-occurrence, although in practice, larger errors would be anticipated.
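The percent correct (PC) and Hanssen-Kuipers discriminant (HKD) used above are computed from a two-by-two contingency table after the probabilistic forecasts are converted to yes/no decisions at a critical threshold. The sketch below illustrates both measures with made-up forecast probabilities and observations, using the 0.5 threshold mentioned in the abstract.

```python
import numpy as np

def dichotomize(probabilities, threshold):
    """Convert probabilistic contrail forecasts to yes/no at a critical threshold."""
    return np.asarray(probabilities) >= threshold

def percent_correct(forecast, observed):
    forecast, observed = np.asarray(forecast, bool), np.asarray(observed, bool)
    return 100.0 * np.mean(forecast == observed)

def hanssen_kuipers(forecast, observed):
    """HKD = hit rate minus false-alarm rate (also called the Peirce skill score)."""
    forecast, observed = np.asarray(forecast, bool), np.asarray(observed, bool)
    hits = np.sum(forecast & observed)
    misses = np.sum(~forecast & observed)
    false_alarms = np.sum(forecast & ~observed)
    correct_negatives = np.sum(~forecast & ~observed)
    pod = hits / (hits + misses)                              # probability of detection
    pofd = false_alarms / (false_alarms + correct_negatives)  # probability of false detection
    return pod - pofd

# Made-up model probabilities and observed persistent-contrail occurrence.
probs = [0.05, 0.40, 0.80, 0.10, 0.65, 0.90, 0.30, 0.20, 0.55, 0.70]
obs   = [False, False, True, False, True, True, False, False, True, True]

forecast = dichotomize(probs, threshold=0.5)
print(f"PC:  {percent_correct(forecast, obs):.0f} percent")
print(f"HKD: {hanssen_kuipers(forecast, obs):.2f}")
```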
A Note on Standard Deviation and Standard Error
ERIC Educational Resources Information Center
Hassani, Hossein; Ghodsi, Mansoureh; Howell, Gareth
2010-01-01
Many students confuse the standard deviation and standard error of the mean and are unsure which, if either, to use in presenting data. In this article, we endeavour to address these questions and cover some related ambiguities about these quantities.
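A minimal numeric illustration of the distinction discussed above: the standard deviation describes the spread of individual observations, while the standard error of the mean (SD divided by the square root of n) describes the precision of the sample mean. The data are made up.

```python
import numpy as np

scores = np.array([72.0, 85.0, 64.0, 90.0, 78.0, 69.0, 81.0, 75.0])  # made-up sample

n = scores.size
sd = scores.std(ddof=1)   # sample standard deviation: spread of individual values
sem = sd / np.sqrt(n)     # standard error of the mean: uncertainty in the average

print(f"mean = {scores.mean():.1f}, SD = {sd:.1f} (describes variability of observations)")
print(f"SEM  = {sem:.1f} (describes precision of the mean)")
```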
Methodological Challenges in Describing Medication Dosing Errors in Children
2005-01-01
recommendations. As an example, amoxicillin is the most commonly used medication in children. This one drug accounts for approximately 10 percent of...and a team intervention on prevention of serious medication errors. JAMA 1998;280(15):1311–6. 13. Bates DW, Teich JM, Lee J, et al. The impact of...barriers include prescribing medication that is not labeled for use in children, discrepancies in published dosing recommendations for many
TID and SEE Response of an Advanced Samsung 4G NAND Flash Memory
NASA Technical Reports Server (NTRS)
Oldham, Timothy R.; Friendlich, M.; Howard, J. W.; Berg, M. D.; Kim, H. S.; Irwin, T. L.; LaBel, K. A.
2007-01-01
Initial total ionizing dose (TID) and single-event heavy-ion test results are presented for an unhardened commercial flash memory fabricated with 63-nm technology. The results show that the parts survive to a TID of nearly 200 krad (SiO2), with a tractable soft error rate of about 10(exp -12) errors/bit-day for the Adams Ten Percent Worst Case Environment.
NASA Astrophysics Data System (ADS)
Lawrence, Kurt C.; Park, Bosoon; Windham, William R.; Mao, Chengye; Poole, Gavin H.
2003-03-01
A method to calibrate a pushbroom hyperspectral imaging system for "near-field" applications in agricultural and food safety has been demonstrated. The method consists of a modified geometric control point correction applied to a focal plane array (FPA) to remove smile and keystone distortion from the system. Once the FPA correction was applied, single-wavelength and distance calibrations were used to describe all points on the FPA. Finally, a percent reflectance calibration, applied on a pixel-by-pixel basis, was used to obtain accurate measurements from the hyperspectral imaging system. The method was demonstrated with a stationary prism-grating-prism, pushbroom hyperspectral imaging system. For the system described, wavelength and distance calibrations reduced the wavelength errors to less than 0.5 nm and the distance errors to less than 0.01 mm (across the entrance slit width). The pixel-by-pixel percent reflectance calibration, which was performed at all wavelengths with dark-current and 99-percent-reflectance calibration-panel measurements, was verified with measurements on a certified gradient Spectralon panel with values ranging from about 14 percent reflectance to 99 percent reflectance; errors were generally less than 5 percent at the mid-wavelength measurements. Results from the calibration method indicate that the hyperspectral imaging system has a usable range between 420 nm and 840 nm. Outside this range, errors increase significantly.
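The pixel-by-pixel percent reflectance calibration described above is, in general form, a dark-current subtraction followed by normalization to a white-reference panel of known reflectance. The sketch below follows that general form with tiny synthetic frames; the 99-percent panel value comes from the abstract, while the array shapes and counts are assumptions.

```python
import numpy as np

def percent_reflectance(raw, dark, white, panel_reflectance=99.0):
    """Pixel-by-pixel percent reflectance calibration.

    raw, dark, white: arrays of identical shape (spatial x spectral) holding the
    sample frame, dark-current frame, and white-reference-panel frame.
    panel_reflectance: nominal reflectance of the calibration panel, in percent.
    """
    raw, dark, white = (np.asarray(a, float) for a in (raw, dark, white))
    denom = white - dark
    denom[denom == 0] = np.nan          # guard against dead or saturated pixels
    return (raw - dark) / denom * panel_reflectance

# Tiny synthetic frames (2 spatial pixels x 3 wavelengths), for illustration only.
dark  = np.array([[100.0, 102.0, 98.0], [101.0, 99.0, 100.0]])
white = np.array([[4000.0, 4100.0, 3900.0], [4050.0, 3950.0, 4000.0]])
raw   = np.array([[2100.0, 2050.0, 1995.0], [2075.0, 2000.0, 2050.0]])

print(percent_reflectance(raw, dark, white))
```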
Quality of nutrient data from streams and ground water sampled during water years 1992-2001
Mueller, David K.; Titus, Cindy J.
2005-01-01
Proper interpretation of water-quality data requires consideration of the effects that bias and variability might have on measured constituent concentrations. In this report, methods are described to estimate the bias due to contamination of samples in the field or laboratory and the variability due to sample collection, processing, shipment, and analysis. Contamination can adversely affect interpretation of measured concentrations in comparison to standards or criteria. Variability can affect interpretation of small differences between individual measurements or mean concentrations. Contamination and variability are determined for nutrient data from quality-control samples (field blanks and replicates) collected as part of the National Water-Quality Assessment (NAWQA) Program during water years 1992-2001. Statistical methods are used to estimate the likelihood of contamination and variability in all samples. Results are presented for five nutrient analytes from stream samples and four nutrient analytes from ground-water samples. Ammonia contamination can add at least 0.04 milligram per liter in up to 5 percent of all samples. This could account for more than 22 percent of measured concentrations at the low range of aquatic-life criteria (0.18 milligram per liter). Orthophosphate contamination, at least 0.019 milligram per liter in up to 5 percent of all samples, could account for more than 38 percent of measured concentrations at the limit to avoid eutrophication (0.05 milligram per liter). Nitrite-plus-nitrate and Kjeldahl nitrogen contamination is less than 0.4 milligram per liter in 99 percent of all samples; thus there is no significant effect on measured concentrations of environmental significance. Sampling variability has little or no effect on reported concentrations of ammonia, nitrite-plus-nitrate, orthophosphate, or total phosphorus sampled after 1998. The potential errors due to sampling variability are greater for the Kjeldahl nitrogen analytes and for total phosphorus sampled before 1999. The uncertainty in a mean of 10 concentrations caused by sampling variability is within a small range (1 to 7 percent) for all nutrients. These results can be applied to interpretation of environmental data collected during water years 1992-2001 in 52 NAWQA study units.
NASA Technical Reports Server (NTRS)
Kuehn, C. E.; Himwich, W. E.; Clark, T. A.; Ma, C.
1991-01-01
The internal consistency of the baseline-length measurements derived from analysis of several independent VLBI experiments is an estimate of the measurement precision. The paper investigates whether the inclusion of water vapor radiometer (WVR) data as an absolute calibration of the propagation delay due to water vapor improves the precision of VLBI baseline-length measurements. The paper analyzes 28 International Radio Interferometric Surveying runs between June 1988 and January 1989; WVR measurements were made during each session. The addition of WVR data decreased the scatter of the length measurements of the baselines by 5-10 percent. The observed reduction in the scatter of the baseline lengths is less than what is expected from the behavior of the formal errors, which suggest that the baseline-length measurement precision should improve 10-20 percent if WVR data are included in the analysis. The discrepancy between the formal errors and the baseline-length results can be explained as the consequence of systematic errors in the dry-mapping function parameters, instrumental biases in the WVR and the barometer, or both.
Time series forecasting of future claims amount of SOCSO's employment injury scheme (EIS)
NASA Astrophysics Data System (ADS)
Zulkifli, Faiz; Ismail, Isma Liana; Chek, Mohd Zaki Awang; Jamal, Nur Faezah; Ridzwan, Ahmad Nur Azam Ahmad; Jelas, Imran Md; Noor, Syamsul Ikram Mohd; Ahmad, Abu Bakar
2012-09-01
The Employment Injury Scheme (EIS) provides protection to employees who are injured in accidents while working, while commuting between home and the workplace, while taking a break during an authorized recess, or while travelling on work-related business. The main purpose of this study is to forecast the claims amount of the EIS for the years 2011 through 2015 using appropriate models. The models were tested on actual EIS data from 1972 through 2010. Three forecasting models are compared: the Naïve with Trend Model, the Average Percent Change Model, and the Double Exponential Smoothing Model. The best model is selected based on the smallest values of two error measures, the Mean Squared Error (MSE) and the Mean Absolute Percentage Error (MAPE). From the results, the model that best fits the EIS data is the Average Percent Change Model. Furthermore, the results show that the claims amount of the EIS for 2011 through 2015 continues to trend upward from 2010.
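One simple reading of the Average Percent Change Model is to project the series forward at its mean historical percent change and then score the forecasts with MSE and MAPE, the two error measures named above. The sketch below follows that reading; the study's exact model formulations may differ, and the claims series shown is made up.

```python
import numpy as np

def mse(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean((actual - forecast) ** 2)

def mape(actual, forecast):
    actual, forecast = np.asarray(actual, float), np.asarray(forecast, float)
    return np.mean(np.abs((actual - forecast) / actual)) * 100.0

def average_percent_change_forecast(history, steps):
    """Project the series forward at its mean historical percent change."""
    history = np.asarray(history, float)
    mean_change = np.mean(history[1:] / history[:-1] - 1.0)
    forecasts, last = [], history[-1]
    for _ in range(steps):
        last = last * (1.0 + mean_change)
        forecasts.append(last)
    return np.array(forecasts)

# Made-up annual claims amounts; the last three years are held out for testing.
claims = np.array([110.0, 118.0, 131.0, 140.0, 155.0, 168.0, 181.0, 197.0])
train, test = claims[:-3], claims[-3:]

pred = average_percent_change_forecast(train, steps=len(test))
print(f"MSE = {mse(test, pred):.1f}, MAPE = {mape(test, pred):.1f} percent")
```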
ERIC Educational Resources Information Center
Wang, Tianyou; And Others
M. J. Kolen, B. A. Hanson, and R. L. Brennan (1992) presented a procedure for assessing the conditional standard error of measurement (CSEM) of scale scores using a strong true-score model. They also investigated the ways of using nonlinear transformation from number-correct raw score to scale score to equalize the conditional standard error along…
On the applicability of the standard kinetic theory to the study of nanoplasmas
DOE Office of Scientific and Technical Information (OSTI.GOV)
D'Angola, A., E-mail: antonio.dangola@unibas.it; Boella, E.; GoLP/Instituto de Plasmas e Fusão Nuclear-Laboratório Associado, Instituto Superior Técnico, Avenida Rovisco Pais 1-1049-001 Lisboa
Kinetic theory applies to systems with a large number of particles, while nanoplasmas generated by the interaction of ultra-short laser pulses with atomic clusters are systems composed of a relatively small number (10^2 to 10^4) of electrons and ions. In this paper, the applicability of the kinetic theory for studying nanoplasmas is discussed. In particular, two typical phenomena are investigated: the collisionless expansion of electrons in a spherical nanoplasma with immobile ions and the formation of shock shells during Coulomb explosions. The analysis, which is carried out by comparing ensemble averages obtained by solving the exact equations of motion with reference solutions of the Vlasov-Poisson model, shows that for the dynamics of the electrons the error of the usually employed models is of the order of a few percent (but the standard deviation in a single experiment can be of the order of 10 percent). Instead, special care must be taken in the study of shock formation, as the discrete structure of the electric charge can destroy or strongly modify the phenomenon.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946...) For mixed types. 20 percent for filberts which are of a different type. (2) For defects. 10 percent...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... percent, including not more than one-fifth of this amount, or 1 percent, for bitter almonds mixed with...
7 CFR 51.2111 - U.S. No. 1 Pieces.
Code of Federal Regulations, 2011 CFR
2011-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... bitter almonds mixed with sweet almonds. 1 percent; (2) For foreign material. Two-tenths of 1 percent (0...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946...) For mixed types. 20 percent for filberts which are of a different type. (2) For defects. 10 percent...
7 CFR 51.2111 - U.S. No. 1 Pieces.
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... bitter almonds mixed with sweet almonds. 1 percent; (2) For foreign material. Two-tenths of 1 percent (0...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing Practices), DEPARTMENT OF AGRICULTURE REGULATIONS AND STANDARDS UNDER THE AGRICULTURAL MARKETING ACT OF 1946... percent, including not more than one-fifth of this amount, or 1 percent, for bitter almonds mixed with...
Comparing Measurement Error between Two Different Methods of Measurement of Various Magnitudes
ERIC Educational Resources Information Center
Zavorsky, Gerald S.
2010-01-01
Measurement error is a common problem in several fields of research such as medicine, physiology, and exercise science. The standard deviation of repeated measurements on the same person is the measurement error. One way of presenting measurement error is called the repeatability, which is 2.77 multiplied by the within subject standard deviation.…
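As a worked example of the definitions above, the sketch below computes the within-subject standard deviation from duplicate measurements (for pairs, the within-subject variance equals half the mean squared difference between the two trials) and multiplies it by 2.77 to obtain the repeatability; the measurements are made up.

```python
import numpy as np

def within_subject_sd(trial1, trial2):
    """Within-subject standard deviation from duplicate measurements on the same subjects.

    For pairs, the within-subject variance equals half the mean squared
    difference between the two trials.
    """
    d = np.asarray(trial1, float) - np.asarray(trial2, float)
    return np.sqrt(np.mean(d ** 2) / 2.0)

# Made-up duplicate physiological measurements on eight subjects.
trial1 = [25.1, 30.4, 22.8, 27.9, 31.2, 24.5, 29.0, 26.3]
trial2 = [24.6, 31.0, 23.5, 27.2, 30.5, 25.1, 28.4, 26.9]

sw = within_subject_sd(trial1, trial2)
repeatability = 2.77 * sw   # expected upper bound for the difference between two repeats
print(f"within-subject SD = {sw:.2f}, repeatability = {repeatability:.2f}")
```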
NASA Astrophysics Data System (ADS)
Soret, Marine; Alaoui, Jawad; Koulibaly, Pierre M.; Darcourt, Jacques; Buvat, Irène
2007-02-01
Objectives: Partial volume effect (PVE) is a major source of bias in brain SPECT imaging of the dopamine transporter. Various PVE corrections (PVC) making use of anatomical data have been developed and yield encouraging results. However, their accuracy in clinical data is difficult to demonstrate because the gold standard (GS) is usually unknown. The objective of this study was to assess the accuracy of PVC. Method: Twenty-three patients underwent MRI and 123I-FP-CIT SPECT. The binding potential (BP) values were measured in the striata segmented on the MR images after coregistration to the SPECT images. These values were calculated without and with an original PVC. In addition, for each patient, a Monte Carlo simulation of the SPECT scan was performed. For these simulations, where the true simulated BP values were known, percent biases in the BP estimates were calculated. For the real data, an evaluation method that simultaneously estimates the GS and a quadratic relationship between the observed and the GS values was used. It yields a surrogate mean square error (sMSE) between the estimated values and the estimated GS values. Results: The average percent difference between BP measured for real and for simulated patients was 0.7±9.7% without PVC and -8.5±14.5% with PVC, suggesting that the simulated data reproduced the real data well enough. For the simulated patients, BP was underestimated by 66.6±9.3% on average without PVC and overestimated by 11.3±9.5% with PVC, demonstrating the greater accuracy of the BP estimates with PVC. For the simulated data, the sMSE was 27.3 without PVC and 0.90 with PVC, confirming that the sMSE index properly captured the greater accuracy of the BP estimates with PVC. For the real patient data, the sMSE was 50.8 without PVC and 3.5 with PVC. These results were consistent with those obtained on the simulated data, suggesting that for clinical data, and despite probable segmentation and registration errors, BP was more accurately estimated with PVC than without. Conclusion: PVC greatly reduced the error in BP estimates in clinical imaging of the dopamine transporter.
Death Notification: Someone Needs To Call the Family.
Ombres, Rachel; Montemorano, Lauren; Becker, Daniel
2017-06-01
The death notification process can affect family grief and bereavement. It can also affect the well-being of involved physicians. There is no standardized process for making death notification phone calls. We assumed that residents are likely to be unprepared before and troubled after. We investigated current death notification practices to develop an evidence-based template for standardizing this process. We used results of a literature review and open-ended interviews with faculty, residents, and widows to develop a survey regarding resident training and experience in death notification by phone. We invited all internal medicine (IM) residents at our institution to complete the survey. Sixty-seven of 93 IM residents (72%) responded to the survey. Eighty-seven percent of responders reported involvement in a death that required notification by phone. Eighty percent of residents felt inadequately trained for this task. Over 25% reported that calls went poorly. Attendings were involved in 17% of cases. Primary care physicians were not involved. Nurses and chaplains were not involved. Respondents never delayed notification of death until family arrived at the hospital. There was no consistent approach to rehearsing or making the call, advising families about safe travel to the hospital, greeting families upon arrival, or following up with expressions of condolence. Poor communication skills during death notification may contribute to complicated grief for surviving relatives and stress among physicians. This study is the first to describe current practices of death notification by IM residents. More training is needed and could be combined with training in disclosure of medical error.
High resolution isotopic analysis of U-bearing particles via fusion of SIMS and EDS images
DOE Office of Scientific and Technical Information (OSTI.GOV)
Tarolli, Jay G.; Naes, Benjamin E.; Garcia, Benjamin J.
Image fusion of secondary ion mass spectrometry (SIMS) images and X-ray elemental maps from energy-dispersive spectroscopy (EDS) was performed to facilitate the isolation and re-analysis of isotopically unique U-bearing particles where the highest precision SIMS measurements are required. Image registration, image fusion, and particle micromanipulation were performed on a subset of SIMS images obtained from a large-area pre-screen of a particle distribution from a sample containing several certified reference materials (CRM) U129A, U015, U150, U500, and U850, as well as a standard reference material (SRM) 8704 (Buffalo River Sediment), to simulate particles collected on swipes during routine inspections of declared uranium enrichment facilities by the International Atomic Energy Agency (IAEA). In total, fourteen particles, ranging in size from 5 to 15 µm, were isolated and re-analyzed by SIMS in multi-collector mode, identifying nine particles of CRM U129A, one of U150, one of U500, and three of U850. These identifications were within a few percent of the National Institute of Standards and Technology (NIST) certified atom percent values for 234U, 235U, and 238U for the corresponding CRMs. This work represents the first use of image fusion to enhance the accuracy and precision of isotope ratio measurements for isotopically unique U-bearing particles for nuclear safeguards applications. Implementation of image fusion is essential for the identification of particles of interest that fall below the spatial resolution of the SIMS images.
Death Notification: Someone Needs To Call the Family
Ombres, Rachel; Montemorano, Lauren
2017-01-01
Abstract Background: The death notification process can affect family grief and bereavement. It can also affect the well-being of involved physicians. There is no standardized process for making death notification phone calls. We assumed that residents are likely to be unprepared before and troubled after. Objective: We investigated current death notification practices to develop an evidence-based template for standardizing this process. Design: We used results of a literature review and open-ended interviews with faculty, residents, and widows to develop a survey regarding resident training and experience in death notification by phone. Setting/Subjects: We invited all internal medicine (IM) residents at our institution to complete the survey. Measurements: Sixty-seven of 93 IM residents (72%) responded to the survey. Eighty-seven percent of responders reported involvement in a death that required notification by phone. Results: Eighty percent of residents felt inadequately trained for this task. Over 25% reported that calls went poorly. Attendings were involved in 17% of cases. Primary care physicians were not involved. Nurses and chaplains were not involved. Respondents never delayed notification of death until family arrived at the hospital. There was no consistent approach to rehearsing or making the call, advising families about safe travel to the hospital, greeting families upon arrival, or following up with expressions of condolence. Conclusions: Poor communication skills during death notification may contribute to complicated grief for surviving relatives and stress among physicians. This study is the first to describe current practices of death notification by IM residents. More training is needed and could be combined with training in disclosure of medical error. PMID:28099046
DOE Office of Scientific and Technical Information (OSTI.GOV)
Pickles, W.L.; McClure, J.W.; Howell, R.H.
1978-05-01
A sophisticated nonlinear multiparameter fitting program was used to produce a best-fit calibration curve for the response of an x-ray fluorescence analyzer to uranium nitrate, freeze-dried, 0.2-percent-accurate, gravimetric standards. The program is based on the unconstrained minimization subroutine VA02A. The program treats the mass values of the gravimetric standards as parameters to be fit along with the normal calibration-curve parameters. The fitting procedure weights the system errors and the mass errors in a consistent way. The resulting best-fit calibration-curve parameters reflect the fact that the masses of the standard samples are measured quantities with a known error. Error estimates for the calibration-curve parameters can be obtained from the curvature of the "chi-squared matrix" or from error-relaxation techniques. It was shown that nondispersive XRFA of 0.1 to 1 mg of freeze-dried UNO3 can have an accuracy of 0.2 percent in 1,000 s.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
This document provides information on ethanol fuel properties, standards, codes, best practices, and equipment information for those who blend, distribute, store, sell, or use E15 (gasoline blended with 10.5 percent - 15 percent ethanol), E85 (marketing term for ethanol-gasoline blends containing 51 percent - 83 percent ethanol, depending on geography and season), and other ethanol blends.
Equations for estimating Clark Unit-hydrograph parameters for small rural watersheds in Illinois
Straub, Timothy D.; Melching, Charles S.; Kocher, Kyle E.
2000-01-01
Simulation of the measured discharge hydrographs for the verification storms utilizing TC and R obtained from the estimation equations yielded good results. The error in peak discharge for 21 of the 29 verification storms was less than 25 percent, and the error in time-to-peak discharge for 18 of the 29 verification storms also was less than 25 percent. Therefore, applying the estimation equations to determine TC and R for design-storm simulation may result in reliable design hydrographs, as long as the physical characteristics of the watersheds under consideration are within the range of those characteristics for the watersheds in this study [area: 0.02-2.3 mi2, main-channel length: 0.17-3.4 miles, main-channel slope: 10.5-229 feet per mile, and insignificant percentage of impervious cover].
Radial basis function network learns ceramic processing and predicts related strength and density
NASA Technical Reports Server (NTRS)
Cios, Krzysztof J.; Baaklini, George Y.; Vary, Alex; Tjia, Robert E.
1993-01-01
Radial basis function (RBF) neural networks were trained using the data from 273 Si3N4 modulus of rupture (MOR) bars which were tested at room temperature and 135 MOR bars which were tested at 1370 C. Milling time, sintering time, and sintering gas pressure were the processing parameters used as the input features. Flexural strength and density were the outputs by which the RBF networks were assessed. The 'nodes-at-data-points' method was used to set the hidden layer centers and output layer training used the gradient descent method. The RBF network predicted strength with an average error of less than 12 percent and density with an average error of less than 2 percent. Further, the RBF network demonstrated a potential for optimizing and accelerating the development and processing of ceramic materials.
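A minimal sketch of the kind of RBF network described above: Gaussian basis functions with centers placed at the training points (nodes at data points) and a linear output layer trained by gradient descent. The basis-function width, learning rate, and data are assumptions, not values from the study.

```python
import numpy as np

def rbf_design_matrix(x, centers, width):
    """Gaussian radial basis activations for inputs x against the given centers."""
    d2 = ((x[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

def train_output_layer(phi, y, lr=0.01, epochs=2000):
    """Train the linear output weights by batch gradient descent on squared error."""
    w = np.zeros(phi.shape[1])
    for _ in range(epochs):
        error = phi @ w - y
        w -= lr * phi.T @ error / len(y)
    return w

rng = np.random.default_rng(0)
# Made-up processing inputs (3 features, scaled 0-1) and a made-up strength response.
x = rng.random((30, 3))
y = 400.0 + 150.0 * x[:, 0] - 80.0 * x[:, 1] + 40.0 * x[:, 2] + rng.normal(0, 5, 30)

centers = x.copy()                         # nodes-at-data-points: one center per sample
phi = rbf_design_matrix(x, centers, width=0.5)
w = train_output_layer(phi, y)

pred = phi @ w
avg_pct_error = np.mean(np.abs(pred - y) / y) * 100.0
print(f"average absolute error on training data: {avg_pct_error:.1f} percent")
```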
Skinner, Kenneth D.
2011-01-01
High-quality elevation data in riverine environments are important for fisheries management applications and the accuracy of such data needs to be determined for its proper application. The Experimental Advanced Airborne Research LiDAR (Light Detection and Ranging)-or EAARL-system was used to obtain topographic and bathymetric data along the Deadwood and South Fork Boise Rivers in west-central Idaho. The EAARL data were post-processed into bare earth and bathymetric raster and point datasets. Concurrently with the EAARL surveys, real-time kinematic global positioning system surveys were made in three areas along each of the rivers to assess the accuracy of the EAARL elevation data in different hydrogeomorphic settings. The accuracies of the EAARL-derived raster elevation values, determined in open, flat terrain, to provide an optimal vertical comparison surface, had root mean square errors ranging from 0.134 to 0.347 m. Accuracies in the elevation values for the stream hydrogeomorphic settings had root mean square errors ranging from 0.251 to 0.782 m. The greater root mean square errors for the latter data are the result of complex hydrogeomorphic environments within the streams, such as submerged aquatic macrophytes and air bubble entrainment; and those along the banks, such as boulders, woody debris, and steep slopes. These complex environments reduce the accuracy of EAARL bathymetric and topographic measurements. Steep banks emphasize the horizontal location discrepancies between the EAARL and ground-survey data and may not be good representations of vertical accuracy. The EAARL point to ground-survey comparisons produced results with slightly higher but similar root mean square errors than those for the EAARL raster to ground-survey comparisons, emphasizing the minimized horizontal offset by using interpolated values from the raster dataset at the exact location of the ground-survey point as opposed to an actual EAARL point within a 1-meter distance. The average error for the wetted stream channel surface areas was -0.5 percent, while the average error for the wetted stream channel volume was -8.3 percent. The volume of the wetted river channel was underestimated by an average of 31 percent in half of the survey areas, and overestimated by an average of 14 percent in the remainder of the survey areas. The EAARL system is an efficient way to obtain topographic and bathymetric data in large areas of remote terrain. The elevation accuracy of the EAARL system varies throughout the area depending upon the hydrogeomorphic setting, preventing the use of a single accuracy value to describe the EAARL system. The elevation accuracy variations should be kept in mind when using the data, such as for hydraulic modeling or aquatic habitat assessments.
NASA Technical Reports Server (NTRS)
Spera, David A.
2008-01-01
Equations are developed with which to calculate lift and drag coefficients along the spans of torsionally-stiff rotating airfoils of the type used in wind turbine rotors and wind tunnel fans, at angles of attack in both the unstalled and stalled aerodynamic regimes. Explicit adjustments are made for the effects of aspect ratio (length to chord width) and airfoil thickness ratio. Calculated lift and drag parameters are compared to measured parameters for 55 airfoil data sets including 585 test points. Mean deviation was found to be -0.4 percent and standard deviation was 4.8 percent. When the proposed equations were applied to the calculation of power from a stall-controlled wind turbine tested in a NASA wind tunnel, mean deviation from 54 data points was -1.3 percent and standard deviation was 4.0 percent. Pressure-rise calculations for a large wind tunnel fan deviated by 2.7 percent (mean) and 4.4 percent (standard). The assumption that a single set of lift and drag coefficient equations can represent the stalled aerodynamic behavior of a wide variety of airfoils was found to be satisfactory.
NASA Technical Reports Server (NTRS)
Nese, Jon M.; Dutton, John A.
1993-01-01
The predictability of the weather and climatic states of a low-order moist general circulation model is quantified using a dynamic systems approach, and the effect of incorporating a simple oceanic circulation on predictability is evaluated. The predictability and the structure of the model attractors are compared using Liapunov exponents, local divergence rates, and the correlation and Liapunov dimensions. It was found that the activation of oceanic circulation increases the average error doubling time of the atmosphere and the coupled ocean-atmosphere system by 10 percent and decreases the variance of the largest local divergence rate by 20 percent. When an oceanic circulation develops, the average predictability of annually averaged states is improved by 25 percent and the variance of the largest local divergence rate decreases by 25 percent.
Conrads, Paul; Roehl, Edwin A.
2007-01-01
The Everglades Depth Estimation Network (EDEN) is an integrated network of real-time water-level gaging stations, ground-elevation models, and water-surface models designed to provide scientists, engineers, and water-resource managers with current (2000-present) water-depth information for the entire freshwater portion of the greater Everglades. The U.S. Geological Survey Greater Everglades Priority Ecosystem Science provides support for EDEN and the goal of providing quality assured monitoring data for the U.S. Army Corps of Engineers Comprehensive Everglades Restoration Plan. To increase the accuracy of the water-surface models, 25 real-time water-level gaging stations were added to the network of 253 established water-level gaging stations. To incorporate the data from the newly added stations to the 7-year EDEN database in the greater Everglades, the short-term water-level records (generally less than 1 year) needed to be simulated back in time (hindcasted) to be concurrent with data from the established gaging stations in the database. A three-step modeling approach using artificial neural network models was used to estimate the water levels at the new stations. The artificial neural network models used static variables that represent the gaging station location and percent vegetation in addition to dynamic variables that represent water-level data from the established EDEN gaging stations. The final step of the modeling approach was to simulate the computed error of the initial estimate to increase the accuracy of the final water-level estimate. The three-step modeling approach for estimating water levels at the new EDEN gaging stations produced satisfactory results. The coefficients of determination (R2) for 21 of the 25 estimates were greater than 0.95, and all of the estimates (25 of 25) were greater than 0.82. The model estimates showed good agreement with the measured data. For some new EDEN stations with limited measured data, the record extension (hindcasts) included periods beyond the range of the data used to train the artificial neural network models. The comparison of the hindcasts with long-term water-level data proximal to the new EDEN gaging stations indicated that the water-level estimates were reasonable. The percent model error (root mean square error divided by the range of the measured data) was less than 6 percent, and for the majority of stations (20 of 25), the percent model error was less than 1 percent.
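The percent model error reported above is defined as the root mean square error divided by the range of the measured data. The sketch below computes that statistic for a hypothetical water-level record.

```python
import numpy as np

def percent_model_error(measured, estimated):
    """Root mean square error divided by the range of the measured data, in percent."""
    measured, estimated = np.asarray(measured, float), np.asarray(estimated, float)
    rmse = np.sqrt(np.mean((measured - estimated) ** 2))
    return 100.0 * rmse / (measured.max() - measured.min())

# Hypothetical daily water levels (feet) at a new gaging station and model estimates.
measured  = [9.8, 10.1, 10.4, 10.9, 11.3, 11.0, 10.6, 10.2, 9.9, 9.7]
estimated = [9.9, 10.0, 10.5, 10.8, 11.2, 11.1, 10.5, 10.3, 9.8, 9.8]

print(f"percent model error: {percent_model_error(measured, estimated):.1f} percent")
```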
Sando, Roy; Chase, Katherine J.
2017-03-23
A common statistical procedure for estimating streamflow statistics at ungaged locations is to develop a relational model between streamflow and drainage basin characteristics at gaged locations using least squares regression analysis; however, least squares regression methods are parametric and make constraining assumptions about the data distribution. The random forest regression method provides an alternative nonparametric method for estimating streamflow characteristics at ungaged sites and requires that the data meet fewer statistical conditions than least squares regression methods. Random forest regression analysis was used to develop predictive models for 89 streamflow characteristics using Precipitation-Runoff Modeling System simulated streamflow data and drainage basin characteristics at 179 sites in central and eastern Montana. The predictive models were developed from streamflow data simulated for current (baseline, water years 1982–99) conditions and three future periods (water years 2021–38, 2046–63, and 2071–88) under three different climate-change scenarios. These predictive models were then used to predict streamflow characteristics for baseline conditions and three future periods at 1,707 fish sampling sites in central and eastern Montana. The average root mean square error for all predictive models was about 50 percent. When streamflow predictions at 23 fish sampling sites were compared to nearby locations with simulated data, the mean relative percent difference was about 43 percent. When predictions were compared to streamflow data recorded at 21 U.S. Geological Survey streamflow-gaging stations outside of the calibration basins, the average mean absolute percent error was about 73 percent.
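As an illustration of the general approach described above (not the study's data or configuration), the sketch below fits a random forest to synthetic basin characteristics to predict a single streamflow characteristic and reports a hold-out root mean square error, using the scikit-learn implementation.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Synthetic basin characteristics: drainage area (mi2), mean annual precipitation (in),
# and mean basin elevation (ft). These stand in for the study's predictor variables.
n_sites = 179
X = np.column_stack([
    rng.uniform(5, 2000, n_sites),
    rng.uniform(10, 25, n_sites),
    rng.uniform(2000, 9000, n_sites),
])
# Synthetic streamflow characteristic loosely tied to the predictors, plus noise.
y = 0.05 * X[:, 0] * (X[:, 1] / 15.0) + rng.normal(0, 5, n_sites)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)

model = RandomForestRegressor(n_estimators=500, random_state=0)
model.fit(X_train, y_train)

pred = model.predict(X_test)
rmse = np.sqrt(np.mean((pred - y_test) ** 2))
print(f"hold-out RMSE: {rmse:.1f}")
print("feature importances:", np.round(model.feature_importances_, 2))
```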
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.2634 Rule 18. Any lot of tobacco containing 20 percent or more of green leaves or any lot which is not crude but contains 20 percent or more of green and crude combined shall be...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.3121 Rule 18. Any lot of tobacco containing 20 percent or more of green leaves, or any lot which is not crude but contains 20 percent or more of green and crude combined, shall...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.1125 Rule 19. Any lot of tobacco containing 20 percent or more of green tobacco, or any lot which is not crude but contains 20 percent or more of green and crude combined shall...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.2409 Rule 18. Any lot of tobacco containing 20 percent or more of green leaves or any lot which is not crude but contains 20 percent or more of green and crude combined shall be...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.3620 Rule 19. Any lot of tobacco containing 20 percent or more of green leaves or any lot which is not crude but contains 20 percent or more of green and crude combined shall be...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.3620 Rule 19. Any lot of tobacco containing 20 percent or more of green leaves or any lot which is not crude but contains 20 percent or more of green and crude combined shall be...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.2409 Rule 18. Any lot of tobacco containing 20 percent or more of green leaves or any lot which is not crude but contains 20 percent or more of green and crude combined shall be...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.1125 Rule 19. Any lot of tobacco containing 20 percent or more of green tobacco, or any lot which is not crude but contains 20 percent or more of green and crude combined shall...
Code of Federal Regulations, 2011 CFR
2011-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.2634 Rule 18. Any lot of tobacco containing 20 percent or more of green leaves or any lot which is not crude but contains 20 percent or more of green and crude combined shall be...
Code of Federal Regulations, 2010 CFR
2010-01-01
... Regulations of the Department of Agriculture AGRICULTURAL MARKETING SERVICE (Standards, Inspections, Marketing... INSPECTION Standards Rules § 29.3121 Rule 18. Any lot of tobacco containing 20 percent or more of green leaves, or any lot which is not crude but contains 20 percent or more of green and crude combined, shall...
40 CFR Table 6 to Subpart Vvvv of... - Default Organic HAP Contents of Petroleum Solvent Groups
Code of Federal Regulations, 2011 CFR
2011-07-01
... PROTECTION AGENCY (CONTINUED) AIR PROGRAMS (CONTINUED) NATIONAL EMISSION STANDARDS FOR HAZARDOUS AIR POLLUTANTS FOR SOURCE CATEGORIES National Emission Standards for Hazardous Air Pollutants for Boat... content, percent by mass Typical organic HAP, percent by mass Aliphatic (Mineral Spirits 135, Mineral...
Estimating standard errors in feature network models.
Frank, Laurence E; Heiser, Willem J
2007-05-01
Feature network models are graphical structures that represent proximity data in a discrete space while using the same formalism that is the basis of least squares methods employed in multidimensional scaling. Existing methods to derive a network model from empirical data only give the best-fitting network and yield no standard errors for the parameter estimates. The additivity properties of networks make it possible to consider the model as a univariate (multiple) linear regression problem with positivity restrictions on the parameters. In the present study, both theoretical and empirical standard errors are obtained for the constrained regression parameters of a network model with known features. The performance of both types of standard error is evaluated using Monte Carlo techniques.
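The abstract above casts the feature network model as a linear regression with positivity restrictions on the parameters and evaluates standard errors with Monte Carlo techniques. As a loosely related sketch (not the authors' procedure), the code below estimates nonnegatively constrained coefficients with scipy's nnls solver and obtains empirical standard errors by bootstrap resampling of synthetic data.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)

# Synthetic design matrix (feature indicators) and nonnegative true parameters.
n_obs, n_feat = 60, 4
X = rng.integers(0, 2, size=(n_obs, n_feat)).astype(float)
true_w = np.array([1.5, 0.0, 2.0, 0.7])
y = X @ true_w + rng.normal(0, 0.3, n_obs)

w_hat, _ = nnls(X, y)                     # positivity-constrained estimates

# Empirical standard errors via bootstrap resampling of the observations.
n_boot = 1000
boot = np.empty((n_boot, n_feat))
for b in range(n_boot):
    idx = rng.integers(0, n_obs, n_obs)
    boot[b], _ = nnls(X[idx], y[idx])
se = boot.std(axis=0, ddof=1)

print("estimates:", np.round(w_hat, 2))
print("bootstrap standard errors:", np.round(se, 3))
```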
1983-03-01
Decision Tree; PACKAGE unitrep Action/Area Selection Flow Chart; PACKAGE unitrep Control Flow Chart ... the originator would manually draft simple, readable, formatted messages using predefined forms and decision logic trees. This alternative was ... Study Analysis. Data content errors (percent of errors): Character Type, 2.1; Calculations/Associations, 14.3; Message Identification, 4.?; Value Mismatch, 22.E
Eash, David A.; Barnes, Kimberlee K.; O'Shea, Padraic S.; Gelder, Brian K.
2018-02-14
Basin-characteristic measurements related to stream length, stream slope, stream density, and stream order have been identified as significant variables for estimation of flood, flow-duration, and low-flow discharges in Iowa. The placement of channel initiation points, however, has always been a matter of individual interpretation, leading to differences in stream definitions between analysts. This study investigated five different methods to define stream initiation using 3-meter light detection and ranging (lidar) digital elevation model (DEM) data for 17 streamgages with drainage areas less than 50 square miles within the Des Moines Lobe landform region in north-central Iowa. Each DEM was hydrologically enforced, and the five stream initiation methods were used to define channel initiation points and the downstream flow paths. The five different methods to define stream initiation were tested side-by-side for three watershed delineations: (1) the total drainage-area delineation, (2) an effective drainage-area delineation of basins based on a 2-percent annual exceedance probability (AEP) 12-hour rainfall, and (3) an effective drainage-area delineation based on a 20-percent AEP 12-hour rainfall. Generalized least squares regression analysis was used to develop a set of equations for sites in the Des Moines Lobe landform region for estimating discharges for ungaged stream sites with 50-, 20-, 10-, 4-, 2-, 1-, 0.5-, and 0.2-percent AEPs. A total of 17 streamgages were included in the development of the regression equations. In addition, geographic information system software was used to measure 58 selected basin characteristics for each streamgage. Results of the regression analyses of the 15 lidar datasets indicate that the datasets that produce regional regression equations (RREs) with the best overall predictive accuracy are the National Hydrographic Dataset, Iowa Department of Natural Resources, and profile curvature of 0.5 stream initiation methods combined with the 20-percent AEP 12-hour rainfall watershed delineation method. These RREs have a mean average standard error of prediction (SEP) for 4-, 2-, and 1-percent AEP discharges of 53.9 percent and a mean SEP for all eight AEPs of 55.5 percent. Compared to the RREs developed in this study using the basin characteristics from the U.S. Geological Survey StreamStats application, the lidar basin characteristics provide better overall predictive accuracy.