Fabietti, Pier Giorgio; Canonico, Valentina; Orsini-Federici, Marco; Sarti, Eugenio; Massi-Benedetti, Massimo
2007-08-01
The development of an artificial pancreas requires an accurate representation of diabetes pathophysiology to create effective and safe control systems for automatic insulin infusion regulation. The aim of the present study is the assessment of a previously developed mathematical model of insulin and glucose metabolism in type 1 diabetes and the evaluation of its effectiveness for the development and testing of control algorithms. Building on the existing "minimal model," a new mathematical model composed of glucose and insulin submodels was developed. The glucose submodel represents peripheral uptake, hepatic uptake and release, and renal clearance. The insulin submodel describes the kinetics of exogenous insulin injected either subcutaneously or intravenously. The estimation of insulin sensitivity allows the model to personalize its parameters to each subject. Data sets from two different clinical trials were used for model validation through simulation studies: in the first set insulin was injected subcutaneously, and in the second set intravenously. The root mean square error between simulated and real blood glucose profiles (G(rms)) and Clarke error grid analysis were used to evaluate the system's efficacy. Results demonstrated the model's capability to identify individual characteristics even under different experimental conditions, reflected by effective simulation (as indicated by G(rms)) and clinical acceptability (by Clarke error grid analysis) in both clinical data series. Simulation results confirmed the capacity of the model to faithfully represent the glucose-insulin relationship in type 1 diabetes under different circumstances.
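The G(rms) figure of merit used above is a plain root-mean-square error between the simulated and measured glucose profiles. A minimal sketch (the sample profiles below are illustrative, not trial data):

```python
import math

def g_rms(simulated, measured):
    """Root mean square error between simulated and reference glucose (mg/dL)."""
    if len(simulated) != len(measured):
        raise ValueError("profiles must be sampled at the same time points")
    return math.sqrt(sum((s - m) ** 2 for s, m in zip(simulated, measured)) / len(simulated))

# Illustrative profiles (mg/dL), not data from the trials above
sim = [110, 145, 160, 130, 100]
ref = [105, 150, 155, 140, 95]
print(round(g_rms(sim, ref), 2))
```

A lower G(rms) means the simulated profile tracks the measured one more closely across the whole recording.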
Christiansen, Mark P; Klaff, Leslie J; Brazg, Ronald; Chang, Anna R; Levy, Carol J; Lam, David; Denham, Douglas S; Atiee, George; Bode, Bruce W; Walters, Steven J; Kelley, Lynne; Bailey, Timothy S
2018-03-01
Persistent use of real-time continuous glucose monitoring (CGM) improves diabetes control in individuals with type 1 diabetes (T1D) and type 2 diabetes (T2D). PRECISE II was a nonrandomized, blinded, prospective, single-arm, multicenter study that evaluated the accuracy and safety of the implantable Eversense CGM system among adult participants with T1D and T2D (NCT02647905). The primary endpoint was the mean absolute relative difference (MARD) between paired Eversense and Yellow Springs Instrument (YSI) reference measurements through 90 days postinsertion for reference glucose values from 40 to 400 mg/dL. Additional endpoints included Clarke Error Grid analysis and sensor longevity. The primary safety endpoint was the incidence of device-related or sensor insertion/removal procedure-related serious adverse events (SAEs) through 90 days postinsertion. Ninety participants received the CGM system. The overall MARD value against reference glucose values was 8.8% (95% confidence interval: 8.1%-9.3%), which was significantly lower than the prespecified 20% performance goal for accuracy (P < 0.0001). Ninety-three percent of CGM values were within 20/20% of reference values over the total glucose range of 40-400 mg/dL. Clarke Error Grid analysis showed 99.3% of samples in the clinically acceptable error zones A (92.8%) and B (6.5%). Ninety-one percent of sensors were functional through day 90. One related SAE (1.1%) occurred during the study for removal of a sensor. The PRECISE II trial demonstrated that the Eversense CGM system provided accurate glucose readings through the intended 90-day sensor life with a favorable safety profile.
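The MARD and "within 20/20%" endpoints reported above are simple summary statistics over paired CGM/reference values. A hedged sketch; the sample pairs are invented, and reading 20/20% as "within 20 mg/dL or 20% of reference, whichever is larger" is an assumption:

```python
def mard(cgm, ref):
    """Mean absolute relative difference (%) of paired CGM/reference values."""
    return 100 * sum(abs(c - r) / r for c, r in zip(cgm, ref)) / len(ref)

def within_20_20(cgm, ref):
    """Share (%) of CGM values within 20 mg/dL or 20% of reference,
    whichever is larger (assumed reading of the 20/20% criterion)."""
    hits = sum(abs(c - r) <= max(20, 0.20 * r) for c, r in zip(cgm, ref))
    return 100 * hits / len(ref)

# Invented pairs (mg/dL), for illustration only
cgm = [72, 110, 250, 180]
ref = [80, 100, 240, 200]
print(round(mard(cgm, ref), 1), within_20_20(cgm, ref))
```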
Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B.; Kirkman, M. Sue; Kovatchev, Boris
2014-01-01
Introduction: Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. Methods: A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. Results: SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
When modeled glucose monitor data with realistic self-monitoring of blood glucose errors (derived from meter-testing experiments) were plotted on the SEG, the resulting risk estimates were more granular and more reflective of a continuously increasing risk scale than when the same data were plotted on the CEG and PEG. Discussion: The SEG is a modern metric for clinical risk assessment of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision in quantifying risk, especially when the risks are low. This tool will be useful in allowing regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. PMID:25562886
Klonoff, David C; Lias, Courtney; Vigersky, Robert; Clarke, William; Parkes, Joan Lee; Sacks, David B; Kirkman, M Sue; Kovatchev, Boris
2014-07-01
Currently used error grids for assessing clinical accuracy of blood glucose monitors are based on out-of-date medical practices. Error grids have not been widely embraced by regulatory agencies for clearance of monitors, but this type of tool could be useful for surveillance of the performance of cleared products. Diabetes Technology Society together with representatives from the Food and Drug Administration, the American Diabetes Association, the Endocrine Society, and the Association for the Advancement of Medical Instrumentation, and representatives of academia, industry, and government, have developed a new error grid, called the surveillance error grid (SEG) as a tool to assess the degree of clinical risk from inaccurate blood glucose (BG) monitors. A total of 206 diabetes clinicians were surveyed about the clinical risk of errors of measured BG levels by a monitor. The impact of such errors on 4 patient scenarios was surveyed. Each monitor/reference data pair was scored and color-coded on a graph per its average risk rating. Using modeled data representative of the accuracy of contemporary meters, the relationships between clinical risk and monitor error were calculated for the Clarke error grid (CEG), Parkes error grid (PEG), and SEG. SEG action boundaries were consistent across scenarios, regardless of whether the patient was type 1 or type 2 or using insulin or not. No significant differences were noted between responses of adult/pediatric or 4 types of clinicians. Although small specific differences in risk boundaries between US and non-US clinicians were noted, the panel felt they did not justify separate grids for these 2 types of clinicians. The data points of the SEG were classified in 15 zones according to their assigned level of risk, which allowed for comparisons with the classic CEG and PEG. 
When modeled glucose monitor data with realistic self-monitoring of blood glucose errors (derived from meter-testing experiments) were plotted on the SEG, the resulting risk estimates were more granular and more reflective of a continuously increasing risk scale than when the same data were plotted on the CEG and PEG. The SEG is a modern metric for clinical risk assessment of BG monitor errors that assigns a unique risk score to each monitor data point when compared to a reference value. The SEG allows the clinical accuracy of a BG monitor to be portrayed in many ways, including as the percentages of data points falling into custom-defined risk zones. For modeled data the SEG, compared with the CEG and PEG, allows greater precision in quantifying risk, especially when the risks are low. This tool will be useful in allowing regulators and manufacturers to monitor and evaluate glucose monitor performance in their surveillance programs. © 2014 Diabetes Technology Society.
Gómez, Ana M; Marín Sánchez, Alejandro; Muñoz, Oscar M; Colón Peña, Christian Alejandro
2015-12-01
Insulin pump therapy associated with continuous glucose monitoring has shown a positive clinical impact on diabetes control and reduction of hypoglycemia episodes. There are descriptions of the performance of this device in other populations, but its precision and accuracy in Colombia and Latin America are unknown, especially in the routine outpatient setting. Data from 33 type 1 and type 2 diabetes patients with sensor-augmented pump therapy with threshold suspend automation, MiniMed Paradigm® Veo™ (Medtronic, Northridge, California), managed at Hospital Universitario San Ignacio (Bogotá, Colombia) and receiving outpatient treatment, were analyzed. Simultaneous data from continuous glucose monitoring and capillary blood glucose were compared, and their precision and accuracy were calculated with different methods, including the Clarke error grid. Analyses included 2,262 continuous glucose monitoring-reference paired glucose values. A mean absolute relative difference of 20.1% was found for all measurements, with a value higher than 23% for glucose levels ≤75 mg/dL. Global compliance with the ISO criteria was 64.9%. It was higher for values >75 mg/dL (68.3%, 1,308 of 1,916 readings) than for those ≤75 mg/dL (49.4%, 171 of 346 readings). Clinical accuracy, as assessed by the Clarke error grid, showed that 91.77% of data were within the A and B zones (75.6% in hypoglycemia). Good numerical accuracy was found for continuous glucose monitoring in normo- and hyperglycemia, with low precision in hypoglycemia. The clinical accuracy of the device was adequate, with no significant safety concerns for patients. Copyright © 2015 SEEN. Published by Elsevier España, S.L.U. All rights reserved.
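The ISO criterion applied above, with its 75 mg/dL split, matches the shape of ISO 15197:2003 (within ±15 mg/dL of the reference below 75 mg/dL, within ±20% at or above). A sketch under that assumption, with made-up device/reference pairs:

```python
def iso_15197_2003_ok(device, ref):
    """ISO 15197:2003-style criterion (assumed): within ±15 mg/dL of the
    reference below 75 mg/dL, within ±20% at or above 75 mg/dL."""
    return abs(device - ref) <= 15 if ref < 75 else abs(device - ref) <= 0.20 * ref

def compliance(pairs):
    """Percent of (device, reference) pairs meeting the criterion."""
    return 100 * sum(iso_15197_2003_ok(d, r) for d, r in pairs) / len(pairs)

# Made-up pairs (mg/dL) for illustration
pairs = [(60, 70), (95, 80), (130, 120), (300, 240)]
print(compliance(pairs))
```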
Data Based Prediction of Blood Glucose Concentrations Using Evolutionary Methods.
Hidalgo, J Ignacio; Colmenar, J Manuel; Kronberger, Gabriel; Winkler, Stephan M; Garnica, Oscar; Lanchares, Juan
2017-08-08
Predicting glucose values on the basis of insulin and food intakes is a difficult task that people with diabetes need to perform daily. It is important to maintain glucose levels at appropriate values to avoid not only short-term but also long-term complications of the illness. Artificial intelligence in general, and machine learning techniques in particular, have already led to promising results in modeling and predicting glucose concentrations. In this work, several machine learning techniques are used for the modeling and prediction of glucose concentrations, using as inputs the values measured by a continuous glucose monitoring system as well as previous and estimated future carbohydrate intakes and insulin injections. In particular, we use the following four techniques: genetic programming, random forests, k-nearest neighbors, and grammatical evolution. We propose two new enhanced modeling algorithms for glucose prediction: (i) a variant of grammatical evolution which uses an optimized grammar, and (ii) a variant of tree-based genetic programming which incorporates a three-compartment model for carbohydrate and insulin dynamics into the symbolic regression models, creating smoothed time series of the original carbohydrate and insulin time series. The predictors were trained and tested using data from ten patients at a public hospital in Spain. We analyze our experimental results using the Clarke error grid metric and find that 90% of the forecasts are correct (i.e., Clarke error categories A and B), but even the best methods still produce 5 to 10% of serious errors (category D) and approximately 0.5% of very serious errors (category E).
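The Clarke error grid categories A-E used to score these forecasts can be computed from a set of published zone boundaries. The sketch below follows a commonly circulated formulation of those boundaries; it is an illustration, not a validated clinical tool:

```python
def clarke_zone(ref, pred):
    """Assign a Clarke error grid zone to a (reference, predicted) glucose
    pair in mg/dL, using a commonly published boundary formulation."""
    if (ref <= 70 and pred <= 70) or abs(pred - ref) <= 0.2 * ref:
        return "A"                      # clinically accurate
    if (ref >= 180 and pred <= 70) or (ref <= 70 and pred >= 180):
        return "E"                      # confuses hypo- and hyperglycemia
    if (70 <= ref <= 290 and pred >= ref + 110) or \
       (130 <= ref <= 180 and pred <= (7 / 5) * ref - 182):
        return "C"                      # overcorrection risk
    if (ref >= 240 and 70 <= pred <= 180) or \
       (ref <= 175 / 3 and 70 <= pred <= 180) or \
       (175 / 3 <= ref <= 70 and pred >= (6 / 5) * ref):
        return "D"                      # failure to detect
    return "B"                          # benign errors

print(clarke_zone(100, 110), clarke_zone(50, 200), clarke_zone(250, 120))
```

Aggregating `clarke_zone` over all forecast/reference pairs gives the category percentages reported in the abstract.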
Thabit, Hood; Leelarathna, Lalantha; Wilinska, Malgorzata E; Elleri, Daniella; Allen, Janet M; Lubina-Solomon, Alexandra; Walkinshaw, Emma; Stadler, Marietta; Choudhary, Pratik; Mader, Julia K; Dellweg, Sibylle; Benesch, Carsten; Pieber, Thomas R; Arnolds, Sabine; Heller, Simon R; Amiel, Stephanie A; Dunger, David; Evans, Mark L; Hovorka, Roman
2015-11-01
Closed-loop (CL) systems modulate insulin delivery based on glucose levels measured by a continuous glucose monitor (CGM). Accuracy of the CGM affects CL performance and safety. We evaluated the accuracy of the FreeStyle Navigator® II CGM (Abbott Diabetes Care, Alameda, CA) during three unsupervised, randomized, open-label, crossover home CL studies. Paired CGM and capillary glucose values (10,597 pairs) were collected from 57 participants with type 1 diabetes (41 adults [mean±SD age, 39±12 years; mean±SD hemoglobin A1c, 7.9±0.8%] recruited at five centers and 16 adolescents [mean±SD age, 15.6±3.6 years; mean±SD hemoglobin A1c, 8.1±0.8%] recruited at two centers). Numerical accuracy was assessed by absolute relative difference (ARD) and International Organization for Standardization (ISO) 15197:2013 15/15% limits, and clinical accuracy was assessed by Clarke error grid analysis. Total duration of sensor use was 2,002 days (48,052 h). Overall sensor accuracy for the capillary glucose range (1.1-27.8 mmol/L) showed mean±SD and median (interquartile range) ARD of 14.2±15.5% and 10.0% (4.5%, 18.4%), respectively. The lowest mean ARD was observed in the hyperglycemic range (9.8±8.8%). Over 95% of pairs were in combined Clarke error grid zones A and B (A, 80.1%; B, 16.2%). Overall, 70.0% of the sensor readings satisfied ISO criteria. Mean ARD was consistent (12.3%; 95% of the values fall within ±3.7%) and not different between participants (P=0.06) within the euglycemic and hyperglycemic range, when CL is actively modulating insulin delivery. Consistent accuracy of the CGM within the euglycemic-hyperglycemic range using the FreeStyle Navigator II was observed and supports its use in home CL studies. Our results may contribute toward establishing normative CGM performance criteria for unsupervised home use of CL.
Tack, Cornelius; Pohlmeier, Harald; Behnke, Thomas; Schmid, Volkmar; Grenningloh, Marco; Forst, Thomas; Pfützner, Andreas
2012-04-01
This multicenter study was conducted to evaluate the performance of five recently introduced blood glucose (BG) monitoring (BGM) devices under daily routine conditions in comparison with the YSI (Yellow Springs, OH) 2300 Stat Plus glucose analyzer. Five hundred one diabetes patients with experience in self-monitoring of BG were randomized to use three of five different BGM devices (FreeStyle Lite® [Abbott Diabetes Care Inc., Alameda, CA], FreeStyle Freedom Lite [Abbott Diabetes Care], OneTouch® UltraEasy® [LifeScan Inc., Milpitas, CA], Accu-Chek® Aviva [Roche Diagnostics, Mannheim, Germany], and Contour® [Bayer Vital GmbH, Leverkusen, Germany]) in a daily routine setting. All devices and strips were purchased from local regular distribution sources (pharmacies, four strip lots per device). The patients performed the finger prick and the glucose measurement on their own. In parallel, a healthcare professional performed the glucose assessment with the reference method (YSI 2300 Stat Plus). The primary objective was the comparison of the mean absolute relative differences (MARD). Secondary objectives were compliance with the International Organization for Standardization (ISO) accuracy criteria under these routine conditions and Clarke and Parkes Error Grid analyses. MARD ranged from 4.9% (FreeStyle Lite) to 9.7% (OneTouch UltraEasy). The ISO 15197:2003 requirements were fulfilled by the FreeStyle Lite (98.8%), FreeStyle Freedom Lite (97.5%), and Accu-Chek Aviva (97.0%), but not by the Contour (92.4%) and OneTouch UltraEasy (91.1%). The number of values in Zone A of the Clarke Error Grid analysis was highest for the FreeStyle Lite (98.8%) and lowest for the OneTouch Ultra Easy (90.4%). FreeStyle Lite, FreeStyle Freedom Lite, and Accu-Chek Aviva performed very well in this study with devices and strips purchased through regular distribution channels, with the FreeStyle Lite achieving the lowest MARD in this investigation.
Tack, Cornelius; Pohlmeier, Harald; Behnke, Thomas; Schmid, Volkmar; Grenningloh, Marco; Forst, Thomas
2012-01-01
Abstract Background This multicenter study was conducted to evaluate the performance of five recently introduced blood glucose (BG) monitoring (BGM) devices under daily routine conditions in comparison with the YSI (Yellow Springs, OH) 2300 Stat Plus glucose analyzer. Methods Five hundred one diabetes patients with experience in self-monitoring of BG were randomized to use three of five different BGM devices (FreeStyle Lite® [Abbott Diabetes Care Inc., Alameda, CA], FreeStyle Freedom Lite [Abbott Diabetes Care], OneTouch® UltraEasy® [LifeScan Inc., Milpitas, CA], Accu-Chek® Aviva [Roche Diagnostics, Mannheim, Germany], and Contour® [Bayer Vital GmbH, Leverkusen, Germany]) in a daily routine setting. All devices and strips were purchased from local regular distribution sources (pharmacies, four strip lots per device). The patients performed the finger prick and the glucose measurement on their own. In parallel, a healthcare professional performed the glucose assessment with the reference method (YSI 2300 Stat Plus). The primary objective was the comparison of the mean absolute relative differences (MARD). Secondary objectives were compliance with the International Organization for Standardization (ISO) accuracy criteria under these routine conditions and Clarke and Parkes Error Grid analyses. Results MARD ranged from 4.9% (FreeStyle Lite) to 9.7% (OneTouch UltraEasy). The ISO 15197:2003 requirements were fulfilled by the FreeStyle Lite (98.8%), FreeStyle Freedom Lite (97.5%), and Accu-Chek Aviva (97.0%), but not by the Contour (92.4%) and OneTouch UltraEasy (91.1%). The number of values in Zone A of the Clarke Error Grid analysis was highest for the FreeStyle Lite (98.8%) and lowest for the OneTouch Ultra Easy (90.4%). Conclusions FreeStyle Lite, FreeStyle Freedom Lite, and Accu-Chek Aviva performed very well in this study with devices and strips purchased through regular distribution channels, with the FreeStyle Lite achieving the lowest MARD in this investigation. 
PMID:22176154
Hoss, Udo; Jeddi, Iman; Schulz, Mark; Budiman, Erwin; Bhogal, Claire; McGarraugh, Geoffrey
2010-08-01
Commercial continuous subcutaneous glucose monitors require in vivo calibration using capillary blood glucose tests. Feasibility of factory calibration, i.e., sensor batch characterization in vitro with no further need for in vivo calibration, requires a predictable and stable in vivo sensor sensitivity and limited inter- and intra-subject variation of the ratio of interstitial to blood glucose concentration. Twelve volunteers wore two FreeStyle Navigator (Abbott Diabetes Care, Alameda, CA) continuous glucose monitoring systems in parallel for two consecutive 5 day sensor wears (four sensors per subject, 48 sensors total). Sensors from a prototype sensor lot with low variability in glucose sensitivity were used for the study. Median sensor sensitivity values based on capillary blood glucose were calculated per sensor and compared for inter- and intra-subject variation. Mean absolute relative difference (MARD) calculation and error grid analysis were performed using a single calibration factor for all sensors to simulate factory calibration and compared to standard fingerstick calibration. Sensor sensitivity variation was 4.6% in vitro, increasing to 8.3% in vivo (P < 0.0001). Analysis of variance revealed no significant inter-subject differences in sensor sensitivity (P = 0.134). Applying a single universal calibration factor retrospectively to all sensors resulted in a MARD of 10.4% and 88.1% of values in Clarke error grid zone A, compared to a MARD of 10.9% and 86% of values in zone A for fingerstick calibration. Factory calibration of sensors for continuous subcutaneous glucose monitoring is feasible with accuracy similar to standard fingerstick calibration. Additional data are required to confirm this result in subjects with diabetes.
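Factory calibration as described amounts to replacing each sensor's fingerstick-derived sensitivity with one batch-wide factor. A toy sketch with hypothetical current/glucose pairs; the units, values, and median-based batch factor are invented for illustration:

```python
from statistics import median

# Hypothetical paired (sensor current in nA, reference glucose in mg/dL) data
sensors = [
    [(1.0, 100), (1.6, 160), (0.8, 82)],   # sensor 1
    [(1.1, 100), (1.7, 150), (0.9, 80)],   # sensor 2
]

def sensitivity(pairs):
    """Median per-sensor sensitivity (nA per mg/dL)."""
    return median(current / glucose for current, glucose in pairs)

per_sensor = [sensitivity(p) for p in sensors]
universal = median(per_sensor)              # single batch-wide calibration factor

def glucose(current, factor):
    """Convert raw sensor current to glucose with a calibration factor."""
    return current / factor

# With factory calibration every sensor uses `universal` instead of its own factor
print(round(glucose(1.2, universal), 1))
```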
Validation of the continuous glucose monitoring sensor in preterm infants.
Beardsall, K; Vanhaesebrouck, S; Ogilvy-Stuart, A L; Vanhole, C; VanWeissenbruch, M; Midgley, P; Thio, M; Cornette, L; Ossuetta, I; Palmer, C R; Iglesias, I; de Jong, M; Gill, B; de Zegher, F; Dunger, D B
2013-03-01
Recent studies have highlighted the need for improved methods of monitoring glucose control in intensive care to reduce hyperglycaemia without increasing the risk of hypoglycaemia. Continuous glucose monitoring is increasingly used in children with diabetes, but there are few data regarding its use in the preterm infant, particularly at extremes of glucose levels and over prolonged periods. This study aimed to assess the accuracy of the continuous glucose monitoring sensor (CGMS) across the glucose profile, and to determine whether there was any deterioration over a 7 day period. Prospectively collected CGMS data from the NIRTURE Trial, an international multicentre randomised controlled trial, were compared with data obtained simultaneously using point-of-care glucose monitors in 188 very low birth weight control infants. Outcome measures were optimal accuracy and performance goals (American Diabetes Association consensus), Bland-Altman analysis, error grid analyses and accuracy. The mean (SD) duration of CGMS recordings was 156.18 (29) h (6.5 days), with a total of 5,207 paired glucose levels. CGMS data correlated well with point-of-care devices (r=0.94), with minimal bias, and met the Clarke error grid and consensus grid criteria for clinical significance. Accuracy of single readings in detecting set thresholds of hypoglycaemia or hyperglycaemia was poor. There was no deterioration over time from insertion. CGMS can provide information on trends in glucose control and guidance on the need for blood glucose assessment. This highlights the potential use of CGMS in optimising glucose control in preterm infants.
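The poor single-reading threshold detection reported here is typically quantified as sensitivity and specificity against a reference cutoff. A sketch with hypothetical paired values; the 47 mg/dL hypoglycaemia cutoff and all data are assumptions for illustration:

```python
def threshold_detection(cgm, ref, cutoff, below=True):
    """Sensitivity and specificity of single CGM readings for detecting a
    reference threshold event (e.g. hypoglycaemia below `cutoff` mg/dL)."""
    event = (lambda g: g < cutoff) if below else (lambda g: g > cutoff)
    tp = sum(event(c) and event(r) for c, r in zip(cgm, ref))
    tn = sum(not event(c) and not event(r) for c, r in zip(cgm, ref))
    fp = sum(event(c) and not event(r) for c, r in zip(cgm, ref))
    fn = sum(not event(c) and event(r) for c, r in zip(cgm, ref))
    sens = tp / (tp + fn) if tp + fn else float("nan")
    spec = tn / (tn + fp) if tn + fp else float("nan")
    return sens, spec

# Hypothetical paired values (mg/dL)
cgm = [40, 55, 90, 120, 44]
ref = [38, 49, 95, 115, 52]
sens, spec = threshold_detection(cgm, ref, cutoff=47)
print(round(sens, 2), round(spec, 2))
```

Good correlation over the whole profile (as in the trial) can coexist with weak sensitivity or specificity at a single clinical cutoff, which is exactly the limitation the authors flag.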
National Grid Deep Energy Retrofit Pilot Program—Clark Residence
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2010-03-30
In this case study, Building Science Corporation partnered with the local utility company, National Grid, on a deep energy retrofit pilot program for Massachusetts homes. This project involved the renovation of an 18th-century Cape-style building and achieved a super-insulated enclosure (R-35 walls, R-50+ roof, R-20+ foundation), extensive water management improvements, a high-efficiency water heater, and state-of-the-art ventilation.
A modified adjoint-based grid adaptation and error correction method for unstructured grid
NASA Astrophysics Data System (ADS)
Cui, Pengcheng; Li, Bin; Tang, Jing; Chen, Jiangtao; Deng, Youqi
2018-05-01
Grid adaptation is an important strategy to improve the accuracy of output functions (e.g. drag, lift, etc.) in computational fluid dynamics (CFD) analysis and design applications. This paper presents a modified robust grid adaptation and error correction method for reducing simulation errors in integral outputs. The procedure is based on discrete adjoint optimization theory, in which the estimated global error of output functions can be directly related to the local residual error. According to this relationship, the local residual error contribution can be used as an indicator in a grid adaptation strategy designed to generate refined grids for accurately estimating the output functions. This grid adaptation and error correction method is applied to subsonic and supersonic simulations around three-dimensional configurations. Numerical results demonstrate that the grid regions to which the output functions are sensitive are detected and refined after grid adaptation, and the accuracy of the output functions is markedly improved after error correction. The proposed grid adaptation and error correction method compares very favorably in terms of output accuracy and computational efficiency with traditional feature-based grid adaptation.
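The adjoint relationship this abstract builds on (the output error of an approximate solution estimated as an adjoint-weighted residual, delta J ≈ psi^T (f - A u_h), and added back as a correction) can be demonstrated on a small linear system, where the correction is exact for a linear output. This is a generic illustration of the principle, not the paper's CFD implementation:

```python
import numpy as np

A = np.array([[4.0, 1.0], [1.0, 3.0]])
f = np.array([1.0, 2.0])
g = np.array([1.0, 1.0])                  # output functional J(u) = g . u

u_exact = np.linalg.solve(A, f)
u_h = u_exact + np.array([0.05, -0.03])   # perturbed "coarse" solution

psi = np.linalg.solve(A.T, g)             # discrete adjoint: A^T psi = g
residual = f - A @ u_h                    # local residual of the coarse solution
correction = psi @ residual               # adjoint-weighted residual = output error

J_h = g @ u_h
J_corrected = J_h + correction
print(np.isclose(J_corrected, g @ u_exact))
```

For nonlinear CFD outputs the same quantity is only an estimate, and large local contributions `psi * residual` mark cells worth refining, which is the adaptation indicator the paper describes.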
The Need for Balance in Attack Aviation Employment Against Hybrid Threats
2014-06-13
they mitigate airpower as well as their ability to counter landpower. Cilluffo and Clark offer a further explanation of hybrid threats. Although...power radios, and the systematic manipulation of the power grids resulting in the flickering of the lights in certain towns to alert fighters...
Notes on Accuracy of Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2011-01-01
Truncation-error analysis is a reliable tool in predicting convergence rates of discretization errors on regular smooth grids. However, it is often misleading in application to finite-volume discretization schemes on irregular (e.g., unstructured) grids. Convergence of truncation errors severely degrades on general irregular grids; a design-order convergence can be achieved only on grids with a certain degree of geometric regularity. Such degradation of truncation-error convergence does not necessarily imply a lower-order convergence of discretization errors. In these notes, irregular-grid computations demonstrate that the design-order discretization-error convergence can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jakeman, J.D., E-mail: jdjakem@sandia.gov; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the physical discretization error and the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity of the sparse grid. Utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
Mortellaro, Mark; DeHennis, Andrew
2014-11-15
A continuous glucose monitoring (CGM) system consisting of a wireless, subcutaneously implantable glucose sensor and a body-worn transmitter is described, and clinical performance over a 28 day implant period in 12 type 1 diabetic patients is reported. The implantable sensor is constructed of a fluorescent, boronic acid-based glucose-indicating polymer coated onto a miniaturized, polymer-encased optical detection system. The external transmitter wirelessly communicates with and powers the sensor and contains Bluetooth capability for interfacing with a smartphone application. The accuracy of 19 implanted sensors was evaluated over 28 days during 6 in-clinic sessions by comparing the CGM glucose values to venous blood glucose measurements taken every 15 min. Mean absolute relative difference (MARD) for all sensors was 11.6 ± 0.7%, and Clarke error grid analysis showed that 99% of paired data points were in the combined A and B zones. Copyright © 2014 The Authors. Published by Elsevier B.V. All rights reserved.
Noninvasive in vivo glucose sensing using an iris based technique
NASA Astrophysics Data System (ADS)
Webb, Anthony J.; Cameron, Brent D.
2011-03-01
Physiological glucose monitoring is an important aspect of the treatment of individuals afflicted with diabetes mellitus. Although invasive techniques for glucose monitoring are widely available, it would be very beneficial to make such measurements noninvasively. In this study, a New Zealand White (NZW) rabbit animal model was used to evaluate a developed iris-based imaging technique for the in vivo measurement of physiological glucose concentration. The animals were anesthetized with isoflurane, and an insulin/dextrose protocol was used to control blood glucose concentration. To help restrict eye movement, a custom ocular fixation device was used. During the experimental time frame, near-infrared illuminated iris images were acquired along with corresponding discrete blood glucose measurements taken with a handheld glucometer. Calibration was performed using an image-based partial least squares (PLS) technique. Independent validation was also performed to assess model performance, along with Clarke error grid analysis (CEGA). Initial validation results were promising and show that a high percentage of the predicted glucose concentrations are within 20% of the reference values.
The performance of flash glucose monitoring in critically ill patients with diabetes.
Ancona, Paolo; Eastwood, Glenn M; Lucchetta, Luca; Ekinci, Elif I; Bellomo, Rinaldo; Mårtensson, Johan
2017-06-01
Frequent glucose monitoring may improve glycaemic control in critically ill patients with diabetes. We aimed to assess the accuracy of a novel subcutaneous flash glucose monitor (FreeStyle Libre [Abbott Diabetes Care]) in these patients. We applied the FreeStyle Libre sensor to the upper arm of eight patients with diabetes in the intensive care unit and obtained hourly flash glucose measurements. Duplicate recordings were obtained to assess test-retest reliability. The reference glucose level was measured in arterial or capillary blood. We determined numerical accuracy using Bland-Altman methods, the mean absolute relative difference (MARD) and whether the International Organization for Standardization (ISO) and Clinical and Laboratory Standards Institute Point of Care Testing (CLSI POCT) criteria were met. Clarke error grid (CEG) and surveillance error grid (SEG) analyses were used to determine clinical accuracy. We compared 484 duplicate flash glucose measurements and observed a Pearson correlation coefficient of 0.97 and a coefficient of repeatability of 1.6 mmol/L. We studied 185 flash readings paired with arterial glucose levels, and 89 paired with capillary glucose levels. Using the arterial glucose level as the reference, we found a mean bias of 1.4 mmol/L (limits of agreement, -1.7 to 4.5 mmol/L). The MARD was 14% (95% CI, 12%-16%), and the proportions of measurements meeting ISO and CLSI POCT criteria were 64.3% and 56.8%, respectively. The proportions of values within a low-risk zone on CEG and SEG analyses were 97.8% and 99.5%, respectively. Using capillary glucose levels as the reference, we found that numerical and clinical accuracy were lower. The subcutaneous FreeStyle Libre blood glucose measurement system showed high test-retest reliability and acceptable accuracy when compared with arterial blood glucose measurement in critically ill patients with diabetes.
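The Bland-Altman bias and 95% limits of agreement used here for numerical accuracy reduce to the mean and standard deviation of the paired differences. A minimal sketch with invented values:

```python
from statistics import mean, stdev

def bland_altman(device, reference):
    """Mean bias and 95% limits of agreement between paired measurements."""
    diffs = [d - r for d, r in zip(device, reference)]
    bias = mean(diffs)
    sd = stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Invented paired glucose values (mmol/L), not study data
flash = [8.1, 9.5, 6.2, 11.0, 7.4]
arterial = [7.0, 8.3, 5.1, 9.4, 6.5]
bias, (lo, hi) = bland_altman(flash, arterial)
print(round(bias, 2))
```

A positive bias, as in the study's 1.4 mmol/L, means the flash readings run systematically above the arterial reference; the limits of agreement bound where roughly 95% of individual differences fall.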
Jakeman, J. D.; Wildey, T.
2015-01-01
In this paper we present an algorithm for adaptive sparse grid approximations of quantities of interest computed from discretized partial differential equations. We use adjoint-based a posteriori error estimates of the interpolation error in the sparse grid to enhance the sparse grid approximation and to drive adaptivity. We show that utilizing these error estimates provides significantly more accurate functional values for random samples of the sparse grid approximation. We also demonstrate that alternative refinement strategies based upon a posteriori error estimates can lead to further increases in accuracy in the approximation over traditional hierarchical surplus based strategies. Throughout this paper we also provide and test a framework for balancing the physical discretization error with the stochastic interpolation error of the enhanced sparse grid approximation.
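The hierarchical-surplus refinement strategy that the adjoint-based estimates are compared against can be illustrated in one dimension: refine an interval wherever the surplus (the mismatch between the function at the midpoint and the current linear interpolant there) exceeds a tolerance. This is a toy sketch under that assumption; the paper works in many dimensions with adjoint error estimates.

```python
# Toy 1-D hierarchical-surplus-driven adaptive interpolation.
# The surplus at a midpoint is f(mid) minus the linear interpolant there;
# intervals with large surplus are refined recursively.
def adaptive_interp(f, a, b, tol=1e-3, max_depth=20):
    """Return sorted interpolation nodes, refined where |surplus| > tol."""
    nodes = {a: f(a), b: f(b)}
    def refine(lo, hi, depth):
        mid = 0.5 * (lo + hi)
        surplus = f(mid) - 0.5 * (nodes[lo] + nodes[hi])  # hierarchical surplus
        nodes[mid] = f(mid)
        if abs(surplus) > tol and depth < max_depth:
            refine(lo, mid, depth + 1)
            refine(mid, hi, depth + 1)
    refine(a, b, 0)
    return sorted(nodes)

# For f(x) = x^2 the surplus depends only on interval width, so refinement
# proceeds uniformly until the surplus drops below the tolerance.
pts = adaptive_interp(lambda x: x * x, 0.0, 1.0, tol=1e-2)
```

For smooth functions the surplus shrinks with interval width, so refinement terminates; for functions with localized features the nodes cluster where the surplus stays large, which is the point of the adaptive strategy.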
Marics, Gábor; Koncz, Levente; Eitler, Katalin; Vatai, Barbara; Szénási, Boglárka; Zakariás, David; Mikos, Borbála; Körner, Anna; Tóth-Heyn, Péter
2015-03-19
Continuous glucose monitoring (CGM) was originally developed for diabetic patients, and it may be a useful tool for monitoring glucose changes in the pediatric intensive care unit (PICU). Its use is, however, limited by the lack of sufficient data on its reliability under insufficient peripheral perfusion. We aimed to correlate the accuracy of CGM with laboratory markers relevant to disturbed tissue perfusion. In 38 pediatric patients (age range, 0-18 years) requiring intensive care, we tested the effect of pH, lactate, hematocrit and serum potassium on the difference between CGM and meter glucose measurements. Guardian® (Medtronic®) CGM results were compared to GEM 3000 (Instrumentation Laboratory®) and point-of-care measurements. The clinical accuracy of CGM was evaluated by Clarke error grid analysis, Bland-Altman analysis and Pearson's correlation. We used the Friedman test for statistical analysis (statistical significance was established as p < 0.05). CGM values exhibited considerable variability without any correlation with the examined laboratory parameters. Clarke and Bland-Altman analyses and Pearson's correlation coefficient demonstrated good clinical accuracy of CGM (zones A and B = 96%; the mean difference between reference and CGM glucose was 1.3 mg/dL, with 48 of the 780 calibration pairs falling outside 2 standard deviations; Pearson's correlation coefficient: 0.83). The accuracy of CGM measurements is independent of laboratory parameters relevant to tissue hypoperfusion. CGM may prove a reliable tool for continuous monitoring of glucose changes in PICUs, little influenced by tissue perfusion, but it is still not appropriate as the sole basis for clinical decisions.
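The Clarke error grid analysis used throughout these studies assigns each reference/sensor pair (in mg/dL) to zones A through E. A minimal classifier is sketched below; the boundaries follow a commonly used formulation of Clarke's original zones and should be treated as an illustrative approximation, not any study's validated software.

```python
# Simplified Clarke error grid zone classifier (glucose in mg/dL).
# Zone boundaries follow a commonly used formulation of Clarke's zones;
# illustrative approximation only.
def clarke_zone(ref, sensor):
    if (ref <= 70 and sensor <= 70) or 0.8 * ref <= sensor <= 1.2 * ref:
        return "A"  # clinically accurate (within 20%, or both hypoglycemic)
    if (ref >= 180 and sensor <= 70) or (ref <= 70 and sensor >= 180):
        return "E"  # erroneous: would prompt opposite treatment
    if (70 <= ref <= 290 and sensor >= ref + 110) or \
       (130 <= ref <= 180 and sensor <= (7.0 / 5.0) * ref - 182):
        return "C"  # overcorrection likely
    if (ref >= 240 and 70 <= sensor <= 180) or \
       (ref <= 175.0 / 3.0 and 70 <= sensor <= 180) or \
       (175.0 / 3.0 <= ref <= 70 and sensor >= (6.0 / 5.0) * ref):
        return "D"  # failure to detect out-of-range glucose
    return "B"      # benign deviation

# Example pairs (reference, sensor), made-up values
pairs = [(90, 100), (200, 60), (100, 220), (250, 150), (100, 130)]
zones = [clarke_zone(r, s) for r, s in pairs]
```

The "zones A and B" percentages quoted in these abstracts are simply the fraction of pairs classified as "A" or "B" by a classifier of this kind.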
Association rule mining on grid monitoring data to detect error sources
NASA Astrophysics Data System (ADS)
Maier, Gerhild; Schiffers, Michael; Kranzlmueller, Dieter; Gaidioz, Benjamin
2010-04-01
Error handling is a crucial task in an infrastructure as complex as a grid. There are several monitoring tools in place, which report failing grid jobs including exit codes. However, the exit codes do not always denote the actual fault which caused the job failure. Human time and knowledge are required to manually trace errors back to their underlying faults. We perform association rule mining on grid job monitoring data to automatically retrieve knowledge about the behavior of grid components by taking dependencies between grid job characteristics into account. In this way, problematic grid components are located automatically, and this information, expressed as association rules, is visualized in a web interface. This work decreases the time needed for fault recovery and improves a grid's reliability.
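Association rule mining over job-monitoring records can be sketched as a confidence-based search for implications between attribute values, e.g. "jobs at site X tend to fail". The record schema and attribute names below are hypothetical; the paper's actual data model is not shown in the abstract.

```python
# Toy association rule miner over grid-job records (hypothetical schema).
# A rule (attr=val) -> (attr=val) is kept if its confidence, i.e.
# P(rhs | lhs) estimated from the records, meets a threshold.
from itertools import combinations

jobs = [
    {"site": "CE-A", "exit": "ok"},
    {"site": "CE-A", "exit": "fail"},
    {"site": "CE-B", "exit": "fail"},
    {"site": "CE-B", "exit": "fail"},
]

def rules(records, min_conf=0.8):
    """Return (lhs, rhs, confidence) tuples with confidence >= min_conf."""
    out = []
    items = {(k, v) for r in records for k, v in r.items()}
    for a, b in combinations(items, 2):
        for lhs, rhs in ((a, b), (b, a)):
            n_lhs = sum(1 for r in records if r.get(lhs[0]) == lhs[1])
            n_both = sum(1 for r in records
                         if r.get(lhs[0]) == lhs[1] and r.get(rhs[0]) == rhs[1])
            if n_lhs and n_both / n_lhs >= min_conf:
                out.append((lhs, rhs, n_both / n_lhs))
    return out

found = rules(jobs)
```

On this toy data the miner recovers "site=CE-B implies exit=fail", which is exactly the kind of rule that would point an operator at a problematic grid component.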
NASA Astrophysics Data System (ADS)
Mohd Sakri, F.; Mat Ali, M. S.; Sheikh Salim, S. A. Z.
2016-10-01
The fluid physics of liquid draining from a tank is readily studied using numerical simulation. However, simulation becomes expensive when the draining involves multi-phase flow. Since an accurate numerical simulation can only be trusted if a proper method for error estimation is applied, this paper provides a systematic assessment of the error due to grid convergence using OpenFOAM. OpenFOAM is an open-source CFD toolbox, well known among researchers and institutions because it is free and ready to use. In this study, three grid resolutions are used: coarse, medium and fine. The Grid Convergence Index (GCI) is applied to estimate the error due to grid sensitivity. A monotonic convergence condition is obtained, showing that the grid convergence error is progressively reduced. The fine grid has a GCI value below 1%. The value extrapolated by Richardson extrapolation lies within the range of the GCI obtained.
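The GCI computation described above can be sketched with the standard Roache formulas for three solutions on coarse, medium and fine grids with a constant refinement ratio. This is a generic illustration (with made-up solution values), not the paper's code.

```python
import math

# Grid Convergence Index (Roache) from three grid solutions with constant
# refinement ratio r and safety factor fs (1.25 is customary for 3 grids).
def gci(f_fine, f_medium, f_coarse, r=2.0, fs=1.25):
    """Return (observed order p, Richardson-extrapolated value, GCI_fine in %)."""
    # Observed order of convergence from the three solutions
    p = math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)
    # Richardson extrapolation to an (approximately) grid-free value
    f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
    # GCI on the fine grid, as a percentage
    e_rel = abs((f_medium - f_fine) / f_fine)
    gci_fine = fs * e_rel / (r**p - 1.0) * 100.0
    return p, f_exact, gci_fine

# Made-up second-order data: f = 1 + h^2 with h = 0.1, 0.2, 0.4
p, f_exact, g = gci(1.01, 1.04, 1.16, r=2.0)
```

A monotonic sequence of solutions (as reported in the abstract) is required for the observed order `p` to be meaningful; the extrapolated value falling within the GCI band is the consistency check the authors describe.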
In Search of Grid Converged Solutions
NASA Technical Reports Server (NTRS)
Lockard, David P.
2010-01-01
Assessing solution error continues to be a formidable task when numerically solving practical flow problems. Currently, grid refinement is the primary method used for error assessment. The minimum grid spacing requirements to achieve design order accuracy for a structured-grid scheme are determined for several simple examples using truncation error evaluations on a sequence of meshes. For certain methods and classes of problems, obtaining design order may not be sufficient to guarantee low error. Furthermore, some schemes can require much finer meshes to obtain design order than would be needed to reduce the error to acceptable levels. Results are then presented from realistic problems that further demonstrate the challenges associated with using grid refinement studies to assess solution accuracy.
Barnette, Daniel W.
2002-01-01
The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.
Noninvasive wearable sensor for indirect glucometry.
Zilberstein, Gleb; Zilberstein, Roman; Maor, Uriel; Righetti, Pier Giorgio
2018-04-02
A noninvasive mini-sensor for blood glucose concentration assessment has been developed. The monitoring is performed by gently pressing a wrist or fingertip onto the chemochromic mixture coating a thin glass or polymer film positioned on the back panel of a smart watch with PPG/HRM (photoplethysmographic/heart rate monitoring sensor). The various chemochromic components measure the absolute values of the following metabolites present in the sweat: acetone, beta-hydroxybutyrate, acetoacetate, water, carbon dioxide, lactate anion, pyruvic acid, Na and K salts. Taken together, all these parameters give information about blood glucose concentration, calculated via multivariate analysis based on neural network algorithms built into the sensor. The Clarke error grid shows an excellent correlation between data measured by the standard invasive glucose analyser and the present noninvasive sensor, with all points aligned along a 45-degree diagonal and contained almost exclusively in sector A. Graphs measuring glucose levels five times a day (prior, during and after breakfast and prior, during and after lunch), for different individuals (males and females) show a good correlation between the two curves of conventional, invasive meters vs. the noninvasive sensor, with an error of ±15%. This novel, noninvasive sensor for indirect glucometry is fully miniaturized, easy to use and operate and could represent a valid alternative in clinical settings and for individual, personal users, to current, invasive tools. © 2018 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Photovoltaic at Hollywood and Desert Breeze Recreational Centers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ammerman, Shane
Executive Summary Renewable Energy Initiatives for Clark County Parks and Recreation Solar Project DOE grant # DE-EE0003180 In accordance with the goals of the Department of Energy (DOE) Office of Energy Efficiency and Renewable Energy for promoting solar energy as clean, carbon-free and cost-effective, the County believed that a recreational center was an ideal place to promote solar energy technologies to the public. This project included the construction of solar electricity generation facilities (40kW) at two Clark County facility sites, Desert Breeze Recreational Center and Hollywood Recreational Center, with educational kiosks and Green Boxes for classroom instruction. The major objectives and goals of this Solar Project include demonstration of state of the art technologies for the generation of electricity from solar technology and the creation of an informative and educational tool in regards to the benefits and process of generating alternative energy. Clark County partnered with Anne Johnson (design architect/consultant), Affiliated Engineers Inc. (AEI), Desert Research Institute (DRI), and Morse Electric. The latest photovoltaic technologies were used in the project to help create the greatest expected energy savings for each recreational center. This coupled with the data created from the monitoring system will help Clark County and NREL further understand the real time outputs from the system. The educational portion created with AEI and DRI incorporates material for all ages with a focus on K - 12. The AEI component is an animated story telling the fundamentals of how sunlight is turned into electricity and DRI's creation of Solar Green Boxes brings environmental education into the classroom. In addition to the educational component for the public, the energy that is created through the photovoltaic system also translates into saved money and health benefits for the general public.
This project has helped Clark County to further add to its own energy reduction goals created by the energy management agenda (Resolution to Encourage Sustainability) and the County’s Eco-initiative. Each site has installed photovoltaic panels on the existing roof structures that exhibit suitable solar exposure. The generation systems utilize solar energy creating electricity used for the facility’s lighting system and other electrical requirements. Unused electricity is sent to the electric utility grid, often at peak demand times. Educational signage, kiosks and information have been included to inform and expand the public’s understanding of solar energy technology. The Solar Green Boxes were created for further hands on classroom education of solar power. In addition, data is sent by a Long Term PV performance monitoring system, complete with data transmission to NREL (National Renewable Energy Laboratory), located in Golden, CO. This system correlates local solar irradiance and weather with power production. The expected outcomes of this Solar Project are as follows: (1) Successful photovoltaic electricity generation technologies to capture solar energy in a useful form of electrical energy. (2) Reduction of greenhouse gas emissions and environmental degradation resulting from reduced energy demand from traditional electricity sources such as fossil fuel fired and nuclear power plants. (3) Advance the research and development of solar electricity generation. (4) The education of the general public in regards to the benefits of environmentally friendly electricity generation and Clark County’s efforts to encourage sustainable living practices. (5) To provide momentum for the nexus for future solar generation facilities in Clark County facilities and buildings and further the County’s energy reduction goals. (6) To ultimately contribute to the reduction of dependence on foreign oil and other unsustainable sources of energy. 
This Solar Project addresses several objectives and goals of the U.S. Department of Energy's Solar Energy Technology Program. The project improves the integration and performance of solar electricity directly through implementation of cutting edge technology. The project further addresses this goal by laying important ground work and infrastructure for integration into the utility grid in future related projects. There will also be added security, reliability, and diversity to the energy system by providing and using reliable, secure, distributed electricity in Clark County facilities as well as sending such electricity back into the utility electric grid. A final major objective met by the Solar Project will be the displacement of energy derived by fossil fuels with clean renewable energy created by photovoltaic panels.
Zhou, Jian; Lv, Xiaofeng; Mu, Yiming; Wang, Xianling; Li, Jing; Zhang, Xingguang; Wu, Jinxiao; Bao, Yuqian; Jia, Weiping
2012-08-01
The purpose of this multicenter study was to investigate the accuracy of a real-time continuous glucose monitoring sensor in Chinese diabetes patients. In total, 48 patients with type 1 or 2 diabetes from three centers in China were included in the study. The MiniMed Paradigm(®) 722 insulin pump (Medtronic, Northridge, CA) was used to monitor the real-time continuous changes of blood glucose levels for three successive days. Venous blood of the subjects was randomly collected every 15 min for seven consecutive hours on the day when the subjects were wearing the sensor. Reference values were provided by the YSI(®) 2300 STAT PLUS™ glucose and lactate analyzer (YSI Life Sciences, Yellow Springs, OH). In total, 1,317 paired YSI-sensor values were collected from the 48 patients. Of the sensor readings, 88.3% (95% confidence interval, 0.84-0.92) were within ±20% of the YSI values, and 95.7% were within ±30% of the YSI values. Clarke and consensus error grid analyses showed that the proportions of YSI-sensor pairs in zones A and B were 99.1% and 99.9%, respectively. Continuous error grid analysis showed that the ratios of the YSI-sensor values in the region of accurate reading, benign errors, and erroneous reading were 96.4%, 1.8%, and 1.8%, respectively. The mean absolute relative difference (ARD) for all subjects was 10.4%, and the median ARD was 7.8%. Bland-Altman analysis detected a mean difference of 3.84 mg/dL. Trend analysis revealed that 86.1% of the difference of the rates of change between the YSI values and the sensor readings occurred within the range of 1 mg/dL/min. The Paradigm insulin pump has high accuracy in both monitoring the real-time continuous changes and predicting the trend of changes in blood glucose level. However, actual clinical manifestations should be taken into account for diagnosis of hypoglycemia.
Dutt-Ballerstadt, Ralph; Evans, Colton; Pillai, Arun P; Orzeck, Eric; Drabek, Rafal; Gowda, Ashok; McNichols, Roger
2012-03-01
We report results of a pilot clinical study of a subcutaneous fluorescence affinity sensor (FAS) for continuous glucose monitoring conducted in people with type 1 and type 2 diabetes. The device was assessed based on performance, safety, and comfort level under acute conditions (4 h). A second-generation FAS (BioTex Inc., Houston, TX) was subcutaneously implanted in the abdomens of 12 people with diabetes, and its acute performance to excursions in blood glucose was monitored over 4 h. After 30-60 min the subjects, who all had fasting blood glucose levels of less than 200 mg/dl, received a glucose bolus of 75 g/liter dextrose by oral administration. Capillary blood glucose samples were obtained from the finger tip. The FAS data were retrospectively evaluated by linear least squares regression analysis and by the Clarke error grid method. Comfort levels during insertion, operation, and sensor removal were scored by the subjects using an analog pain scale. After retrospective calibration of 17 sensors implanted in 12 subjects, error grid analysis showed 97% of the paired values in zones A and B and 1.5% in zones C and D, respectively. The mean absolute relative error between sensor signal and capillary blood glucose was 13% [±15% standard deviation (SD), 100-350 mg/dl] with an average correlation coefficient of 0.84 (±0.24 SD). The actual average "warm-up" time for the FAS readings, at which highest correlation with glucose readings was determined, was 65 (±32 SD) min. Mean time lag was 4 (±5 SD) min during the initial operational hours. Pain levels during insertion and operation were modest. The in vivo performance of the FAS demonstrates feasibility of the fluorescence affinity technology to determine blood glucose excursions accurately and safely under acute dynamic conditions in humans with type 1 and type 2 diabetes. Specific engineering challenges to sensor and instrumentation robustness remain. 
Further studies will be required to validate its promising performance over longer implantation duration (5-7 days) in people with diabetes. © 2012 Diabetes Technology Society.
Posteriori error determination and grid adaptation for AMR and ALE computational fluid dynamics
DOE Office of Scientific and Technical Information (OSTI.GOV)
Lapenta, G. M.
2002-01-01
We discuss grid adaptation for application to AMR and ALE codes. Two new contributions are presented. First, a new method to locate the regions where truncation error is created due to insufficient accuracy: the operator recovery error origin (OREO) detector. The OREO detector is automatic, reliable, easy to implement and extremely inexpensive. Second, a new grid motion technique is presented for application to ALE codes. The method is based on the Brackbill-Saltzman approach but is directly linked to the OREO detector, moving the grid automatically to minimize the error.
Stability and error estimation for Component Adaptive Grid methods
NASA Technical Reports Server (NTRS)
Oliger, Joseph; Zhu, Xiaolei
1994-01-01
Component adaptive grid (CAG) methods for solving hyperbolic partial differential equations (PDEs) are discussed in this paper. Applying recent stability results for a class of numerical methods on uniform grids, the convergence of these methods for linear problems on component adaptive grids is established here. Furthermore, the computational error can be estimated on CAGs using the stability results. Using these estimates, the error can be controlled on CAGs. Thus, the solution can be computed efficiently on CAGs within a given error tolerance. Computational results for time-dependent linear problems in one and two space dimensions are presented.
Huang, Weiquan; Fang, Tao; Luo, Li; Zhao, Lin; Che, Fengzhu
2017-07-03
The grid strapdown inertial navigation system (SINS) used in polar navigation exhibits the same three kinds of periodic oscillation errors as conventional SINS based on a geographic coordinate system. For ships that can use external information to reset the system regularly, suppressing the Schuler periodic oscillation is an effective way to enhance navigation accuracy. A Kalman filter based on the grid SINS error model applicable to ships is established in this paper. The errors of the grid-level attitude angles can be accurately estimated when the external velocity contains a constant error, and correcting these errors through feedback effectively dampens the Schuler periodic oscillation. The simulation results show that, with the aid of an external reference velocity, the proposed external level damping algorithm based on the Kalman filter can suppress the Schuler periodic oscillation effectively. Compared with the traditional external level damping algorithm based on a damping network, the proposed algorithm reduces the overshoot errors when the grid SINS switches from the non-damping state to the damping state, effectively improving the navigation accuracy of the system.
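The core idea of estimating a quasi-constant error from noisy external measurements and removing it by feedback can be illustrated with a minimal scalar Kalman filter. This is a generic sketch with made-up noise parameters, not the paper's grid-SINS error model, which involves a full multi-state attitude/velocity filter.

```python
import random

# Minimal scalar Kalman filter: estimate a near-constant error level from
# noisy external-velocity measurements, so it can be fed back (damping).
# q = process noise variance, r = measurement noise variance (illustrative).
def kalman_estimate(measurements, q=1e-4, r=0.25):
    """Return the per-step state estimates."""
    x, p = 0.0, 1.0               # initial estimate and covariance
    estimates = []
    for z in measurements:
        p = p + q                 # predict: state modeled as near-constant
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # update with measurement innovation
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

random.seed(0)
true_error = 0.5                  # constant velocity error to be estimated
zs = [true_error + random.gauss(0.0, 0.5) for _ in range(200)]
est = kalman_estimate(zs)
```

Once the filter has converged, subtracting `est[-1]` from the indicated velocity is the feedback-correction step; in the paper the analogous correction is applied to the grid-level attitude angles.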
On Accuracy of Adaptive Grid Methods for Captured Shocks
NASA Technical Reports Server (NTRS)
Yamaleev, Nail K.; Carpenter, Mark H.
2002-01-01
The accuracy of two grid adaptation strategies, grid redistribution and local grid refinement, is examined by solving the 2-D Euler equations for the supersonic steady flow around a cylinder. Second- and fourth-order linear finite difference shock-capturing schemes, based on the Lax-Friedrichs flux splitting, are used to discretize the governing equations. The grid refinement study shows that for the second-order scheme, neither grid adaptation strategy improves the numerical solution accuracy compared to that calculated on a uniform grid with the same number of grid points. For the fourth-order scheme, the dominant first-order error component is reduced by the grid adaptation, while the design-order error component drastically increases because of the grid nonuniformity. As a result, both grid adaptation techniques improve the numerical solution accuracy only on the coarsest mesh or on very fine grids that are seldom found in practical applications because of the computational cost involved. Similar error behavior has been obtained for the pressure integral across the shock. A simple analysis shows that both grid adaptation strategies are not without penalties in the numerical solution accuracy. Based on these results, a new grid adaptation criterion for captured shocks is proposed.
NASA Technical Reports Server (NTRS)
Nakamura, S.
1983-01-01
The effects of truncation error on the numerical solution of transonic flows using the full potential equation are studied. The effects of adapting grid point distributions to various solution aspects including shock waves is also discussed. A conclusion is that a rapid change of grid spacing is damaging to the accuracy of the flow solution. Therefore, in a solution adaptive grid application an optimal grid is obtained as a tradeoff between the amount of grid refinement and the rate of grid stretching.
Engineering Data Compendium. Human Perception and Performance. Volume 1
1988-01-01
© 1986 by John Wiley & Sons, Inc. Reprinted with permission. Figure credits include Craske, B., & Crawshaw, M. (1974), differential errors of kinesthesis produced by previous limb position, Journal of Experimental Psychology, and Clark, F. J., & Horch, K. W., on muscular and joint receptors.
Temperature Dependence of Errors in Parameters Derived from Van't Hoff Studies.
ERIC Educational Resources Information Center
Dec, Steven F.; Gill, Stanley J.
1985-01-01
The method of Clarke and Glew is broadly applicable to studies of the temperature dependence of equilibrium constant measurements. The method is described and examples of its use in comparing calorimetric results and temperature dependent gas solubility studies are provided. (JN)
Accuracy of a continuous glucose monitoring system in dogs and cats with diabetic ketoacidosis.
Reineke, Erica L; Fletcher, Daniel J; King, Lesley G; Drobatz, Kenneth J
2010-06-01
(1) To determine the ability of a continuous interstitial glucose monitoring system (CGMS) to accurately estimate blood glucose (BG) in dogs and cats with diabetic ketoacidosis. (2) To determine the effect of perfusion, hydration, body condition score, severity of ketosis, and frequency of calibration on the accuracy of the CGMS. Prospective study. University Teaching Hospital. Thirteen dogs and 11 cats diagnosed with diabetic ketoacidosis were enrolled in the study within 24 hours of presentation. Once BG dropped below 22.2 mmol/L (400 mg/dL), a sterile flexible glucose sensor was placed aseptically in the interstitial space and attached to the continuous glucose monitoring device for estimation of the interstitial glucose every 5 minutes. BG measurements were taken with a portable BG meter every 2-4 hours at the discretion of the primary clinician and compared with CGMS glucose measurements. The CGMS estimates of BG and BG measured on the glucometer were strongly associated regardless of calibration frequency (calibration every 8 h: r=0.86, P<0.001; calibration every 12 h: r=0.85, P<0.001). Evaluation of this data using both the Clarke and Consensus error grids showed that 96.7% and 99% of the CGMS readings, respectively, were deemed clinically acceptable (Zones A and B errors). Interpatient variability in the accuracy of the CGMS glucose measurements was found but was not associated with body condition, perfusion, or degree of ketosis. A weak association between hydration status of the patient as assessed with the visual analog scale and absolute percent error (Spearman's rank correlation, rho=-0.079, 95% CI=-0.15 to -0.01, P=0.03) was found, with the device being more accurate in the more hydrated patients. The CGMS provides clinically accurate estimates of BG in patients with diabetic ketoacidosis.
Fendler, Wojciech; Hogendorf, Anna; Szadkowska, Agnieszka; Młynarski, Wojciech
2011-01-01
Self-monitoring of blood glucose (SMBG) is one of the cornerstones of diabetes management. The aims were to evaluate the potential for miscoding of a personal glucometer, to define a target population among pediatric patients with diabetes for a non-coding glucometer, and to assess the accuracy of the Contour TS non-coding system. The potential for miscoding during self-monitoring of blood glucose was evaluated by means of an anonymous questionnaire, with worst and best case scenarios evaluated depending on the response patterns. Testing of the Contour TS system was performed according to guidelines set by the national committee for clinical laboratory standards. The estimated frequency of individuals prone to miscoding ranged from 68.21% (95%CI 60.70-75.72%) to 7.95% (95%CI 3.86-12.31%) for the worst and best case scenarios, respectively. Factors associated with increased likelihood of miscoding were: a smaller number of tests per day, a greater number of individuals involved in testing, and self-testing by the patient with diabetes. The Contour TS device showed intra- and inter-assay accuracy of 95%, linear association with laboratory measurements (R2=0.99, p<0.0001) and a consistent but small bias of -1.12% (95% Confidence Interval -3.27 to 1.02%). Clarke error grid analysis showed 4% of values within the benign error zone (B), with the other measurements yielding an acceptably accurate result (zone A). The Contour TS system showed sufficient accuracy to be safely used in monitoring of pediatric diabetic patients. Patients from families with a high throughput of test strips or multiple individuals involved in SMBG using the same meter are candidates for clinical use of such devices due to an increased risk of calibration errors.
Continuous glucose monitoring: quality of hypoglycaemia detection.
Zijlstra, E; Heise, T; Nosek, L; Heinemann, L; Heckermann, S
2013-02-01
To evaluate the accuracy of a widely used continuous glucose monitoring (CGM) system and its ability to detect hypoglycaemic events. A total of 18 patients with type 1 diabetes mellitus used continuous glucose monitoring (Guardian REAL-Time CGMS) during two 9-day in-house periods. A hypoglycaemic threshold alarm alerted patients to sensor readings <70 mg/dl. Continuous glucose monitoring sensor readings were compared to laboratory reference measurements taken every 4 h and in case of a hypoglycaemic alarm. A total of 2317 paired data points were evaluated. Overall, the mean absolute relative difference (MARD) was 16.7%. The percentage of data points in the clinically accurate or acceptable Clarke error grid zones A + B was 94.6%. In the hypoglycaemic range, accuracy worsened (MARD 38.8%), leading to a failure to detect more than half of the true hypoglycaemic events (sensitivity 37.5%). Furthermore, more than half of the alarms that warned patients of hypoglycaemia were false (false alert rate 53.3%). Above the low alert threshold, the sensor confirmed 2077 of 2182 reference values (specificity 95.2%). Patients using continuous glucose monitoring should be aware of its limitation to accurately detect hypoglycaemia. © 2012 Blackwell Publishing Ltd.
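The detection statistics quoted above (sensitivity, specificity, false alert rate) follow from a 2x2 classification of paired sensor/reference values against the 70 mg/dl threshold. A minimal sketch with illustrative data (the function name and the example values are invented):

```python
# Hypoglycaemia-detection metrics from paired reference/sensor glucose
# values (mg/dl) against a threshold: a reading below threshold counts as
# an alarm, a reference below threshold as a true hypoglycaemic event.
def alarm_metrics(reference, sensor, threshold=70.0):
    tp = sum(1 for r, s in zip(reference, sensor) if r < threshold and s < threshold)
    fn = sum(1 for r, s in zip(reference, sensor) if r < threshold and s >= threshold)
    fp = sum(1 for r, s in zip(reference, sensor) if r >= threshold and s < threshold)
    tn = sum(1 for r, s in zip(reference, sensor) if r >= threshold and s >= threshold)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    false_alert_rate = fp / (tp + fp) if tp + fp else float("nan")  # FP among alarms
    return sensitivity, specificity, false_alert_rate

# Illustrative paired values, not the study's data
sens, spec, far = alarm_metrics([60, 65, 80, 90, 100], [65, 75, 65, 95, 105])
```

Note that the false alert rate is computed over raised alarms (FP / (TP + FP)), which is why a sensor can combine high specificity with a high false alert rate when true events are rare.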
Design, development, and evaluation of a novel microneedle array-based continuous glucose monitor.
Jina, Arvind; Tierney, Michael J; Tamada, Janet A; McGill, Scott; Desai, Shashi; Chua, Beelee; Chang, Anna; Christiansen, Mark
2014-05-01
The development of accurate, minimally invasive continuous glucose monitoring (CGM) devices has been the subject of much work by several groups, as it is believed that a less invasive and more user-friendly device will result in greater adoption of CGM by persons with insulin-dependent diabetes. This article presents the results of preliminary clinical studies in subjects with diabetes of a novel prototype microneedle-based continuous glucose monitor. In this device, an array of tiny hollow microneedles is applied into the epidermis from where glucose in interstitial fluid (ISF) is transported via passive diffusion to an amperometric glucose sensor external to the body. Comparison of 1396 paired device glucose measurements and fingerstick blood glucose readings for up to 72-hour wear in 10 diabetic subjects shows the device to be accurate and well tolerated by the subjects. Overall mean absolute relative difference (MARD) is 15% with 98.4% of paired points in the A+B region of the Clarke error grid. The prototype device has demonstrated clinically accurate glucose readings over 72 hours, the first time a microneedle-based device has achieved such performance. © 2014 Diabetes Technology Society.
Mediation of in vivo glucose sensor inflammatory response via nitric oxide release.
Gifford, Raeann; Batchelor, Melissa M; Lee, Youngmi; Gokulrangan, Giridharan; Meyerhoff, Mark E; Wilson, George S
2005-12-15
In vivo glucose sensor nitric oxide (NO) release is a means of mediating the inflammatory response that may cause sensor/tissue interactions and degraded sensor performance. The NO release (NOr) sensors were prepared by doping the outer polymeric membrane coating of previously reported needle-type electrochemical sensors with suitable lipophilic diazeniumdiolate species. The Clarke error grid correlation of sensor glycemia estimates versus blood glucose measured in Sprague-Dawley rats yielded 99.7% of the points for NOr sensors and 96.3% of points for the control within zones A and B (clinically acceptable) on Day 1, with a similar correlation for Day 3. Histological examination of the implant site demonstrated that the inflammatory response was significantly decreased for 100% of the NOr sensors at 24 h. The NOr sensors also showed a reduced run-in time of minutes versus hours for control sensors. NO evolution does increase protein nitration in tissue surrounding the sensor, which may be linked to the suppression of inflammation. This study further emphasizes the importance of NO as an electroactive species that can potentially interfere with glucose (peroxide) detection. The NOr sensor offers a viable option for in vivo glucose sensor development.
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
Cell-centered and node-centered approaches have been compared for unstructured finite-volume discretization of inviscid fluxes. The grids range from regular grids to irregular grids, including mixed-element grids and grids with random perturbations of nodes. Accuracy, complexity, and convergence rates of defect-correction iterations are studied for eight nominally second-order accurate schemes: two node-centered schemes with weighted and unweighted least-squares (LSQ) methods for gradient reconstruction, and six cell-centered schemes: two node-averaging schemes (with and without clipping) and four schemes that employ different stencils for LSQ gradient reconstruction. The cell-centered nearest-neighbor (CC-NN) scheme has the lowest complexity; a version of the scheme that involves smart augmentation of the LSQ stencil (CC-SA) has only a marginal complexity increase. All other schemes have larger complexity; the complexity of node-centered (NC) schemes is somewhat lower than that of cell-centered node-averaging (CC-NA) and full-augmentation (CC-FA) schemes. On highly anisotropic grids typical of those encountered in grid adaptation, discretization errors of five of the six cell-centered schemes converge with second order on all tested grids; the CC-NA scheme with clipping degrades solution accuracy to first order. The NC schemes converge with second order on regular and/or triangular grids and with first order on perturbed quadrilaterals and mixed-element grids. All schemes may produce large relative errors in gradient reconstruction on grids with perturbed nodes. Defect-correction iterations for schemes employing weighted least-squares gradient reconstruction diverge on perturbed stretched grids. Overall, the CC-NN and CC-SA schemes offer the best options of the lowest complexity and second-order discretization errors.
On anisotropic grids over a curved body typical of turbulent flow simulations, the discretization errors converge with second order and are small for the CC-NN, CC-SA, and CC-FA schemes on all grids and for NC schemes on triangular grids; the discretization errors of the CC-NA scheme without clipping do not converge on irregular grids. Accurate gradient reconstruction can be achieved by introducing a local approximate mapping; without approximate mapping, only the NC scheme with weighted LSQ method provides accurate gradients. Defect correction iterations for the CC-NA scheme without clipping diverge; for the NC scheme with weighted LSQ method, the iterations either diverge or converge very slowly. The best option in curved geometries is the CC-SA scheme that offers low complexity, second-order discretization errors, and fast convergence.
Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis
NASA Technical Reports Server (NTRS)
Abdol-Hamid, Khaled S.; Ghaffari, Farhad
2012-01-01
Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimates derived from iterative-convergence grid refinement, are presented. Computational results are based on an unstructured-grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach for computing the associated error estimates, derived by extrapolating from a base grid to an infinite-size grid, was first demonstrated on a sub-scale wind tunnel model at representative ascent flow conditions for which experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the longitudinal aerodynamic coefficients computed on the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the aerodynamic coefficients computed with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level being improved to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
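Base-grid-to-infinite-grid error estimates of this kind are conventionally obtained by Richardson extrapolation from solutions on systematically refined grids. A minimal generic sketch, not the authors' code; the function name and the fixed refinement ratio are illustrative assumptions:

```python
import math

def richardson_extrapolate(f1, f2, f3, r=2.0):
    """Estimate the infinite-grid value of an output quantity from
    solutions on three systematically refined grids (f3 coarsest,
    f1 finest) with a constant refinement ratio r."""
    # Observed order of convergence p from the three solutions.
    p = math.log(abs((f3 - f2) / (f2 - f1))) / math.log(r)
    # Extrapolated estimate for an infinitely fine grid.
    f_inf = f1 + (f1 - f2) / (r**p - 1.0)
    return f_inf, p
```

For an exactly second-order sequence such as 1.04, 1.01, 1.0025 (errors 0.04, 0.01, 0.0025 about a limit of 1.0), the routine recovers p = 2 and the infinite-grid value 1.0.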
Grid convergence errors in hemodynamic solution of patient-specific cerebral aneurysms.
Hodis, Simona; Uthamaraj, Susheil; Smith, Andrea L; Dennis, Kendall D; Kallmes, David F; Dragomir-Daescu, Dan
2012-11-15
Computational fluid dynamics (CFD) has become a cutting-edge tool for investigating hemodynamic dysfunctions in the body. It has the potential to help physicians quantify in more detail the phenomena difficult to capture with in vivo imaging techniques. CFD simulations in anatomically realistic geometries pose challenges in generating accurate solutions due to the grid distortion that may occur when the grid is aligned with complex geometries. In addition, results obtained with computational methods should be trusted only after the solution has been verified on multiple high-quality grids. The objective of this study was to present a comprehensive solution verification of the intra-aneurysmal flow results obtained on different morphologies of patient-specific cerebral aneurysms. We chose five patient-specific brain aneurysm models with different dome morphologies and estimated the grid convergence errors for each model. The grid convergence errors were estimated with respect to an extrapolated solution based on the Richardson extrapolation method, which accounts for the degree of grid refinement. For four of the five models, calculated velocity, pressure, and wall shear stress values at six different spatial locations converged monotonically, with maximum uncertainty magnitudes ranging from 12% to 16% on the finest grids. Due to the geometric complexity of the fifth model, the grid convergence errors showed oscillatory behavior; therefore, each patient-specific model required its own grid convergence study to establish the accuracy of the analysis. Copyright © 2012 Elsevier Ltd. All rights reserved.
Evaluating the accuracy and large inaccuracy of two continuous glucose monitoring systems.
Leelarathna, Lalantha; Nodale, Marianna; Allen, Janet M; Elleri, Daniela; Kumareswaran, Kavita; Haidar, Ahmad; Caldwell, Karen; Wilinska, Malgorzata E; Acerini, Carlo L; Evans, Mark L; Murphy, Helen R; Dunger, David B; Hovorka, Roman
2013-02-01
This study evaluated the accuracy and large inaccuracy of the Freestyle Navigator (FSN) (Abbott Diabetes Care, Alameda, CA) and Dexcom SEVEN PLUS (DSP) (Dexcom, Inc., San Diego, CA) continuous glucose monitoring (CGM) systems during closed-loop studies. Paired CGM and plasma glucose values (7,182 data pairs) were collected, every 15-60 min, from 32 adults (36.2±9.3 years) and 20 adolescents (15.3±1.5 years) with type 1 diabetes who participated in closed-loop studies. Levels 1, 2, and 3 of large sensor error with increasing severity were defined according to absolute relative deviation greater than or equal to ±40%, ±50%, and ±60% at a reference glucose level of ≥6 mmol/L or absolute deviation greater than or equal to ±2.4 mmol/L, ±3.0 mmol/L, and ±3.6 mmol/L at a reference glucose level of <6 mmol/L. Median absolute relative deviation was 9.9% for FSN and 12.6% for DSP. Proportions of data points in Zones A and B of Clarke error grid analysis were similar (96.4% for FSN vs. 97.8% for DSP). Large sensor over-reading, which increases risk of insulin over-delivery and hypoglycemia, occurred two- to threefold more frequently with DSP than FSN (once every 2.5, 4.6, and 10.7 days of FSN use vs. 1.2, 2.0, and 3.7 days of DSP use for Level 1-3 errors, respectively). At Levels 2 and 3, large sensor errors lasting 1 h or longer were absent with FSN but persisted with DSP. FSN and DSP differ substantially in the frequency and duration of large inaccuracy despite only modest differences in conventional measures of numerical and clinical accuracy. Further evaluations are required to confirm that FSN is more suitable for integration into closed-loop delivery systems.
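The level definitions above translate directly into a classification rule. A sketch using the thresholds stated in the abstract; the function names and the median-based MARD helper are ours:

```python
import statistics

def large_error_level(cgm, ref):
    """Classify one CGM/reference pair (both in mmol/L) into large-error
    Levels 1-3 per the study's definition; returns 0 for no large error."""
    if ref >= 6.0:
        dev = abs(cgm - ref) / ref * 100.0   # relative criterion (%)
        cuts = (40.0, 50.0, 60.0)
    else:
        dev = abs(cgm - ref)                 # absolute criterion (mmol/L)
        cuts = (2.4, 3.0, 3.6)
    return sum(dev >= c for c in cuts)       # highest level reached

def mard(pairs):
    """Median absolute relative deviation (%) over (cgm, ref) pairs."""
    return statistics.median(abs(c - r) / r * 100.0 for c, r in pairs)
```

For example, a CGM reading of 6.5 mmol/L against a reference of 3.0 mmol/L deviates by 3.5 mmol/L, meeting the Level 1 and 2 thresholds but not Level 3.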
Optimal configurations of spatial scale for grid cell firing under noise and uncertainty
Towse, Benjamin W.; Barry, Caswell; Bush, Daniel; Burgess, Neil
2014-01-01
We examined the accuracy with which the location of an agent moving within an environment could be decoded from the simulated firing of systems of grid cells. Grid cells were modelled with Poisson spiking dynamics and organized into multiple ‘modules’ of cells, with firing patterns of similar spatial scale within modules and a wide range of spatial scales across modules. The number of grid cells per module, the spatial scaling factor between modules and the size of the environment were varied. Errors in decoded location can take two forms: small errors of precision and larger errors resulting from ambiguity in decoding periodic firing patterns. With enough cells per module (e.g. eight modules of 100 cells each) grid systems are highly robust to ambiguity errors, even over ranges much larger than the largest grid scale (e.g. over a 500 m range when the maximum grid scale is 264 cm). Results did not depend strongly on the precise organization of scales across modules (geometric, co-prime or random). However, independent spatial noise across modules, which would occur if modules receive independent spatial inputs and might increase with spatial uncertainty, dramatically degrades the performance of the grid system. This effect of spatial uncertainty can be mitigated by uniform expansion of grid scales. Thus, in the realistic regimes simulated here, the optimal overall scale for a grid system represents a trade-off between minimizing spatial uncertainty (requiring large scales) and maximizing precision (requiring small scales). Within this view, the temporary expansion of grid scales observed in novel environments may be an optimal response to increased spatial uncertainty induced by the unfamiliarity of the available spatial cues. PMID:24366144
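Decoding location from such a modular code amounts to finding the position most consistent with every module's periodic phase; ambiguity errors arise when a wrong lattice solution matches almost as well. A brute-force 1-D sketch (the scales, search grid, and absence of a spiking noise model are illustrative assumptions, not the paper's decoder):

```python
def decode_position(phases, scales, search_range, step=1.0):
    """Return the 1-D position in [0, search_range] whose per-module
    phases (position mod scale) best match the observed phases, using
    wrap-around distance within each module's period."""
    best_x, best_err = 0.0, float("inf")
    x = 0.0
    while x <= search_range:
        err = 0.0
        for ph, s in zip(phases, scales):
            d = abs((x % s) - ph)
            err += min(d, s - d) ** 2   # circular distance within one period
        if err < best_err:
            best_x, best_err = x, err
        x += step
    return best_x
```

With scales of, say, 31, 43, and 59 cm, the combined period far exceeds any single scale, so a position such as 777 cm is recovered unambiguously from its three phases over a 10 m range.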
Sheffield, Catherine A; Kane, Michael P; Bakst, Gary; Busch, Robert S; Abelseth, Jill M; Hamilton, Robert A
2009-09-01
This study compared the accuracy and precision of four value-added glucose meters. Finger stick glucose measurements in diabetes patients were performed using the Abbott Diabetes Care (Alameda, CA) Optium, Diagnostic Devices, Inc. (Miami, FL) DDI Prodigy, Home Diagnostics, Inc. (Fort Lauderdale, FL) HDI True Track Smart System, and Arkray, USA (Minneapolis, MN) HypoGuard Assure Pro. Finger glucose measurements were compared with laboratory reference results. Accuracy was assessed by a Clarke error grid analysis (EGA), a Parkes EGA, and within 5%, 10%, 15%, and 20% of the laboratory value criteria (chi-square analysis). Meter precision was determined by calculating absolute mean differences in glucose values between duplicate samples (Kruskal-Wallis test). Finger sticks were obtained from 125 diabetes patients, of whom 90.4% were Caucasian, 51.2% were female, and 83.2% had type 2 diabetes; the average age was 59 years (SD 14 years). Mean venipuncture blood glucose was 151 mg/dL (SD +/-65 mg/dL; range, 58-474 mg/dL). Clinical accuracy by Clarke EGA was demonstrated in 94% of Optium, 82% of Prodigy, 61% of True Track, and 77% of Assure Pro samples (P < 0.05 for Optium and True Track compared to all others). By Parkes EGA, the True Track was significantly less accurate than the other meters. Within 5% accuracy was achieved in 34%, 24%, 29%, and 13%, respectively (P < 0.05 for Optium, Prodigy, and Assure Pro compared to True Track). Within 10% accuracy was significantly greater for the Optium, Prodigy, and Assure Pro compared to True Track. Significantly more Optium results demonstrated within 15% and 20% accuracy compared to the other meter systems. The HDI True Track was significantly less precise than the other meter systems. The Abbott Optium was significantly more accurate than the other meter systems, whereas the HDI True Track was significantly less accurate and less precise compared to the other meter systems.
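Zone assignment in a Clarke EGA follows fixed boundaries in the (reference, meter) plane. A sketch using one common piecewise formulation of the zone limits in mg/dL; this is a standard simplification found in the literature, not the exact analysis used in the study:

```python
def clarke_zone(ref, pred):
    """Assign a (reference, predicted) glucose pair in mg/dL to a
    Clarke error grid zone, using a common piecewise formulation."""
    if (ref <= 70 and pred <= 70) or 0.8 * ref <= pred <= 1.2 * ref:
        return "A"                       # clinically accurate
    if (ref >= 180 and pred <= 70) or (ref <= 70 and pred >= 180):
        return "E"                       # erroneous, opposite treatment
    if (70 <= ref <= 290 and pred >= ref + 110) or \
       (130 <= ref <= 180 and pred <= (7 / 5) * ref - 182):
        return "C"                       # overcorrection
    if (ref >= 240 and 70 <= pred <= 180) or \
       (ref <= 175 / 3 and 70 <= pred <= 180) or \
       (175 / 3 <= ref <= 70 and pred >= (6 / 5) * ref):
        return "D"                       # failure to detect
    return "B"                           # benign deviation
```

For instance, a meter reading of 110 mg/dL against a reference of 100 mg/dL falls in zone A, while a reading of 60 mg/dL against a reference of 200 mg/dL is a zone E error.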
Interpolation Method Needed for Numerical Uncertainty Analysis of Computational Fluid Dynamics
NASA Technical Reports Server (NTRS)
Groves, Curtis; Ilie, Marcel; Schallhorn, Paul
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. There is a method to approximate the errors in CFD via Richardson's extrapolation, based on progressive grid refinement. To estimate the errors in an unstructured grid, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson's extrapolation or another uncertainty method to approximate errors.
Adjoint-Based, Three-Dimensional Error Prediction and Grid Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.
2002-01-01
Engineering computational fluid dynamics (CFD) analysis and design applications focus on output functions (e.g., lift, drag). Errors in these output functions are generally unknown and conservatively accurate solutions may be computed. Computable error estimates can offer the possibility to minimize computational work for a prescribed error tolerance. Such an estimate can be computed by solving the flow equations and the linear adjoint problem for the functional of interest. The computational mesh can be modified to minimize the uncertainty of a computed error estimate. This robust mesh-adaptation procedure automatically terminates when the simulation is within a user specified error tolerance. This procedure for estimating and adapting to error in a functional is demonstrated for three-dimensional Euler problems. An adaptive mesh procedure that links to a Computer Aided Design (CAD) surface representation is demonstrated for wing, wing-body, and extruded high lift airfoil configurations. The error estimation and adaptation procedure yielded corrected functions that are as accurate as functions calculated on uniformly refined grids with ten times as many grid points.
Effects of Mesh Irregularities on Accuracy of Finite-Volume Discretization Schemes
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2012-01-01
The effects of mesh irregularities on accuracy of unstructured node-centered finite-volume discretizations are considered. The focus is on an edge-based approach that uses unweighted least-squares gradient reconstruction with a quadratic fit. For inviscid fluxes, the discretization is nominally third order accurate on general triangular meshes. For viscous fluxes, the scheme is an average-least-squares formulation that is nominally second order accurate and contrasted with a common Green-Gauss discretization scheme. Gradient errors, truncation errors, and discretization errors are separately studied according to a previously introduced comprehensive methodology. The methodology considers three classes of grids: isotropic grids in a rectangular geometry, anisotropic grids typical of adapted grids, and anisotropic grids over a curved surface typical of advancing layer grids. The meshes within the classes range from regular to extremely irregular including meshes with random perturbation of nodes. Recommendations are made concerning the discretization schemes that are expected to be least sensitive to mesh irregularities in applications to turbulent flows in complex geometries.
Attention in the predictive mind.
Ransom, Madeleine; Fazelpour, Sina; Mole, Christopher
2017-01-01
It has recently become popular to suggest that cognition can be explained as a process of Bayesian prediction error minimization. Some advocates of this view propose that attention should be understood as the optimization of expected precisions in the prediction-error signal (Clark, 2013, 2016; Feldman & Friston, 2010; Hohwy, 2012, 2013). This proposal successfully accounts for several attention-related phenomena. We claim that it cannot account for all of them, since there are certain forms of voluntary attention that it cannot accommodate. We therefore suggest that, although the theory of Bayesian prediction error minimization introduces some powerful tools for the explanation of mental phenomena, its advocates have been wrong to claim that Bayesian prediction error minimization is 'all the brain ever does'. Copyright © 2016 Elsevier Inc. All rights reserved.
Dai, Juan; Ji, Zhong; Du, Yubao
2017-08-01
Existing near-infrared non-invasive blood glucose detection models mostly rely on multi-spectral signals at different wavelengths, which is not conducive to the adoption of non-invasive glucose meters at home and does not consider the physiological glucose dynamics of individuals. In order to solve these problems, this study presented a non-invasive blood glucose detection model combining particle swarm optimization (PSO) and artificial neural networks (ANN), using the 1550 nm near-infrared absorbance as the independent variable and the concentration of blood glucose as the dependent variable, named PSO-2ANN. The PSO-2ANN model was based on two sub-modules of neural networks with certain structures and arguments, and was built up after optimizing the weight coefficients of the two networks by particle swarm optimization. The results of 10 volunteers were predicted by PSO-2ANN. The relative error for 9 of the volunteers was less than 20%, and 98.28% of the blood glucose predictions by PSO-2ANN were distributed in regions A and B of the Clarke error grid, which confirmed that PSO-2ANN could offer higher prediction accuracy and better robustness than ANN. Additionally, even though the physiological glucose dynamics of individuals may differ due to the influence of environment, temperament, mental state and so on, PSO-2ANN can correct for this difference by adjusting only one argument. The PSO-2ANN model provides a new prospect for overcoming individual differences in blood glucose prediction.
Plasma-Generating Glucose Monitor Accuracy Demonstrated in an Animal Model
Magarian, Peggy; Sterling, Bernhard
2009-01-01
Introduction Four randomized controlled trials have compared mortality and morbidity of tight glycemic control versus conventional glucose control for intensive care unit (ICU) patients. Two trials showed a positive outcome. However, one single-center trial and a large multicenter trial had negative results. The positive trials used accurate portable lab analyzers. The negative trial allowed the use of meters. The portable analyzer measures in filtered plasma, minimizing interference effects. OptiScan Biomedical Corporation is developing a continuous glucose monitor using centrifuged plasma and mid-infrared spectroscopy for use in ICU medicine. The OptiScanner draws approximately 0.1 ml of blood every 15 min and creates a centrifuged plasma sample. Internal quality control minimizes sample preparation error. Interference adjustment using this technique has been presented at the Society of Critical Care Medicine in separate studies since 2006. Method A good laboratory practice study was conducted on three Yorkshire pigs using a central venous catheter over 6 h while performing a glucose challenge. Matching Yellow Springs Instrument glucose readings were obtained. Results Some 95.7% of the predicted values were in the Clarke Error Grid A zone and 4.3% in the B zone. Of those in the B zone, all were within 3.3% of the A zone boundaries. The coefficient of determination (R2) was 0.993. The coefficient of variation was 5.02%. Animal necropsy and blood panels demonstrated safety. Conclusion The OptiScanner investigational device performed safely and accurately in an animal model. Human studies using the device will begin soon. PMID:20144396
Cheng, Yuhua; Chen, Kai; Bai, Libing; Yang, Jing
2014-02-01
Precise control of the grid-connected current is a challenge in photovoltaic inverter research. Traditional Proportional-Integral (PI) control technology cannot eliminate steady-state error when tracking the sinusoidal signal from the grid, which results in a very high total harmonic distortion in the grid-connected current. A novel PI controller has been developed in this paper, in which the sinusoidal wave is discretized into an N-step input signal that is decided by the control frequency to eliminate the steady state error of the system. The effect of periodical error caused by the dead zone of the power switch and conduction voltage drop can be avoided; the current tracking accuracy and current harmonic content can also be improved. Based on the proposed PI controller, a 700 W photovoltaic grid-connected inverter is developed and validated. The improvement has been demonstrated through experimental results.
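The core idea, a PI law tracking a sinusoid that has been discretized into an N-step reference, can be sketched on a toy plant. The first-order integrator plant, gains, and step count below are illustrative assumptions, not the 700 W inverter model from the paper:

```python
import math

def track_discretized_sine(n_steps=200, periods=4, kp=0.8, ki=0.2):
    """PI loop tracking a sinusoidal reference discretized into n_steps
    samples per period; the plant is a toy integrator y[k+1] = y[k] + u[k].
    Returns the worst absolute tracking error over the final period."""
    y = 0.0
    integ = 0.0
    errors = []
    for k in range(n_steps * periods):
        ref = math.sin(2.0 * math.pi * (k % n_steps) / n_steps)  # N-step reference
        e = ref - y
        integ += e                  # integral action suppresses steady-state error
        u = kp * e + ki * integ     # PI control law
        y += u                      # toy plant update
        errors.append(abs(e))
    return max(errors[-n_steps:])
```

After the start-up transient dies out, the residual tracking error stays well under 5% of the unit reference amplitude with these gains, illustrating how stepping the reference at the control frequency lets an ordinary PI loop follow a sinusoid closely.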
Accuracy Analysis for Finite-Volume Discretization Schemes on Irregular Grids
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.
2010-01-01
A new computational analysis tool, the downscaling (DS) test, is introduced and applied for studying the convergence rates of truncation and discretization errors of finite-volume discretization schemes on general irregular (e.g., unstructured) grids. The study shows that the design-order convergence of discretization errors can be achieved even when truncation errors exhibit a lower-order convergence or, in some cases, do not converge at all. The downscaling test is a general, efficient, accurate, and practical tool, enabling straightforward extension of verification and validation to general unstructured grid formulations. It also allows separate analysis of the interior, boundaries, and singularities that could be useful even in structured-grid settings. There are several new findings arising from the use of the downscaling test analysis. It is shown that the discretization accuracy of a common node-centered finite-volume scheme, known to be second-order accurate for inviscid equations on triangular grids, degenerates to first order for mixed grids. Alternative node-centered schemes are presented and demonstrated to provide second- and third-order accuracies on general mixed grids. The local accuracy deterioration at intersections of tangency and inflow/outflow boundaries is demonstrated using DS tests tailored to examining the local behavior of the boundary conditions. The discretization-error order reduction within inviscid stagnation regions is demonstrated. The accuracy deterioration is local, affecting mainly the velocity components, but applies to schemes of any order.
Fully implicit moving mesh adaptive algorithm
NASA Astrophysics Data System (ADS)
Serazio, C.; Chacon, L.; Lapenta, G.
2006-10-01
In many problems of interest, the numerical modeler is faced with the challenge of dealing with multiple time and length scales. The former is best dealt with by fully implicit methods, which are able to step over fast frequencies to resolve the dynamical time scale of interest. The latter requires grid adaptivity for efficiency. Moving-mesh grid adaptive methods are attractive because they can be designed to minimize the numerical error for a given resolution. However, the required grid governing equations are typically very nonlinear and stiff, and of considerably difficult numerical treatment. Not surprisingly, fully coupled, implicit approaches where the grid and the physics equations are solved simultaneously are rare in the literature, and circumscribed to 1D geometries. In this study, we present a fully implicit algorithm for moving mesh methods that is feasible for multidimensional geometries. Crucial elements are the development of an effective multilevel treatment of the grid equation, and a robust, rigorous error estimator. For the latter, we explore the effectiveness of a coarse grid correction error estimator, which faithfully reproduces spatial truncation errors for conservative equations. We will show that the moving mesh approach is competitive vs. uniform grids both in accuracy (due to adaptivity) and efficiency. Results for a variety of models in 1D and 2D geometries will be presented. L. Chacón, G. Lapenta, J. Comput. Phys., 212 (2), 703 (2006); G. Lapenta, L. Chacón, J. Comput. Phys., accepted (2006).
Working Papers in Experimental Speech-Language Pathology and Audiology. Volume VII, 1979.
ERIC Educational Resources Information Center
City Univ. of New York, Flushing, NY. Queens Coll.
Seven papers review research in speech-language pathology and audiology. K. Polzer et al. describe an investigation of sign language therapy for the severely language impaired. S. Dworetsky and L. Clark analyze the phonemic and nonphonemic error patterns in five nonverbal and five verbal oral apraxic adults. The performance of three language…
Torralba, Marta; Díaz-Pérez, Lucía C.
2017-01-01
This article presents a self-calibration procedure and the experimental results for the geometrical characterisation of a 2D laser system operating along a large working range (50 mm × 50 mm) with submicrometre uncertainty. Its purpose is to correct the geometric errors of the 2D laser system setup generated when positioning the two laser heads and the plane mirrors used as reflectors. The non-calibrated artefact used in this procedure is a commercial grid encoder that is also a measuring instrument. Therefore, the self-calibration procedure also allows the determination of the geometrical errors of the grid encoder, including its squareness error. The precision of the proposed algorithm is tested using virtual data. Actual measurements are subsequently registered, and the algorithm is applied. Once the laser system is characterised, the error of the grid encoder is calculated along the working range, resulting in an expanded submicrometre calibration uncertainty (k = 2) for the X and Y axes. The results of the grid encoder calibration are comparable to the errors provided by the calibration certificate for its main central axes. It is, therefore, possible to confirm the suitability of the self-calibration methodology proposed in this article. PMID:28858239
Errors in retarding potential analyzers caused by nonuniformity of the grid-plane potential.
NASA Technical Reports Server (NTRS)
Hanson, W. B.; Frame, D. R.; Midgley, J. E.
1972-01-01
One aspect of the degradation in performance of retarding potential analyzers caused by potential depressions in the retarding grid is quantitatively estimated from laboratory measurements and theoretical calculations. A simple expression is obtained that permits the use of laboratory measurements of grid properties to make first-order corrections to flight data. Systematic positive errors in ion temperature of approximately 16% for the Ogo 4 instrument and 3% for the Ogo 6 instrument are deduced. The effects of the transverse electric fields arising from the grid potential depressions are not treated.
Improving the quality of marine geophysical track line data: Along-track analysis
NASA Astrophysics Data System (ADS)
Chandler, Michael T.; Wessel, Paul
2008-02-01
We have examined 4918 track line geophysics cruises archived at the U.S. National Geophysical Data Center (NGDC) using comprehensive error checking methods. Each cruise was checked for observation outliers, excessive gradients, metadata consistency, and general agreement with satellite altimetry-derived gravity and predicted bathymetry grids. Thresholds for error checking were determined empirically through inspection of histograms for all geophysical values, gradients, and differences with gridded data sampled along ship tracks. Robust regression was used to detect systematic scale and offset errors found by comparing ship bathymetry and free-air anomalies to the corresponding values from global grids. We found many recurring error types in the NGDC archive, including poor navigation, inappropriately scaled or offset data, excessive gradients, and extended offsets in depth and gravity when compared to global grids. While ~5-10% of bathymetry and free-air gravity records fail our conservative tests, residual magnetic errors may exceed twice this proportion. These errors hinder the effective use of the data and may lead to mistakes in interpretation. To enable the removal of gross errors without over-writing original cruise data, we developed an errata system that concisely reports all errors encountered in a cruise. With such errata files, scientists may share cruise corrections, thereby preventing redundant processing. We have implemented these quality control methods in the modified MGD77 supplement to the Generic Mapping Tools software suite.
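The outlier and gradient screening step reduces to a per-record scan along each track. A minimal sketch; the function name and thresholds are illustrative placeholders, not the empirically derived NGDC values:

```python
def flag_depth_errors(depths, max_depth=11000.0, max_jump=1000.0):
    """Flag along-track bathymetry records (depths in m, positive down)
    that fall outside a plausible range or imply an excessive gradient
    relative to the previous record."""
    flags = []
    for i, d in enumerate(depths):
        bad = not (0.0 < d <= max_depth)           # value outlier
        if i > 0 and abs(d - depths[i - 1]) > max_jump:
            bad = True                              # excessive gradient
        flags.append(bad)
    return flags
```

A spurious spike such as a 9000 m reading between neighbors near 4000 m is caught by the gradient test even though the value itself is physically plausible.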
Merging gauge and satellite rainfall with specification of associated uncertainty across Australia
NASA Astrophysics Data System (ADS)
Woldemeskel, Fitsum M.; Sivakumar, Bellie; Sharma, Ashish
2013-08-01
Accurate estimation of spatial rainfall is crucial for modelling hydrological systems and planning and management of water resources. While spatial rainfall can be estimated either using rain gauge-based measurements or using satellite-based measurements, such estimates are subject to uncertainties due to various sources of errors in either case, including interpolation and retrieval errors. The purpose of the present study is twofold: (1) to investigate the benefit of merging rain gauge measurements and satellite rainfall data for Australian conditions and (2) to produce a database of retrospective rainfall along with a new uncertainty metric for each grid location at any timestep. The analysis involves four steps: First, a comparison of rain gauge measurements and the Tropical Rainfall Measuring Mission (TRMM) 3B42 data at such rain gauge locations is carried out. Second, gridded monthly rain gauge rainfall is determined using thin plate smoothing splines (TPSS) and modified inverse distance weight (MIDW) method. Third, the gridded rain gauge rainfall is merged with the monthly accumulated TRMM 3B42 using a linearised weighting procedure, the weights at each grid being calculated based on the error variances of each dataset. Finally, cross validation (CV) errors at rain gauge locations and standard errors at gridded locations for each timestep are estimated. The CV error statistics indicate that merging of the two datasets improves the estimation of spatial rainfall, and more so where the rain gauge network is sparse. The provision of spatio-temporal standard errors with the retrospective dataset is particularly useful for subsequent modelling applications where input error knowledge can help reduce the uncertainty associated with modelling outcomes.
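At a single grid cell, the linearised weighting procedure with weights from each dataset's error variance reduces to inverse-variance averaging. A minimal sketch; the function name is ours:

```python
def merge_rainfall(gauge, gauge_var, sat, sat_var):
    """Merge gauge-based and satellite-based rainfall estimates at one
    grid cell, weighting each by its inverse error variance; returns
    the merged estimate and its (reduced) error variance."""
    w_g = 1.0 / gauge_var
    w_s = 1.0 / sat_var
    merged = (w_g * gauge + w_s * sat) / (w_g + w_s)
    return merged, 1.0 / (w_g + w_s)
```

Equal variances give the plain average; a noisier satellite estimate is down-weighted, and the merged variance is always smaller than the smaller of the two input variances, which is why merging helps most where the gauge network is sparse and gauge error variance is large.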
Using meta-differential evolution to enhance a calculation of a continuous blood glucose level.
Koutny, Tomas
2016-09-01
We developed a new model of glucose dynamics. The model calculates blood glucose level as a function of transcapillary glucose transport. In previous studies, we validated the model with animal experiments. We used an analytical method to determine model parameters. In this study, we validate the model with subjects with type 1 diabetes. In addition, we combine the analytical method with meta-differential evolution. To validate the model with human patients, we obtained a data set from a type 1 diabetes study coordinated by the Jaeb Center for Health Research. We calculated a continuous blood glucose level from the continuously measured interstitial fluid glucose level. We used 6 different scenarios to ensure robust validation of the calculation. Over 96% of calculated blood glucose levels fit the A+B zones of the Clarke Error Grid. No data set required any correction of model parameters during the time course of measuring. We successfully verified the possibility of calculating a continuous blood glucose level of subjects with type 1 diabetes. This study signals a successful transition of our research from animal experiments to human patients. Researchers can test our model with their data on-line at https://diabetes.zcu.cz. Copyright © 2016 The Author. Published by Elsevier Ireland Ltd. All rights reserved.
Hemkens, Lars G; Hilden, Kristian M; Hartschen, Stephan; Kaiser, Thomas; Didjurgeit, Ulrike; Hansen, Roland; Bender, Ralf; Sawicki, Peter T
2008-08-01
In addition to the metrological quality of international normalized ratio (INR) monitoring devices used in patients' self-management of long-term anticoagulation, the effectiveness of self-monitoring with such devices has to be evaluated under real-life conditions with a focus on clinical implications. An approach to evaluating the clinical significance of inaccuracies is error-grid analysis, as already established in self-monitoring of blood glucose. Two anticoagulation monitors were compared in a real-life setting and a novel error-grid instrument for oral anticoagulation was evaluated. In a randomized crossover study, 16 patients performed self-management of anticoagulation using the INRatio and the CoaguChek S system. Main outcome measures were clinically relevant INR differences according to established criteria and to the error-grid approach. A lower rate of clinically relevant disagreements according to Anderson's criteria was found with CoaguChek S than with INRatio, without statistical significance (10.77% vs. 12.90%; P = 0.787). Using the error grid we found principally consistent results: more measurement pairs with discrepancies of no or low clinical relevance were found with CoaguChek S, whereas with INRatio we found more differences of moderate clinical relevance. A high rate of patient satisfaction with both point-of-care devices was found, with only marginal differences. The investigated point-of-care devices are shown to be principally appropriate for monitoring the INR. The error grid is useful for comparing monitoring methods with a focus on clinical relevance under real-life conditions, beyond assessing pure metrological quality, but we emphasize that additional trials using this instrument with larger patient populations are needed to detect differences in clinically relevant disagreements.
INITIAL APPLICATION OF THE ADAPTIVE GRID AIR POLLUTION MODEL
The paper discusses an adaptive-grid algorithm used in air pollution models. The algorithm reduces errors related to insufficient grid resolution by automatically refining the grid scales in regions of high interest. Meanwhile the grid scales are coarsened in other parts of the d...
Nonlinear grid error effects on numerical solution of partial differential equations
NASA Technical Reports Server (NTRS)
Dey, S. K.
1980-01-01
Finite difference solutions of nonlinear partial differential equations require discretizations and consequently grid errors are generated. These errors strongly affect stability and convergence properties of difference models. Previously such errors were analyzed by linearizing the difference equations for solutions. Properties of mappings of decadence were used to analyze nonlinear instabilities. Such an analysis is directly affected by initial/boundary conditions. An algorithm was developed, applied to nonlinear Burgers equations, and verified computationally. A preliminary test shows that Navier-Stokes equations may be treated similarly.
Domain-Level Assessment of the Weather Running Estimate-Nowcast (WREN) Model
2016-11-01
[Report excerpt: table-of-contents entries and figure captions, including "Performance Added by Decreased Grid Spacing", "Performance Comparison of 2 WRE–N Configurations", a comparison of the Dumais WRE–N configuration with FDDA against another configuration, and figures (Figs. 11 and 12) showing bias and RMSE errors for the 3 grids for 2-m-AGL temperature (TMP, K) and dew point (DPT, K).]
Improving the Glucose Meter Error Grid With the Taguchi Loss Function.
Krouwer, Jan S
2016-07-01
Glucose meters often have similar performance when compared by error grid analysis. This is one reason that other statistics, such as mean absolute relative deviation (MARD), are used to further differentiate performance. The problem with MARD is that too much information is lost. But additional information is available within the A zone of an error grid by using the Taguchi loss function. Applying the Taguchi loss function gives each glucose meter difference from reference a value ranging from 0 (no error) to 1 (error reaches the A zone limit). Values are averaged over all data, which provides an indication of the risk of an incorrect medical decision. This allows one to differentiate glucose meter performance for the common case where meters have a high percentage of values in the A zone and no values beyond the B zone. Examples are provided using simulated data. © 2015 Diabetes Technology Society.
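The loss-function idea in the abstract above can be sketched in a few lines. This is an illustrative guess at the construction, not the paper's implementation: the quadratic (Taguchi) form and the 15% relative A-zone limit are assumptions made for the sketch.

```python
def taguchi_loss(meter, reference, a_zone_limit=0.15):
    """Quadratic (Taguchi) loss for one meter/reference pair, normalized so
    that 0 means no error and 1 means the relative error reaches the
    (assumed) A-zone limit; larger errors are capped at 1."""
    rel_err = abs(meter - reference) / reference
    return min((rel_err / a_zone_limit) ** 2, 1.0)


def mean_taguchi_loss(pairs):
    """Average loss over all (meter, reference) pairs, as an indicator of
    the risk of an incorrect medical decision."""
    return sum(taguchi_loss(m, r) for m, r in pairs) / len(pairs)
```

Two meters with identical error-grid zone counts can then still be ranked by mean loss, because differences inside the A zone continue to contribute.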
Research on control strategy based on fuzzy PR for grid-connected inverter
NASA Astrophysics Data System (ADS)
Zhang, Qian; Guan, Weiguo; Miao, Wen
2018-04-01
In the traditional PI controller, a static error remains when tracking AC signals. To solve this problem, a control strategy combining a fuzzy PR controller with grid-voltage feed-forward is proposed. The fuzzy PR controller eliminates the static error of the system; it also adjusts the PR controller parameters in real time, which avoids the drawback of fixed parameters. The grid-voltage feed-forward control ensures the quality of the current and improves the system's anti-interference ability when the grid voltage is distorted. Finally, simulation results show that the system can output grid current of good quality and also has good dynamic and steady-state performance.
An ILP based Algorithm for Optimal Customer Selection for Demand Response in SmartGrids
DOE Office of Scientific and Technical Information (OSTI.GOV)
Kuppannagari, Sanmukh R.; Kannan, Rajgopal; Prasanna, Viktor K.
Demand Response (DR) events are initiated by utilities during peak demand periods to curtail consumption. They ensure system reliability and minimize the utility's expenditure. Selection of the right customers and strategies is critical for a DR event. An effective DR scheduling algorithm minimizes the curtailment error, which is the absolute difference between the achieved curtailment value and the target. State-of-the-art heuristics exist for customer selection; however, their curtailment errors are unbounded and can be as high as 70%. In this work, we develop an Integer Linear Programming (ILP) formulation for optimally selecting customers and curtailment strategies that minimize the curtailment error during DR events in SmartGrids. We perform experiments on real-world data obtained from the University of Southern California's SmartGrid and show that our algorithm achieves near-exact curtailment values with errors in the range of 10^-7 to 10^-5, which are within the range of numerical errors. We compare our results against the state-of-the-art heuristic being deployed in practice in the USC SmartGrid. We show that for the same set of available customer-strategy pairs our algorithm performs 10^3 to 10^7 times better in terms of the curtailment errors incurred.
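On toy instances, the customer/strategy selection problem can be solved exactly by enumeration, which makes the "curtailment error" objective concrete. This brute-force sketch is a hypothetical stand-in for the paper's ILP formulation (which is what scales to realistic instance sizes); all names are illustrative.

```python
from itertools import product


def best_selection(curtailments, target):
    """Pick at most one curtailment strategy per customer to minimize the
    curtailment error |achieved - target|.

    curtailments[i] lists the curtailment values that customer i's
    strategies would deliver.  Option 0 means "customer not selected";
    option k > 0 selects strategy k-1.  Exhaustive search is exponential
    in the number of customers, so this is for illustration only."""
    best_err, best_choice = float("inf"), None
    for choice in product(*[range(len(c) + 1) for c in curtailments]):
        achieved = sum(c[k - 1] for c, k in zip(curtailments, choice) if k)
        err = abs(achieved - target)
        if err < best_err:
            best_err, best_choice = err, choice
    return best_choice, best_err
```

For example, with three customers offering curtailments [3, 5], [2], and [4, 7] and a target of 9, an exact selection with zero curtailment error exists.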
A time-efficient algorithm for implementing the Catmull-Clark subdivision method
NASA Astrophysics Data System (ADS)
Ioannou, G.; Savva, A.; Stylianou, V.
2015-10-01
Splines are the most popular methods in figure modeling and CAGD (Computer-Aided Geometric Design) for generating smooth surfaces from a number of control points. The control points define the shape of a figure, and the spline calculates the required number of surface points, which, when displayed on a computer screen, yield a smooth surface. However, spline methods are based on a rectangular topological structure of points, i.e., a two-dimensional table of vertices, and thus cannot generate complex figures, such as human and animal bodies, whose complex structure cannot be defined by a regular rectangular grid. Surface subdivision methods, which are derived from splines, instead generate surfaces defined by an arbitrary topology of control points. This is why, during the last fifteen years, subdivision methods have taken the lead over regular spline methods in all areas of modeling, in both industry and research. The cost of computer software developed to read control points and calculate the surface is run time, because the surface structure required for handling arbitrary topological grids is very complicated. Many software programs implement subdivision surfaces; however, few algorithms are documented in the literature to support developers in writing efficient code. This paper aims to assist programmers by presenting a time-efficient algorithm for implementing subdivision splines. The Catmull-Clark method, the most popular of the subdivision methods, is employed to illustrate the algorithm.
A Novel Grid SINS/DVL Integrated Navigation Algorithm for Marine Application
Kang, Yingyao; Zhao, Lin; Cheng, Jianhua; Fan, Xiaoliang
2018-01-01
Integrated navigation algorithms under the grid frame have been proposed based on the Kalman filter (KF) to solve the problem of navigation in some special regions. However, in the existing study of grid strapdown inertial navigation system (SINS)/Doppler velocity log (DVL) integrated navigation algorithms, the Earth models of the filter dynamic model and the SINS mechanization are not unified. Besides, traditional integrated systems with the KF based correction scheme are susceptible to measurement errors, which would decrease the accuracy and robustness of the system. In this paper, an adaptive robust Kalman filter (ARKF) based hybrid-correction grid SINS/DVL integrated navigation algorithm is designed with the unified reference ellipsoid Earth model to improve the navigation accuracy in middle-high latitude regions for marine application. Firstly, to unify the Earth models, the mechanization of grid SINS is introduced and the error equations are derived based on the same reference ellipsoid Earth model. Then, a more accurate grid SINS/DVL filter model is designed according to the new error equations. Finally, a hybrid-correction scheme based on the ARKF is proposed to resist the effect of measurement errors. Simulation and experiment results show that, compared with the traditional algorithms, the proposed navigation algorithm can effectively improve the navigation performance in middle-high latitude regions by the unified Earth models and the ARKF based hybrid-correction scheme. PMID:29373549
Syamlal, Madhava; Celik, Ismail B.; Benyahia, Sofiane
2017-07-12
The two-fluid model (TFM) has become a tool for the design and troubleshooting of industrial fluidized bed reactors. To use TFM for scale-up with confidence, the uncertainty in its predictions must be quantified. Here, we study two sources of uncertainty: discretization and time-averaging. First, we show that successive grid refinement may not yield grid-independent transient quantities, including cross-section-averaged quantities. Successive grid refinement would yield grid-independent time-averaged quantities on sufficiently fine grids. A Richardson extrapolation can then be used to estimate the discretization error, and the grid convergence index gives an estimate of the uncertainty. Richardson extrapolation may not work for industrial-scale simulations that use coarse grids. We present an alternative method for coarse grids and assess its ability to estimate the discretization error. Second, we assess two methods (autocorrelation and binning) and find that the autocorrelation method is more reliable for estimating the uncertainty introduced by time-averaging TFM data.
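The autocorrelation idea mentioned above can be illustrated with a lag-1 sketch: serial correlation in a time series reduces the effective number of independent samples, which inflates the uncertainty of a time average. This is a minimal illustration using an AR(1)-style effective-sample-size correction, not the authors' exact estimator.

```python
import math


def lag1_autocorr(x):
    """Sample lag-1 autocorrelation coefficient of a sequence."""
    n = len(x)
    mean = sum(x) / n
    var = sum((v - mean) ** 2 for v in x) / n
    cov = sum((x[i] - mean) * (x[i + 1] - mean) for i in range(n - 1)) / n
    return cov / var


def time_mean_stderr(x):
    """Standard error of the time average, using the effective sample size
    N_eff = N * (1 - rho) / (1 + rho) for lag-1 autocorrelation rho."""
    n = len(x)
    mean = sum(x) / n
    rho = max(lag1_autocorr(x), 0.0)  # ignore (rare) negative correlation
    n_eff = n * (1.0 - rho) / (1.0 + rho)
    var = sum((v - mean) ** 2 for v in x) / (n - 1)
    return math.sqrt(var / n_eff)
```

For strongly correlated data (e.g., a slow trend), this estimate is larger than the naive standard deviation divided by the square root of the sample count, reflecting the smaller effective sample size.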
Implicit adaptive mesh refinement for 2D reduced resistive magnetohydrodynamics
NASA Astrophysics Data System (ADS)
Philip, Bobby; Chacón, Luis; Pernice, Michael
2008-10-01
An implicit structured adaptive mesh refinement (SAMR) solver for 2D reduced magnetohydrodynamics (MHD) is described. The time-implicit discretization is able to step over fast normal modes, while the spatial adaptivity resolves thin, dynamically evolving features. A Jacobian-free Newton-Krylov method is used for the nonlinear solver engine. For preconditioning, we have extended the optimal "physics-based" approach developed in [L. Chacón, D.A. Knoll, J.M. Finn, An implicit, nonlinear reduced resistive MHD solver, J. Comput. Phys. 178 (2002) 15-36] (which employed multigrid solver technology in the preconditioner for scalability) to SAMR grids using the well-known Fast Adaptive Composite grid (FAC) method [S. McCormick, Multilevel Adaptive Methods for Partial Differential Equations, SIAM, Philadelphia, PA, 1989]. A grid convergence study demonstrates that the solver performance is independent of the number of grid levels and only depends on the finest resolution considered, and that it scales well with grid refinement. The study of error generation and propagation in our SAMR implementation demonstrates that high-order (cubic) interpolation during regridding, combined with a robustly damping second-order temporal scheme such as BDF2, is required to minimize impact of grid errors at coarse-fine interfaces on the overall error of the computation for this MHD application. We also demonstrate that our implementation features the desired property that the overall numerical error is dependent only on the finest resolution level considered, and not on the base-grid resolution or on the number of refinement levels present during the simulation. We demonstrate the effectiveness of the tool on several challenging problems.
Gridded National Inventory of U.S. Methane Emissions
NASA Technical Reports Server (NTRS)
Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; Turner, Alexander J.; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel;
2016-01-01
We present a gridded inventory of US anthropogenic methane emissions with 0.1 deg x 0.1 deg spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
Gridded national inventory of U.S. methane emissions
Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.; ...
2016-11-16
Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
Gridded National Inventory of U.S. Methane Emissions.
Maasakkers, Joannes D; Jacob, Daniel J; Sulprizio, Melissa P; Turner, Alexander J; Weitz, Melissa; Wirth, Tom; Hight, Cate; DeFigueiredo, Mark; Desai, Mausami; Schmeltz, Rachel; Hockstad, Leif; Bloom, Anthony A; Bowman, Kevin W; Jeong, Seongeun; Fischer, Marc L
2016-12-06
We present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
Caduff, A; Dewarrat, F; Talary, M; Stalder, G; Heinemann, L; Feldman, Yu
2006-12-15
The aim of this work was to evaluate the performance of a novel non-invasive continuous glucose-monitoring system based on impedance spectroscopy (IS) in patients with diabetes. Ten patients with type 1 diabetes (mean+/-S.D., age 28+/-8 years, BMI 24.2+/-3.2 kg/m(2) and HbA(1C) 7.3+/-1.6%) and five with type 2 diabetes (age 61+/-8 years, BMI 27.5+/-3.2 kg/m(2) and HbA(1C) 8.3+/-1.8%) took part in this study, which comprised a glucose clamp experiment followed by a 7-day outpatient evaluation. The measurements obtained by the NI-CGMD and the reference blood glucose-measuring techniques were evaluated using retrospective data evaluation procedures. Under less controlled outpatient conditions a correlation coefficient of r=0.640 and a standard error of prediction (SEP) of 45 mg dl(-1) with a total of 590 paired glucose measurements was found (versus r=0.926 and a SEP of 26 mg dl(-1) under controlled conditions). Clarke error grid analyses (EGA) showed 56% of all values in zone A, 37% in B and 7% in C-E. In conclusion, these results indicate that IS in the used technical setting allows retrospective, continuous and truly non-invasive glucose monitoring under defined conditions for patients with diabetes. Technical advances and developments are needed to expand on this concept to bring the results from the outpatient study closer to those in the experimental section of the study. Further studies will not only help to evaluate the performance and limitations of using such a technique for non-invasive glucose monitoring but also help to verify technical extensions towards an IS-based concept that offers improved performance under real-life operating conditions.
Ocvirk, Gregor; Hajnsek, Martin; Gillen, Ralph; Guenther, Arnfried; Hochmuth, Gernot; Kamecke, Ulrike; Koelker, Karl-Heinz; Kraemer, Peter; Obermaier, Karin; Reinheimer, Cornelia; Jendrike, Nina; Freckmann, Guido
2009-05-01
A novel microdialysis-based continuous glucose monitoring system, the so-called Clinical Research Tool (CRT), is presented. The CRT was designed exclusively for investigational use to offer high analytical accuracy and reliability. The CRT was built to avoid signal artifacts due to catheter clogging, flow obstruction by air bubbles, and flow variation caused by inconstant pumping. For differentiation between physiological events and system artifacts, the sensor current, counter electrode and polarization voltage, battery voltage, sensor temperature, and flow rate are recorded at a rate of 1 Hz. In vitro characterization with buffered glucose solutions (c(glucose) = 0 - 26 x 10(-3) mol liter(-1)) over 120 h yielded a mean absolute relative error (MARE) of 2.9 +/- 0.9% and a recorded mean flow rate of 330 +/- 48 nl/min with periodic flow rate variation amounting to 24 +/- 7%. The first 120 h in vivo testing was conducted with five type 1 diabetes subjects wearing two systems each. A mean flow rate of 350 +/- 59 nl/min and a periodic variation of 22 +/- 6% were recorded. Utilizing 3 blood glucose measurements per day and a physical lag time of 1980 s, retrospective calibration of the 10 in vivo experiments yielded a MARE value of 12.4 +/- 5.7. Clarke error grid analysis resulted in 81.0%, 16.6%, 0.8%, 1.6%, and 0% in regions A, B, C, D, and E, respectively. The CRT demonstrates exceptional reliability of system operation and very good measurement performance. The ability to differentiate between artifacts and physiological effects suggests the use of the CRT as a reference tool in clinical investigations. 2009 Diabetes Technology Society.
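The two headline accuracy statistics in the abstract above, MARE and the Clarke error grid zone-A share, can be sketched directly. The zone-A test here uses the common simplification (within 20% of reference, or both values hypoglycemic); the full grid's B-E boundaries are piecewise and omitted, so this is an illustration rather than the clinical instrument.

```python
def mare(measured, reference):
    """Mean absolute relative error (MARE), in percent."""
    pairs = list(zip(measured, reference))
    return 100.0 * sum(abs(m - r) / r for m, r in pairs) / len(pairs)


def clarke_zone_a_fraction(measured, reference):
    """Fraction of pairs in Clarke zone A, using the common simplification:
    the measurement is within 20% of reference, or both values are below
    70 mg/dl.  The remaining zone boundaries (B-E) are not modeled."""
    def in_a(m, r):
        return abs(m - r) / r <= 0.20 or (m < 70 and r < 70)
    pairs = list(zip(measured, reference))
    return sum(in_a(m, r) for m, r in pairs) / len(pairs)
```

A sensor can have a modest MARE yet still place pairs outside zone A, which is why both statistics are usually reported together.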
Ocvirk, Gregor; Hajnsek, Martin; Gillen, Ralph; Guenther, Arnfried; Hochmuth, Gernot; Kamecke, Ulrike; Koelker, Karl-Heinz; Kraemer, Peter; Obermaier, Karin; Reinheimer, Cornelia; Jendrike, Nina; Freckmann, Guido
2009-01-01
Background A novel microdialysis-based continuous glucose monitoring system, the so-called Clinical Research Tool (CRT), is presented. The CRT was designed exclusively for investigational use to offer high analytical accuracy and reliability. The CRT was built to avoid signal artifacts due to catheter clogging, flow obstruction by air bubbles, and flow variation caused by inconstant pumping. For differentiation between physiological events and system artifacts, the sensor current, counter electrode and polarization voltage, battery voltage, sensor temperature, and flow rate are recorded at a rate of 1 Hz. Method In vitro characterization with buffered glucose solutions (cglucose = 0 - 26 × 10-3 mol liter-1) over 120 h yielded a mean absolute relative error (MARE) of 2.9 ± 0.9% and a recorded mean flow rate of 330 ± 48 nl/min with periodic flow rate variation amounting to 24 ± 7%. The first 120 h in vivo testing was conducted with five type 1 diabetes subjects wearing two systems each. A mean flow rate of 350 ± 59 nl/min and a periodic variation of 22 ± 6% were recorded. Results Utilizing 3 blood glucose measurements per day and a physical lag time of 1980 s, retrospective calibration of the 10 in vivo experiments yielded a MARE value of 12.4 ± 5.7. Clarke error grid analysis resulted in 81.0%, 16.6%, 0.8%, 1.6%, and 0% in regions A, B, C, D, and E, respectively. Conclusion The CRT demonstrates exceptional reliability of system operation and very good measurement performance. The ability to differentiate between artifacts and physiological effects suggests the use of the CRT as a reference tool in clinical investigations. PMID:20144284
Assessing the performance of handheld glucose testing for critical care.
Kost, Gerald J; Tran, Nam K; Louie, Richard F; Gentile, Nicole L; Abad, Victor J
2008-12-01
We assessed the performance of a point-of-care (POC) glucose meter system (GMS) with multitasking test strip by using the locally-smoothed (LS) median absolute difference (MAD) curve method in conjunction with a modified Bland-Altman difference plot and superimposed International Organization for Standardization (ISO) 15197 tolerance bands. We analyzed performance for tight glycemic control (TGC). A modified glucose oxidase enzyme with a multilayer-gold, multielectrode, four-well test strip (StatStrip™, NOVA Biomedical, Waltham, MA) was used. There was no test strip calibration code. Pragmatic comparison was done of GMS results versus paired plasma glucose measurements from chemistry analyzers in clinical laboratories. Venous samples (n = 1,703) were analyzed at 35 hospitals that used 20 types of chemistry analyzers. Erroneous results were identified using the Bland-Altman plot and ISO 15197 criteria. Discrepant values were analyzed for the TGC interval of 80-110 mg/dL. The GMS met ISO 15197 guidelines; 98.6% (410 of 416) of observations were within tolerance for glucose <75 mg/dL, and for > or =75 mg/dL, 100% were within tolerance. Paired differences (handheld minus reference) averaged -2.2 (SD 9.8) mg/dL; the median was -1 (range, -96 to 45) mg/dL. LS MAD curve analysis revealed satisfactory performance below 186 mg/dL; above 186 mg/dL, the recommended error tolerance limit (5 mg/dL) was not met. No discrepant values appeared. All points fell in Clarke Error Grid zone A. Linear regression showed y = 1.018x - 0.716 mg/dL, and r2 = 0.995. LS MAD curves draw on human ability to discriminate performance visually. LS MAD curve and ISO 15197 performance were acceptable for TGC. POC and reference glucose calibration should be harmonized and standardized.
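The ISO 15197 tolerance bands referenced above can be written down directly. This sketch encodes the 2003 edition's criterion (within ±15 mg/dl when the reference is below 75 mg/dl, within ±20% otherwise); the threshold values come from that edition of the standard, not from the paper's figures.

```python
def within_iso15197(meter, reference):
    """ISO 15197:2003 accuracy criterion for a single paired result, with
    glucose in mg/dl: within +/-15 mg/dl of reference when the reference
    is below 75 mg/dl, and within +/-20% of reference otherwise."""
    if reference < 75:
        return abs(meter - reference) <= 15
    return abs(meter - reference) <= 0.20 * reference


def iso_pass_rate(pairs):
    """Fraction of (meter, reference) pairs meeting the criterion."""
    return sum(within_iso15197(m, r) for m, r in pairs) / len(pairs)
```

Plotting each paired difference against the reference value with these bands superimposed reproduces the kind of modified Bland-Altman display the study describes.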
Interpolation Method Needed for Numerical Uncertainty
NASA Technical Reports Server (NTRS)
Groves, Curtis E.; Ilie, Marcel; Schallhorn, Paul A.
2014-01-01
Using Computational Fluid Dynamics (CFD) to predict a flow field is an approximation to the exact problem, and uncertainties exist. One method to approximate the errors in CFD is Richardson extrapolation, which is based on progressive grid refinement. To estimate the errors, the analyst must interpolate between at least three grids. This paper describes a study to find an appropriate interpolation scheme that can be used in Richardson extrapolation or another uncertainty method to approximate errors.
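Richardson extrapolation over three systematically refined grids, as referenced above, can be sketched with the generic textbook formulation plus Roache's grid convergence index. The 1.25 safety factor is the conventional choice for three-grid studies, not a value from this paper.

```python
import math


def richardson(f_fine, f_med, f_coarse, r):
    """Observed order of convergence p and extrapolated 'exact' value from
    solutions on three grids with constant refinement ratio r
    (coarse -> medium -> fine)."""
    p = math.log(abs(f_coarse - f_med) / abs(f_med - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_med) / (r ** p - 1.0)
    return p, f_exact


def gci_fine(f_fine, f_med, p, r, safety=1.25):
    """Grid convergence index on the fine grid: a banded relative-error
    estimate scaled by a safety factor."""
    return safety * abs((f_med - f_fine) / f_fine) / (r ** p - 1.0)
```

For a quantity converging as f(h) = 1 + h^2 sampled at h = 0.1, 0.2, 0.4 (r = 2), the recovered order is 2 and the extrapolated value is 1.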
Turbulent Output-Based Anisotropic Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Carlson, Jan-Renee
2010-01-01
Controlling discretization error is a remaining challenge for computational fluid dynamics simulation. Grid adaptation is applied to reduce estimated discretization error in drag or pressure integral output functions. To enable application to turbulent flows at high Reynolds numbers, O(10^7), a hybrid approach is utilized that freezes the near-wall boundary layer grids and adapts the grid away from the no-slip boundaries. The hybrid approach is not applicable to problems with under-resolved initial boundary layer grids, but is a powerful technique for problems with important off-body anisotropic features. Supersonic nozzle plume, turbulent flat plate, and shock-boundary layer interaction examples are presented with comparisons to experimental measurements of pressure and velocity. Adapted grids are produced that resolve off-body features in locations that are not known a priori.
SAGE: The Self-Adaptive Grid Code. 3
NASA Technical Reports Server (NTRS)
Davies, Carol B.; Venkatapathy, Ethiraj
1999-01-01
The multi-dimensional self-adaptive grid code, SAGE, is an important tool in the field of computational fluid dynamics (CFD). It provides an efficient method to improve the accuracy of flow solutions while simultaneously reducing computer processing time. Briefly, SAGE enhances an initial computational grid by redistributing the mesh points into more appropriate locations. The movement of these points is driven by an equal-error-distribution algorithm that utilizes the relationship between high flow gradients and excessive solution errors. The method also provides a balance between clustering points in the high-gradient regions and maintaining the smoothness and continuity of the adapted grid. The latest version, Version 3, includes the ability to change the boundaries of a given grid to more efficiently enclose flow structures and provides alternative redistribution algorithms.
NASA Technical Reports Server (NTRS)
Bell, Thomas L.; Kundu, Prasun K.; Kummerow, Christian D.; Einaudi, Franco (Technical Monitor)
2000-01-01
Quantitative use of satellite-derived maps of monthly rainfall requires some measure of the accuracy of the satellite estimates. The rainfall estimate for a given map grid box is subject to both remote-sensing error and, in the case of low-orbiting satellites, sampling error due to the limited number of observations of the grid box provided by the satellite. A simple model of rain behavior predicts that root-mean-square (RMS) random error in grid-box averages should depend in a simple way on the local average rain rate, and the predicted behavior has been seen in simulations using surface rain-gauge and radar data. This relationship was examined using satellite SSM/I data obtained over the western equatorial Pacific during TOGA COARE. RMS error inferred directly from SSM/I rainfall estimates was found to be larger than predicted from surface data, and to depend less on local rain rate than was predicted. Preliminary examination of TRMM microwave estimates shows better agreement with surface data. A simple method of estimating RMS error in satellite rainfall estimates is suggested, based on quantities that can be directly computed from the satellite data.
Fellin, Francesco; Righetto, Roberto; Fava, Giovanni; Trevisan, Diego; Amelio, Dante; Farace, Paolo
2017-03-01
To investigate the range errors made in treatment planning due to the presence of immobilization devices along the proton beam path. The water equivalent thickness (WET) of selected devices was measured with a high-energy spot and a multi-layer ionization chamber and compared with that predicted by the treatment planning system (TPS). Two treatment couches, two thermoplastic masks (both un-stretched and stretched) and one headrest were selected. In the TPS, every immobilization device was modelled as being part of the patient. The following parameters were assessed: CT acquisition protocol, dose-calculation grid-sizes (1.5 and 3.0 mm) and beam entrance with respect to the devices (coplanar and non-coplanar). Finally, the potential errors produced by a wrong manual separation between the treatment couch and the CT table (not present during treatment) were investigated. In the thermoplastic mask, there was a clear effect due to beam entrance, a moderate effect due to the CT protocols and almost no effect due to TPS grid-size, with 1 mm errors observed only when thick un-stretched portions were crossed by non-coplanar beams. In the treatment couches the WET errors were negligible (<0.3 mm) regardless of the grid-size and CT protocol. The potential range errors produced in the manual separation between treatment couch and CT table were small with a 1.5 mm grid-size, but could be >0.5 mm with a 3.0 mm grid-size. In the headrest, WET errors were negligible (0.2 mm). With only one exception (un-stretched mask, non-coplanar beams), the WET of all the immobilization devices was properly modelled by the TPS. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
NASA Technical Reports Server (NTRS)
Fiske, David R.
2004-01-01
In an earlier paper, Misner (2004, Class. Quant. Grav., 21, S243) presented a novel algorithm for computing the spherical harmonic components of data represented on a cubic grid. I extend Misner's original analysis by making detailed error estimates of the numerical errors accrued by the algorithm, by using symmetry arguments to suggest a more efficient implementation scheme, and by explaining how the algorithm can be applied efficiently on data with explicit reflection symmetries.
A Comprehensive Study of Gridding Methods for GPS Horizontal Velocity Fields
NASA Astrophysics Data System (ADS)
Wu, Yanqiang; Jiang, Zaisen; Liu, Xiaoxia; Wei, Wenxin; Zhu, Shuang; Zhang, Long; Zou, Zhenyu; Xiong, Xiaohui; Wang, Qixin; Du, Jiliang
2017-03-01
Four gridding methods for GPS velocities are compared in terms of their precision, applicability and robustness by analyzing simulated data with uncertainties from 0.0 to ±3.0 mm/a. When the input data are 1° × 1° grid sampled and the uncertainty of the additional error is greater than ±1.0 mm/a, the gridding results show that the least-squares collocation method is highly robust while the robustness of the Kriging method is low. In contrast, the spherical harmonics and the multi-surface function are moderately robust, and the regional singular values for the multi-surface function method and the edge effects for the spherical harmonics method become more significant with increasing uncertainty of the input data. When the input data (with additional errors of ±2.0 mm/a) are decimated by 50% from the 1° × 1° grid data and then erased in three 6° × 12° regions, the gridding results in these three regions indicate that the least-squares collocation and the spherical harmonics methods have good performances, while the multi-surface function and the Kriging methods may lead to singular values. The gridding techniques are also applied to GPS horizontal velocities with an average error of ±0.8 mm/a over the Chinese mainland and the surrounding areas, and the results show that the least-squares collocation method has the best performance, followed by the Kriging and multi-surface function methods. Furthermore, the edge effects of the spherical harmonics method are significantly affected by the sparseness and geometric distribution of the input data. In general, the least-squares collocation method is superior in terms of its robustness, edge effect, error distribution and stability, while the other methods have several positive features.
On the Estimation of Errors in Sparse Bathymetric Geophysical Data Sets
NASA Astrophysics Data System (ADS)
Jakobsson, M.; Calder, B.; Mayer, L.; Armstrong, A.
2001-05-01
There is a growing demand in the geophysical community for better regional representations of the world ocean's bathymetry. However, given the vastness of the oceans and the relatively limited coverage of even the most modern mapping systems, it is likely that many of the older data sets will remain part of our cumulative database for several more decades. Therefore, regional bathymetrical compilations that are based on a mixture of historic and contemporary data sets will have to remain the standard. This raises the problem of assembling bathymetric compilations and utilizing data sets not only with a heterogeneous cover but also with a wide range of accuracies. In combining these data into regularly spaced grids of bathymetric values, which the majority of numerical procedures in earth sciences require, we are often forced to use a complex interpolation scheme due to the sparseness and irregularity of the input data points. Consequently, we are faced with the difficult task of assessing the confidence that we can assign to the final grid product, a task that is not usually addressed in most bathymetric compilations. We approach the problem of assessing the confidence via a direct-simulation Monte Carlo method. We start with a small subset of data from the International Bathymetric Chart of the Arctic Ocean (IBCAO) grid model [Jakobsson et al., 2000]. This grid is compiled from a mixture of data sources ranging from single beam soundings with available metadata to spot soundings with no available metadata, to digitized contours; the test dataset shows examples of all of these types. From this database, we assign a priori error variances based on available metadata, and when this is not available, based on a worst-case scenario in an essentially heuristic manner. We then generate a number of synthetic datasets by randomly perturbing the base data using normally distributed random variates, scaled according to the predicted error model.
These datasets are then re-gridded using the same methodology as the original product, generating a set of plausible grid models of the regional bathymetry that we can use for standard error estimates. Finally, we repeat the entire random estimation process and analyze each run's standard error grids in order to examine sampling bias and variance in the predictions. The final products of the estimation are a collection of standard error grids, which we combine with the source data density in order to create a grid that contains information about the bathymetry model's reliability. Jakobsson, M., Cherkis, N., Woodward, J., Coakley, B., and Macnab, R., 2000, A new grid of Arctic bathymetry: A significant resource for scientists and mapmakers, EOS Transactions, American Geophysical Union, v. 81, no. 9, p. 89, 93, 96.
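The direct-simulation Monte Carlo loop described above can be sketched in a few lines. This is an illustrative reduction, not the IBCAO workflow: inverse-distance weighting stands in for the actual gridder, and the two a priori sigma values (assumed here) mimic modern soundings with metadata versus historic spot soundings.

```python
import numpy as np

rng = np.random.default_rng(1)

# Sparse soundings with a priori 1-sigma depth errors from (assumed) metadata.
x = rng.uniform(0, 100, 40)
y = rng.uniform(0, 100, 40)
depth = 200.0 + 0.5 * x - 0.3 * y
sigma = np.where(rng.random(40) < 0.5, 2.0, 15.0)  # modern vs. historic data

gx, gy = np.meshgrid(np.linspace(5, 95, 10), np.linspace(5, 95, 10))

def idw_grid(x, y, z, gx, gy, p=2.0):
    """Inverse-distance-weighted gridding (illustrative stand-in gridder)."""
    d2 = (gx.ravel()[:, None] - x) ** 2 + (gy.ravel()[:, None] - y) ** 2
    w = 1.0 / (d2 ** (p / 2) + 1e-9)
    return (w @ z) / w.sum(axis=1)

# Monte Carlo: perturb by the predicted error model, re-grid, accumulate.
realizations = np.array([
    idw_grid(x, y, depth + rng.normal(0, sigma), gx, gy)
    for _ in range(200)
])
std_error_grid = realizations.std(axis=0).reshape(gx.shape)
```

The per-node standard deviation across realizations is the "standard error grid"; combined with source-data density, it yields the reliability layer the abstract describes.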
NASA Astrophysics Data System (ADS)
Pradhan, Aniruddhe; Akhavan, Rayhaneh
2017-11-01
Effect of collision model, subgrid-scale model and grid resolution in Large Eddy Simulation (LES) of wall-bounded turbulent flows with the Lattice Boltzmann Method (LBM) is investigated in turbulent channel flow. The Single Relaxation Time (SRT) collision model is found to be more accurate than the Multi-Relaxation Time (MRT) collision model in well-resolved LES. Accurate LES requires grid resolutions of Δ+ <= 4 in the near-wall region, which is comparable to Δ+ <= 2 required in DNS. On coarser grids SRT becomes unstable, while MRT remains stable but gives unacceptably large errors. LES with no model gave errors comparable to the Dynamic Smagorinsky Model (DSM) and the Wall Adapting Local Eddy-viscosity (WALE) model. The resulting errors in the prediction of the friction coefficient in turbulent channel flow at a bulk Reynolds Number of 7860 (Reτ ≈ 442) with Δ+ = 4 and no-model, DSM and WALE were 1.7%, 2.6%, and 3.1% with SRT, and 8.3%, 7.5%, and 8.7% with MRT, respectively. These results suggest that LES of wall-bounded turbulent flows with LBM requires either grid-embedding in the near-wall region, with grid resolutions comparable to DNS, or a wall model. Results of LES with grid-embedding and wall models will be discussed.
Aerosol anomalies in Nimbus-7 coastal zone color scanner data obtained in Japan area
NASA Technical Reports Server (NTRS)
Fukushima, Hajime; Sugimori, Yasuhiro; Toratani, Mitsuhiro; Smith, Raymond C.; Yasuda, Yoshizumi
1989-01-01
About 400 CZCS (coastal zone color scanner) scenes covering the Japan area in November 1978-May 1982 were processed to study the applicability of the Gordon-Clark atmospheric correction scheme which produces water-leaving radiances Lw at 443 nm, 520 nm, and 550 nm as well as phytoplankton pigment maps. Typical spring-fall aerosol radiance in the images was found to be 0.8-1.5 μW/(cm²·nm·sr), which is about 50 percent more than reported for the US eastern coastal images. The correction for about half the data resulted in negative Lw (443) values, implying overestimation of the aerosol effect for this channel. Several possible reasons for this are considered, including deviation of the aerosol optical thickness tau(a) at 443 nm from that estimated by Angstrom's exponential law, which the algorithm assumes. The analysis shows that, assuming the use of the Gordon-Clark algorithm, and for a pigment concentration of about 1 microgram/l, -40 percent to +100 percent error in satellite estimates is common. Although this does not fully explain the negative Lw (443) in the satellite data, it seems to contribute to the problem significantly, together with other error sources, including one in the sensor calibration.
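The Angstrom extrapolation the algorithm assumes is a one-line power law: tau_a(λ) = β·λ^(−α). A minimal sketch, with illustrative (not measured) optical thicknesses at the 520 and 550 nm bands, estimates α from the two longer bands and extrapolates to 443 nm; a real aerosol that departs from this law at 443 nm produces exactly the overestimation discussed above.

```python
import math

# Angstrom's law: tau_a(lam) = beta * lam**(-alpha).
lam1, lam2 = 520.0, 550.0
tau1, tau2 = 0.115, 0.100   # assumed aerosol optical thicknesses at two bands

# Fit the Angstrom exponent from the two long-wavelength bands...
alpha = math.log(tau1 / tau2) / math.log(lam2 / lam1)

# ...and extrapolate to the 443 nm channel, as the correction scheme does.
tau_443 = tau1 * (443.0 / lam1) ** (-alpha)
```

With these numbers the extrapolated tau_a(443) exceeds both fitted values, since optical thickness grows toward shorter wavelengths under the power law.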
Bedini, José Luis; Wallace, Jane F; Pardo, Scott; Petruschke, Thorsten
2015-10-07
Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients' health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs relative to the reference values than the other 2 BGMS. Insulin dosing errors were lower for the Contour Next USB than for the other 2 systems. All BGMS fulfilled ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. Results showed that the Contour Next USB had the lowest MARD values across the tested glucose range, as compared with the 2 other BGMS. CEG and SEG analyses as well as calculation of the hypothetical bolus insulin dosing error suggest a high accuracy of the Contour Next USB. © 2015 Diabetes Technology Society.
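The MARD statistic used throughout this comparison is simply the mean of the absolute meter-versus-reference differences, each taken relative to the reference value. A minimal sketch with made-up readings (mg/dL):

```python
def mard(meter, reference):
    """Mean absolute relative difference, in percent, vs. reference values."""
    diffs = [abs(m - r) / r for m, r in zip(meter, reference)]
    return 100.0 * sum(diffs) / len(diffs)

ref = [70.0, 100.0, 150.0, 250.0]    # hexokinase reference values (illustrative)
bgms = [77.0, 95.0, 156.0, 240.0]    # paired meter readings (illustrative)
score = mard(bgms, ref)
```

Here the per-sample relative differences are 10%, 5%, 4%, and 4%, giving a MARD of 5.75%; a lower MARD indicates closer agreement with the reference method.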
Pirnstill, Casey W; Malik, Bilal H; Gresham, Vincent C; Coté, Gerard L
2012-09-01
Over the past 35 years considerable research has been performed toward the investigation of noninvasive and minimally invasive glucose monitoring techniques. Optical polarimetry is one noninvasive technique that has shown promise as a means to ascertain blood glucose levels through measuring the glucose concentrations in the anterior chamber of the eye. However, one of the key limitations to the use of optical polarimetry as a means to noninvasively measure glucose levels is the presence of sample noise caused by motion-induced time-varying corneal birefringence. In this article our group presents, for the first time, results that show dual-wavelength polarimetry can be used to accurately detect glucose concentrations in the presence of motion-induced birefringence in vivo using New Zealand White rabbits. In total, nine animal studies (three New Zealand White rabbits across three separate days) were conducted. Using the dual-wavelength optical polarimetric approach, in vivo, an overall mean average relative difference of 4.49% (11.66 mg/dL) was achieved with 100% Zone A+B hits on a Clarke error grid, including 100% falling in Zone A. The results indicate that dual-wavelength polarimetry can effectively be used to significantly reduce the noise due to time-varying corneal birefringence in vivo, allowing the accurate measurement of glucose concentration in the aqueous humor of the eye and correlating that with blood glucose.
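The Clarke error grid cited here (and in the surrounding studies) classifies paired reference/estimate readings into zones A-E by clinical risk. A simplified sketch of just the Zone A rule, as it is commonly stated (within 20% of the reference, or both readings in the hypoglycemic range below 70 mg/dL); the full grid's boundaries for zones B-E are more intricate and omitted here:

```python
def clarke_zone_a(reference, estimate):
    """Simplified Clarke error grid Zone A test: clinically accurate readings.

    Zone A holds when both values are below 70 mg/dL, or the estimate is
    within 20% of the reference. (The full grid has zones A-E; only the
    Zone A rule is sketched here.)
    """
    if reference < 70.0 and estimate < 70.0:
        return True
    return abs(estimate - reference) <= 0.2 * reference

# (reference, estimate) pairs in mg/dL, illustrative values.
readings = [(100.0, 110.0), (60.0, 52.0), (200.0, 300.0)]
zone_a_hits = [clarke_zone_a(r, e) for r, e in readings]
```

The first pair is within 20%, the second is hypoglycemic on both axes, and the third (a 50% overestimate) falls outside Zone A; the polarimetry study above reports 100% of its points inside Zone A.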
Ghys, Timothy; Goedhuys, Wim; Spincemaille, Katrien; Gorus, Frans; Gerlo, Erik
2007-01-01
Glucose testing at the bedside has become an integral part of the management strategy in diabetes and of the careful maintenance of normoglycemia in all patients in intensive care units. We evaluated two point-of-care glucometers for the determination of plasma-equivalent blood glucose. The Precision PCx and the Accu-Chek Inform glucometers were evaluated. Imprecision and bias relative to the Vitros 950 system were determined using protocols of the Clinical and Laboratory Standards Institute (CLSI). The effects of low, normal, and high hematocrit levels were investigated. Interference by maltose was also studied. Within-run precision for both instruments ranged from 2-5%. Total imprecision was less than 5% except for the Accu-Chek Inform at the low level (2.9 mmol/L). Both instruments correlated well with the comparison instrument and showed excellent recovery and linearity. Both systems reported at least 95% of their values within zone A of the Clarke Error Grid, and both fulfilled the CLSI quality criteria. The more stringent goals of the American Diabetes Association, however, were not reached. Both systems showed negative bias at high hematocrit levels. Maltose interfered with the glucose measurements on the Accu-Chek Inform but not on the Precision PCx. Both systems showed satisfactory imprecision and were reliable in reporting plasma-equivalent glucose concentrations. The most stringent performance goals, however, were not met.
3. View of Clark Fork Vehicle Bridge facing southwest. Bridge ...
3. View of Clark Fork Vehicle Bridge facing southwest. Bridge from north shore of Clark Fork River. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
Evaluation of tracking accuracy of the CyberKnife system using a webcam and printed calibrated grid.
Sumida, Iori; Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshikazu; Miyamoto, Youichi; Yamazaki, Hideya; Mabuchi, Nobuhisa; Tsuda, Eimei; Ogawa, Kazuhiko
2016-03-08
Tracking accuracy for the CyberKnife's Synchrony system is commonly evaluated using a film-based verification method. We have evaluated a verification system that uses a webcam and a printed calibrated grid to verify tracking accuracy over three different motion patterns. A box with an attached printed calibrated grid and four fiducial markers was attached to the motion phantom. A target marker was positioned at the grid's center. The box was set up using the other three markers. Target tracking accuracy was evaluated under three conditions: 1) stationary; 2) sinusoidal motion with different amplitudes of 5, 10, 15, and 20 mm for the same cycle of 4 s and different cycles of 2, 4, 6, and 8 s with the same amplitude of 15 mm; and 3) irregular breathing patterns in six human volunteers breathing normally. Infrared markers were placed on the volunteers' abdomens, and their trajectories were used to simulate the target motion. All tests were performed with one-dimensional motion in the craniocaudal direction. The webcam captured the grid's motion and a laser beam was used to simulate the CyberKnife's beam. Tracking error was defined as the difference between the grid's center and the laser beam. With a stationary target, mean tracking error was measured at 0.4 mm. For sinusoidal motion, tracking error was less than 2 mm for any amplitude and breathing cycle. For the volunteers' breathing patterns, the mean tracking error range was 0.78-1.67 mm. Therefore, accurate lesion targeting requires individual quality assurance for each patient.
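The error metric itself is straightforward: the webcam's pixel offset between the grid centre and the laser spot, converted to millimetres via the printed grid's calibration. A minimal sketch with an assumed calibration factor and made-up per-frame offsets:

```python
import math

# Assumed calibration: one 10 mm grid square spans 100 pixels in the webcam
# image, i.e. 0.1 mm per pixel. Offsets below are illustrative frame samples.
MM_PER_PIXEL = 0.1

offsets_px = [(4.0, 3.0), (8.0, 6.0), (0.0, 5.0)]   # (dx, dy) laser vs. centre
errors_mm = [math.hypot(dx, dy) * MM_PER_PIXEL for dx, dy in offsets_px]
mean_error_mm = sum(errors_mm) / len(errors_mm)
```

The three sample frames give per-frame errors of 0.5, 1.0, and 0.5 mm, i.e. a mean tracking error of about 0.67 mm, within the 0.78-1.67 mm range the study reports for volunteer breathing patterns.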
4. View of Clark Fork Vehicle Bridge facing northeast. Bridge ...
4. View of Clark Fork Vehicle Bridge facing northeast. Bridge from south shore of Clark Fork River showing 4 spans. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
Julian, B.R.; Evans, J.R.; Pritchard, M.J.; Foulger, G.R.
2000-01-01
Some computer programs based on the Aki-Christofferson-Husebye (ACH) method of teleseismic tomography contain an error caused by identifying local grid directions with azimuths on the spherical Earth. This error, which is most severe in high latitudes, introduces systematic errors into computed ray paths and distorts inferred Earth models. It is best dealt with by explicitly correcting for the difference between true and grid directions. Methods for computing these directions are presented in this article and are likely to be useful in many other kinds of regional geophysical studies that use Cartesian coordinates and flat-earth approximations.
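The grid-versus-true-north difference is the meridian convergence of the map projection in use. As an illustrative sketch only (the correct formula depends on the projection the tomography code actually uses), the transverse-Mercator-style approximation gamma = atan(tan(Δλ)·sin(φ)) shows why the error grows toward high latitudes:

```python
import math

def grid_convergence(lat_deg, dlon_deg):
    """Approximate meridian convergence for a transverse-Mercator-style grid:
    gamma = atan(tan(dlon) * sin(lat)), the angle (in degrees) between grid
    north and true north at longitude offset dlon from the central meridian."""
    lat = math.radians(lat_deg)
    dlon = math.radians(dlon_deg)
    return math.degrees(math.atan(math.tan(dlon) * math.sin(lat)))

# True azimuth = grid azimuth + convergence (sign conventions vary by code).
gamma_mid = grid_convergence(45.0, 2.0)   # mid-latitude station
gamma_high = grid_convergence(70.0, 2.0)  # high latitude: larger correction
```

At the equator the correction vanishes; at 70° latitude it is noticeably larger than at 45°, matching the abstract's remark that the uncorrected error is most severe in high latitudes.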
Increasing accuracy of dispersal kernels in grid-based population models
Slone, D.H.
2011-01-01
Dispersal kernels in grid-based population models specify the proportion, distance and direction of movements within the model landscape. Spatial errors in dispersal kernels can have large compounding effects on model accuracy. Circular Gaussian and Laplacian dispersal kernels at a range of spatial resolutions were investigated, and methods for minimizing errors caused by the discretizing process were explored. Kernels of progressively smaller sizes relative to the landscape grid size were calculated using cell-integration and cell-center methods. These kernels were convolved repeatedly, and the final distribution was compared with a reference analytical solution. For large Gaussian kernels (σ > 10 cells), the total kernel error was <10⁻¹¹ compared to analytical results. Using an invasion model that tracked the time a population took to reach a defined goal, the discrete model results were comparable to the analytical reference. With Gaussian kernels that had σ ≤ 0.12 using the cell integration method, or σ ≤ 0.22 using the cell center method, the kernel error was greater than 10%, which resulted in invasion times that were orders of magnitude different from theoretical results. A goal-seeking routine was developed to adjust the kernels to minimize overall error. With this, corrections for small kernels were found that decreased overall kernel error to <10⁻¹¹ and invasion time error to <5%.
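The cell-center versus cell-integration distinction is easy to demonstrate in one dimension. This sketch (unit cell size, the study's worst-case σ = 0.12) builds a discrete Gaussian kernel both ways: sampling the density at cell centres badly misstates the total mass when the kernel is small relative to a cell, while integrating the density over each cell (via the error function) conserves it.

```python
import math

def center_kernel(sigma, cells=5):
    """Cell-center method: sample the Gaussian pdf at each unit cell's centre."""
    half = cells // 2
    return [math.exp(-0.5 * (i / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))
            for i in range(-half, half + 1)]

def integrated_kernel(sigma, cells=5):
    """Cell-integration method: integrate the pdf over each unit cell (erf)."""
    half = cells // 2

    def cdf(x):
        return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

    return [cdf(i + 0.5) - cdf(i - 0.5) for i in range(-half, half + 1)]

sigma = 0.12  # small kernel relative to the cell size: worst case in the study
center_mass = sum(center_kernel(sigma))
integrated_mass = sum(integrated_kernel(sigma))
```

For σ = 0.12 the centre-sampled kernel's total mass is roughly 3.3 instead of 1, so repeated convolution compounds the error, whereas the integrated kernel's mass is correct to near machine precision.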
Test functions for three-dimensional control-volume mixed finite-element methods on irregular grids
Naff, R.L.; Russell, T.F.; Wilson, J.D.
2000-01-01
Numerical methods based on unstructured grids, with irregular cells, usually require discrete shape functions to approximate the distribution of quantities across cells. For control-volume mixed finite-element methods, vector shape functions are used to approximate the distribution of velocities across cells and vector test functions are used to minimize the error associated with the numerical approximation scheme. For a logically cubic mesh, the lowest-order shape functions are chosen in a natural way to conserve intercell fluxes that vary linearly in logical space. Vector test functions, while somewhat restricted by the mapping into the logical reference cube, admit a wider class of possibilities. Ideally, an error minimization procedure to select the test function from an acceptable class of candidates would be the best procedure. Lacking such a procedure, we first investigate the effect of possible test functions on the pressure distribution over the control volume; specifically, we look for test functions that allow for the elimination of intermediate pressures on cell faces. From these results, we select three forms for the test function for use in a control-volume mixed method code and subject them to an error analysis for different forms of grid irregularity; errors are reported in terms of the discrete L2 norm of the velocity error. Of these three forms, one appears to produce optimal results for most forms of grid irregularity.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Maasakkers, Joannes D.; Jacob, Daniel J.; Sulprizio, Melissa P.
Here we present a gridded inventory of US anthropogenic methane emissions with 0.1° × 0.1° spatial resolution, monthly temporal resolution, and detailed scale-dependent error characterization. The inventory is designed to be consistent with the 2016 US Environmental Protection Agency (EPA) Inventory of US Greenhouse Gas Emissions and Sinks (GHGI) for 2012. The EPA inventory is available only as national totals for different source types. We use a wide range of databases at the state, county, local, and point source level to disaggregate the inventory and allocate the spatial and temporal distribution of emissions for individual source types. Results show large differences with the EDGAR v4.2 global gridded inventory commonly used as a priori estimate in inversions of atmospheric methane observations. We derive grid-dependent error statistics for individual source types from comparison with the Environmental Defense Fund (EDF) regional inventory for Northeast Texas. These error statistics are independently verified by comparison with the California Greenhouse Gas Emissions Measurement (CALGEM) grid-resolved emission inventory. Finally, our gridded, time-resolved inventory provides an improved basis for inversion of atmospheric methane observations to estimate US methane emissions and interpret the results in terms of the underlying processes.
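The disaggregation step, allocating a national source-type total onto grid cells in proportion to a spatial proxy, reduces to a weighted split. A minimal sketch with made-up numbers (the actual inventory uses many proxies, e.g. well counts or facility locations per source type):

```python
# Disaggregate a national emission total onto grid cells using a spatial
# proxy (e.g. well counts per cell for oil/gas sources). Values illustrative.
national_total = 1000.0                 # Gg CH4/a for one GHGI source type
proxy = [0.0, 5.0, 15.0, 30.0, 50.0]    # proxy activity in 5 grid cells

weights = [p / sum(proxy) for p in proxy]
gridded = [w * national_total for w in weights]
```

By construction the gridded values sum back to the national total, which is the consistency-with-GHGI property the inventory is designed around; cells with no proxy activity receive zero emissions.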
2. View of Clark Fork Vehicle Bridge facing northeast. Bridge ...
2. View of Clark Fork Vehicle Bridge facing northeast. Bridge from south shore of Clark Fork River showing 4 1/2 spans. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
7. View of Clark Fork Vehicle Bridge facing northwest. Bridge ...
7. View of Clark Fork Vehicle Bridge facing northwest. Bridge from south shore of Clark Fork River showing 4 1/2 spans. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
NASA Astrophysics Data System (ADS)
Saxena, Hemant; Singh, Alka; Rai, J. N.
2018-07-01
This article discusses the design and control of a single-phase grid-connected photovoltaic (PV) system. A 5-kW PV system is designed and integrated at the DC link of an H-bridge voltage source converter (VSC). The control of the VSC and switching logic is modelled using a generalised integrator (GI). GI and its variants, such as the second-order GI, have recently evolved for synchronisation and are being used as phase locked loop (PLL) circuits for grid integration. Design of PLL circuits and the use of transformations such as Park's and Clarke's are much easier in three-phase systems. But obtaining in-phase and quadrature components becomes an important and challenging issue in single-phase systems. This article addresses this issue and discusses an altogether different application of GI for the design of a compensator based on the extraction of in-phase and quadrature components. GI is frequently used as a PLL; however, in this article, it is not used for synchronisation purposes. A new controller has been designed for a single-phase grid-connected PV system working as a single-phase active compensator. Extensive simulation results are shown for the working of the integrated PV system under different atmospheric and operating conditions during daytime as well as night-time. Experimental results showing the proposed control approach are presented and discussed for the hardware set-up developed in the laboratory.
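The in-phase/quadrature extraction at the heart of this scheme can be sketched with a discrete second-order generalised integrator. This is a generic SOGI simulation, not the paper's controller: the gain k, the forward-Euler step, and the 50 Hz grid frequency are assumed illustrative values.

```python
import math

OMEGA = 2 * math.pi * 50.0   # grid frequency, rad/s (assumed 50 Hz)
K = 1.41                     # SOGI damping gain (common textbook choice)
DT = 1e-5                    # forward-Euler integration step, s

v1, v2 = 0.0, 0.0            # in-phase and quadrature states
t = 0.0
for _ in range(int(0.2 / DT)):          # simulate 10 cycles to settle
    v = math.sin(OMEGA * t)             # single-phase grid voltage (p.u.)
    dv1 = (K * (v - v1) - v2) * OMEGA   # SOGI state equations
    dv2 = v1 * OMEGA
    v1 += dv1 * DT
    v2 += dv2 * DT
    t += DT

# After settling, v1 tracks v and v2 lags it by 90 degrees, so the pair
# behaves like the alpha-beta components a Clarke transform would provide
# in a three-phase system.
magnitude = math.hypot(v1, v2)
```

At the tuned frequency the steady-state transfer functions give v1 = v and v2 = −cos(ωt), so the orthogonal pair has unit magnitude; this is what makes single-phase compensator design tractable without a three-phase Clarke transform.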
1. View of Clark Fork Vehicle Bridge facing west. Panorama ...
1. View of Clark Fork Vehicle Bridge facing west. Panorama showing the entire span of bridge from north shore of the Clark Fork River. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
Analyzing communication errors in an air medical transport service.
Dalto, Joseph D; Weir, Charlene; Thomas, Frank
2013-01-01
Poor communication can result in adverse events. Presently, no standards exist for classifying and analyzing air medical communication errors. This study sought to determine the frequency and types of communication errors reported within an air medical quality and safety assurance reporting system. Of 825 quality assurance reports submitted in 2009, 278 were randomly selected and analyzed for communication errors. Each communication error was classified and mapped to Clark's communication level hierarchy (ie, levels 1-4). Descriptive statistics were performed, and comparisons were evaluated using chi-square analysis. Sixty-four communication errors were identified in 58 reports (21% of 278). Of the 64 identified communication errors, only 18 (28%) were classified by the staff to be communication errors. Communication errors occurred most often at level 1 (n = 42/64, 66%) followed by level 4 (n = 21/64, 33%). Level 2 and 3 communication failures were rare (<1%). Communication errors were found in a fifth of quality and safety assurance reports. The reporting staff identified less than a third of these errors. Nearly all communication errors (99%) occurred at either the lowest level of communication (level 1, 66%) or the highest level (level 4, 33%). An air medical communication ontology is necessary to improve the recognition and analysis of communication errors. Copyright © 2013 Air Medical Journal Associates. Published by Elsevier Inc. All rights reserved.
Yan, Jun; Yu, Kegen; Chen, Ruizhi; Chen, Liang
2017-05-30
In this paper a two-phase compressive sensing (CS) and received signal strength (RSS)-based target localization approach is proposed to improve position accuracy by dealing with the unknown target population and the effect of grid dimensions on position error. In the coarse localization phase, by formulating target localization as a sparse signal recovery problem, grids with recovery vector components greater than a threshold are chosen as the candidate target grids. In the fine localization phase, by partitioning each candidate grid, the target position in a grid is iteratively refined by using the minimum residual error rule and the least-squares technique. When all the candidate target grids are iteratively partitioned and the measurement matrix is updated, the recovery vector is re-estimated. Threshold-based detection is employed again to determine the target grids and hence the target population. As a consequence, both the target population and the position estimation accuracy can be significantly improved. Simulation results demonstrate that the proposed approach achieves the best accuracy among all the algorithms compared.
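The two-phase coarse-then-fine idea can be illustrated without the full CS machinery. The sketch below is a simplified, noise-free, single-target reduction: RSS signatures come from an assumed log-distance path-loss model, the coarse phase picks the grid cell whose signature minimizes the residual against the measurement, and the fine phase re-applies the minimum-residual rule on a sub-grid of the winning cell, echoing the paper's iterative partitioning.

```python
import numpy as np

# Four anchor sensors at the corners of a 10 x 10 area; target is unknown.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
target = np.array([3.3, 7.6])

def rss(points):
    """Log-distance path-loss model: P = P0 - 10*n*log10(d) (assumed params)."""
    d = np.linalg.norm(points[:, None, :] - sensors[None, :, :], axis=-1)
    return -30.0 - 10.0 * 2.0 * np.log10(np.maximum(d, 0.1))

measured = rss(target[None, :])[0]

def best_cell(centers):
    """Minimum-residual rule: pick the candidate whose RSS signature fits."""
    resid = np.linalg.norm(rss(centers) - measured, axis=1)
    return centers[np.argmin(resid)]

# Phase 1: coarse 10 x 10 grid of 1 x 1 cells.
c = np.linspace(0.5, 9.5, 10)
coarse = best_cell(np.array([[x, y] for x in c for y in c]))

# Phase 2: refine the winning cell with a 5 x 5 sub-grid.
f = np.linspace(-0.4, 0.4, 5)
fine = best_cell(np.array([coarse + [dx, dy] for dx in f for dy in f]))
error = float(np.linalg.norm(fine - target))
```

The refinement shrinks the position error well below the coarse cell size, which is the grid-dimension effect the paper's two-phase design targets; the full method additionally recovers an unknown number of targets via sparse recovery and thresholding.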
5. View of Clark Fork Vehicle Bridge facing east. Bridge ...
5. View of Clark Fork Vehicle Bridge facing east. Bridge from south shore of Clark Fork River-southernmost span. 1900-era Northern Pacific Railway Bridge in background. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
NASA Astrophysics Data System (ADS)
Wu, Heng
2000-10-01
In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator eθ, which is based on the spatial derivative of velocity direction fields, is designed and constructed. The a-posteriori error estimator corresponds to the antisymmetric part of the deformation-rate-tensor, and it is sensitive to the second derivative of the velocity angle field. Rationality discussions reveal that the velocity angle error estimator is a curvature error estimator, and its value reflects the accuracy of streamline curves. It is also found that the velocity angle error estimator contains the nonlinear convective term of the Navier-Stokes equations, and it identifies and computes the direction difference when the convective acceleration direction and the flow velocity direction have a disparity. Through benchmarking computed variables with the analytic solution of Kovasznay flow or the finest grid of cavity flow, it is demonstrated that the velocity angle error estimator has a better performance than the strain error estimator. The benchmarking work also shows that the computed profile obtained by using eθ can achieve the best matching outcome with the true θ field, and that it is asymptotic to the true θ variation field, with a promise of fewer unknowns. Unstructured grids are adapted by employing local cell division as well as unrefinement of transition cells. Using element class and node class can efficiently construct a hierarchical data structure which provides cell and node inter-reference at each adaptive level. Employing element pointers and node pointers can dynamically maintain the connection of adjacent elements and adjacent nodes, and thus avoids time-consuming search processes.
The adaptive scheme is applied to viscous incompressible flow at different Reynolds numbers. It is found that the velocity angle error estimator can detect most flow characteristics and produce dense grids in the regions where flow velocity directions have abrupt changes. In addition, the eθ estimator makes the derivative error dilutely distribute in the whole computational domain and also allows the refinement to be conducted at regions of high error. Through comparison of the velocity angle error across the interface with neighbouring cells, it is verified that the adaptive scheme using eθ provides an optimum mesh which can clearly resolve local flow features in a precise way. The adaptive results justify the applicability of the eθ estimator and prove that this error estimator is a valuable adaptive indicator for the automatic refinement of unstructured grids.
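The core quantity, sensitivity to the second derivative of the velocity direction field θ = atan2(v, u), can be sketched with finite differences on a structured grid. This is a crude illustration of the idea, not the thesis' unstructured-grid estimator; it simply shows that the indicator vanishes for uniform flow and lights up where streamline direction turns, e.g. across a shear layer.

```python
import numpy as np

def theta_error_indicator(u, v, h=1.0):
    """Velocity-angle indicator: magnitude of the second difference of the
    direction field theta = atan2(v, u), per interior cell (sketch of e_theta)."""
    theta = np.arctan2(v, u)
    d2x = (theta[1:-1, 2:] - 2 * theta[1:-1, 1:-1] + theta[1:-1, :-2]) / h ** 2
    d2y = (theta[2:, 1:-1] - 2 * theta[1:-1, 1:-1] + theta[:-2, 1:-1]) / h ** 2
    return np.abs(d2x) + np.abs(d2y)

y, x = np.mgrid[-3.0:3.0:31j, -3.0:3.0:31j]   # grid spacing h = 0.2

# Uniform flow: direction is constant, so the indicator vanishes everywhere.
flat = theta_error_indicator(np.ones_like(x), np.zeros_like(x), h=0.2)

# Shear-layer-like flow: direction turns across y = 0, indicator peaks there.
shear = theta_error_indicator(np.ones_like(x), np.tanh(y), h=0.2)
```

An adaptive scheme driven by this indicator would refine where the shear-layer values are large and leave the uniform-flow region coarse, mirroring the behaviour reported above.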
Dynamic mesh adaption for triangular and tetrahedral grids
NASA Technical Reports Server (NTRS)
Biswas, Rupak; Strawn, Roger
1993-01-01
The following topics are discussed: requirements for dynamic mesh adaption; linked-list data structure; edge-based data structure; adaptive-grid data structure; three types of element subdivision; mesh refinement; mesh coarsening; additional constraints for coarsening; anisotropic error indicator for edges; unstructured-grid Euler solver; inviscid 3-D wing; and mesh quality for solution-adaptive grids. The discussion is presented in viewgraph form.
Accuracy of Gradient Reconstruction on Grids with High Aspect Ratio
NASA Technical Reports Server (NTRS)
Thomas, James
2008-01-01
Gradient approximation methods commonly used in unstructured-grid finite-volume schemes intended for solutions of high Reynolds number flow equations are studied comprehensively. The accuracy of gradients within cells and within faces is evaluated systematically for both node-centered and cell-centered formulations. Computational and analytical evaluations are made on a series of high-aspect-ratio grids with different primal elements, including quadrilateral, triangular, and mixed element grids, with and without random perturbations to the mesh. Both rectangular and cylindrical geometries are considered; the latter serves to study the effects of geometric curvature. The study shows that the accuracy of gradient reconstruction on high-aspect-ratio grids is determined by a combination of the grid and the solution. The contributors to the error are identified and approaches to reduce errors are given, including the addition of higher-order terms in the direction of larger mesh spacing. A parameter Γ characterizing accuracy on curved high-aspect-ratio grids is discussed and an approximate-mapped-least-square method using a commonly-available distance function is presented; the method provides accurate gradient reconstruction on general grids. The study is intended to be a reference guide accompanying the construction of accurate and efficient methods for high Reynolds number applications.
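A minimal least-squares gradient reconstruction, the baseline the study evaluates, fits a cell's gradient to the differences against its neighbors. The sketch below (illustrative stencil, not the paper's mapped method) uses a high-aspect-ratio stencil with dy three orders of magnitude smaller than dx, as in near-wall grids; for a linear field the reconstruction is exact regardless of aspect ratio.

```python
import numpy as np

def lsq_gradient(center, neighbors, phi_c, phi_n):
    """Least-squares gradient reconstruction at a cell from its neighbors:
    minimize sum_i (phi_i - phi_c - g . dx_i)^2 over the gradient g."""
    dx = neighbors - center
    dphi = phi_n - phi_c
    g, *_ = np.linalg.lstsq(dx, dphi, rcond=None)
    return g

# High-aspect-ratio stencil (dy << dx), as in near-wall RANS grids.
center = np.array([0.0, 0.0])
neigh = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1e-3], [0.0, -1e-3]])

def phi(p):
    return 2.0 * p[..., 0] + 5.0 * p[..., 1]   # linear exact test field

g = lsq_gradient(center, neigh, phi(center), phi(neigh))
```

The study's point is that this exactness breaks down once the grid is perturbed or curved and the solution varies nonlinearly; weighting, higher-order terms, or the mapped variant then become necessary.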
NASA Astrophysics Data System (ADS)
Crimmins, T. M.; Switzer, J.; Rosemartin, A.; Marsh, L.; Gerst, K.; Crimmins, M.; Weltzin, J. F.
2016-12-01
Since 2016 the USA National Phenology Network (USA-NPN; www.usanpn.org) has produced and delivered daily maps and short-term forecasts of accumulated growing degree days and spring onset dates at fine spatial scale for the conterminous United States. Because accumulated temperature is a strong driver of phenological transitions in plants and animals, including leaf-out, flowering, fruit ripening, and migration, these data products have utility for a wide range of natural resource planning and management applications, including scheduling invasive species and pest detection and control activities, determining planting dates, anticipating allergy outbreaks and planning agricultural harvest dates. The USA-NPN is a national-scale program that supports scientific advancement and decision-making by collecting, storing, and sharing phenology data and information. We will be expanding the suite of gridded map products offered by the USA-NPN to include predictive species-specific maps of phenological transitions in plants and animals at fine spatial and temporal resolution in the future. Data products, such as the gridded maps currently produced by the USA-NPN, inherently contain uncertainty and error arising from multiple sources, including error propagated forward from underlying climate data and from the models implemented. As providing high-quality, vetted data in a transparent way is central to the USA-NPN, we aim to identify and report the sources and magnitude of uncertainty and error in gridded maps and forecast products. At present, we compare our real-time gridded products to independent, trustworthy data sources, such as the Climate Reference Network, on a daily basis and report Mean Absolute Error and bias through an interactive online dashboard.
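The accumulated growing degree day (GDD) computation behind these maps, and the MAE/bias reporting against a reference network, are both simple to sketch. Base temperature, daily values, and the reference series below are illustrative, not USA-NPN product values.

```python
# Accumulated growing degree days: daily mean temperature above a base
# threshold, summed over the season. Base and daily values are illustrative.
BASE_C = 10.0

tmax = [12.0, 18.0, 22.0, 25.0]
tmin = [4.0, 8.0, 12.0, 15.0]

daily_gdd = [max(0.0, (hi + lo) / 2.0 - BASE_C) for hi, lo in zip(tmax, tmin)]
agdd = sum(daily_gdd)

# Error reporting against an independent reference (e.g. the Climate
# Reference Network): mean absolute error and bias of the gridded product.
reference = [0.0, 2.5, 7.5, 9.5]
mae = sum(abs(g - r) for g, r in zip(daily_gdd, reference)) / len(reference)
bias = sum(g - r for g, r in zip(daily_gdd, reference)) / len(reference)
```

The first day's mean (8 °C) falls below the base and contributes nothing; the remaining days accumulate to an AGDD of 20. MAE captures average disagreement magnitude while bias preserves sign, which is why the dashboard reports both.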
Impacts of uncertainties in European gridded precipitation observations on regional climate analysis
Gobiet, Andreas
2016-01-01
Gridded precipitation data sets are frequently used to evaluate climate models or to remove model output biases. Although precipitation data are error prone due to the high spatio-temporal variability of precipitation and due to considerable measurement errors, relatively few attempts have been made to account for observational uncertainty in model evaluation or in bias correction studies. In this study, we compare three types of European daily data sets featuring two Pan-European data sets and a set that combines eight very high-resolution station-based regional data sets. Furthermore, we investigate seven widely used, larger scale global data sets. Our results demonstrate that the differences between these data sets have the same magnitude as precipitation errors found in regional climate models. Therefore, including observational uncertainties is essential for climate studies, climate model evaluation, and statistical post-processing. Following our results, we suggest the following guidelines for regional precipitation assessments. (1) Include multiple observational data sets from different sources (e.g. station, satellite, reanalysis based) to estimate observational uncertainties. (2) Use data sets with high station densities to minimize the effect of precipitation undersampling (may induce about 60% error in data sparse regions). The information content of a gridded data set is mainly related to its underlying station density and not to its grid spacing. (3) Consider undercatch errors of up to 80% in high latitudes and mountainous regions. (4) Analyses of small-scale features and extremes are especially uncertain in gridded data sets. For higher confidence, use climate-mean and larger scale statistics. In conclusion, neglecting observational uncertainties potentially misguides climate model development and can severely affect the results of climate change impact assessments. PMID:28111497
Prein, Andreas F; Gobiet, Andreas
2017-01-01
NASA Astrophysics Data System (ADS)
Liguori, Sara; O'Loughlin, Fiachra; Souvignet, Maxime; Coxon, Gemma; Freer, Jim; Woods, Ross
2014-05-01
This research presents a newly developed observed sub-daily gridded precipitation product for England and Wales. Importantly, our analysis specifically allows a quantification of rainfall errors from grid to the catchment scale, useful for hydrological model simulation and the evaluation of prediction uncertainties. Our methodology involves the disaggregation of the current one kilometre daily gridded precipitation records available for the United Kingdom [1]. The hourly product is created using information from: 1) 2000 tipping-bucket rain gauges; and 2) the United Kingdom Met-Office weather radar network. These two independent datasets provide rainfall estimates at temporal resolutions much smaller than the current daily gridded rainfall product, thus allowing the disaggregation of the daily rainfall records to an hourly timestep. Our analysis is conducted for the period 2004 to 2008, limited by the current availability of the datasets. We analyse the uncertainty components affecting the accuracy of this product. Specifically, we explore how these uncertainties vary spatially, temporally and with climatic regimes. Preliminary results indicate scope for improvement of hydrological model performance through the use of this new hourly gridded rainfall product. Such a product will improve our ability to diagnose and identify structural errors in hydrological modelling by including the quantification of input errors. References [1] Keller V, Young AR, Morris D, Davies H (2006) Continuous Estimation of River Flows. Technical Report: Estimation of Precipitation Inputs. Environment Agency.
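The disaggregation idea described above can be sketched with a simple proportional scheme: split the daily gauge total across 24 hours in proportion to an independent sub-daily signal (e.g. radar or tipping-bucket counts). This is an assumed, minimal scheme for illustration; the authors' actual method may differ.

```python
import numpy as np

def disaggregate_daily(daily_total_mm, hourly_signal):
    """Distribute a daily rainfall total over 24 hourly values in
    proportion to an independent sub-daily signal (illustrative scheme)."""
    signal = np.asarray(hourly_signal, dtype=float)
    if signal.sum() <= 0:
        # No sub-daily information for this day: fall back to a uniform split.
        return np.full(24, daily_total_mm / 24.0)
    weights = signal / signal.sum()
    return daily_total_mm * weights

# A 12 mm daily total with radar echoes concentrated in hours 6-8.
hourly = disaggregate_daily(12.0, [0] * 6 + [1, 3, 2] + [0] * 15)
```

By construction the hourly values sum back to the daily total, so the disaggregated product stays consistent with the daily gauge record.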
Evaluation of tracking accuracy of the CyberKnife system using a webcam and printed calibrated grid
Shiomi, Hiroya; Higashinaka, Naokazu; Murashima, Yoshikazu; Miyamoto, Youichi; Yamazaki, Hideya; Mabuchi, Nobuhisa; Tsuda, Eimei; Ogawa, Kazuhiko
2016-01-01
Tracking accuracy for the CyberKnife's Synchrony system is commonly evaluated using a film‐based verification method. We have evaluated a verification system that uses a webcam and a printed calibrated grid to verify tracking accuracy over three different motion patterns. A box with an attached printed calibrated grid and four fiducial markers was attached to the motion phantom. A target marker was positioned at the grid's center. The box was set up using the other three markers. Target tracking accuracy was evaluated under three conditions: 1) stationary; 2) sinusoidal motion with different amplitudes of 5, 10, 15, and 20 mm for the same cycle of 4 s and different cycles of 2, 4, 6, and 8 s with the same amplitude of 15 mm; and 3) irregular breathing patterns in six human volunteers breathing normally. Infrared markers were placed on the volunteers’ abdomens, and their trajectories were used to simulate the target motion. All tests were performed with one‐dimensional motion in the craniocaudal direction. The webcam captured the grid's motion and a laser beam was used to simulate the CyberKnife's beam. Tracking error was defined as the difference between the grid's center and the laser beam. With a stationary target, mean tracking error was measured at 0.4 mm. For sinusoidal motion, tracking error was less than 2 mm for any amplitude and breathing cycle. For the volunteers’ breathing patterns, the mean tracking error range was 0.78‐1.67 mm. Therefore, accurate lesion targeting requires individual quality assurance for each patient. PACS number(s): 87.55.D‐, 87.55.km, 87.55.Qr, 87.56.Fc PMID:27074474
NASA Astrophysics Data System (ADS)
Sun, K.; Zhu, L.; Gonzalez Abad, G.; Nowlan, C. R.; Miller, C. E.; Huang, G.; Liu, X.; Chance, K.; Yang, K.
2017-12-01
It has been well demonstrated that regridding Level 2 products (satellite observations from individual footprints, or pixels) from multiple sensors/species onto regular spatial and temporal grids makes the data more accessible for scientific studies and can even lead to additional discoveries. However, synergizing multiple species retrieved from multiple satellite sensors faces many challenges, including differences in spatial coverage, viewing geometry, and data filtering criteria. These differences will lead to errors and biases if not treated carefully. Operational gridded products are often at 0.25°×0.25° resolution with a global scale, which is too coarse for local heterogeneous emission sources (e.g., urban areas), and at fixed temporal intervals (e.g., daily or monthly). We propose a consistent framework to fully use and properly weight the information of all possible individual satellite observations. A key aspect of this work is an accurate knowledge of the spatial response function (SRF) of the satellite Level 2 pixels. We found that the conventional overlap-area-weighting method (tessellation) is accurate only when the SRF is homogeneous within the parameterized pixel boundary and zero outside the boundary. A tessellation error arises if the SRF is instead a smooth distribution and this distribution is not properly considered. On the other hand, discretizing the SRF at the destination grid will also induce errors. By balancing these error sources, we found that the SRF should be used when gridding OMI data to fine resolutions such as 0.2°. Case studies merging multiple species and wind data onto a 0.01° grid will be shown in the presentation.
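The conventional overlap-area (tessellation) weighting that the abstract contrasts with SRF-based gridding can be sketched as follows: a rectangular footprint with a homogeneous (boxcar) response is spread over a regular destination grid in proportion to overlap area. The footprint geometry, grid edges, and value are illustrative assumptions, not OMI specifics.

```python
import numpy as np

def grid_footprint(value, x0, x1, y0, y1, xedges, yedges):
    """Spread one Level 2 footprint [x0,x1]x[y0,y1] onto a regular grid,
    weighting each destination cell by its overlap area (boxcar SRF)."""
    nx, ny = len(xedges) - 1, len(yedges) - 1
    weights = np.zeros((ny, nx))
    for j in range(ny):
        for i in range(nx):
            # Overlap of the footprint rectangle with cell (j, i).
            dx = max(0.0, min(x1, xedges[i + 1]) - max(x0, xedges[i]))
            dy = max(0.0, min(y1, yedges[j + 1]) - max(y0, yedges[j]))
            weights[j, i] = dx * dy
    total = weights.sum()
    return value * weights / total if total > 0 else weights

xe = np.array([0.0, 1.0, 2.0])
ye = np.array([0.0, 1.0])
# A footprint straddling the two cells equally splits its value 50/50.
g = grid_footprint(10.0, 0.5, 1.5, 0.0, 1.0, xe, ye)
```

As the abstract notes, this is exact only for a boxcar SRF; a smooth SRF would require replacing the overlap areas with integrals of the response function over each cell.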
Mehl, S.; Hill, M.C.
2002-01-01
A new method of local grid refinement for two-dimensional block-centered finite-difference meshes is presented in the context of steady-state groundwater-flow modeling. The method uses an iteration-based feedback with shared nodes to couple two separate grids. The new method is evaluated by comparison with results using a uniform fine mesh, a variably spaced mesh, and a traditional method of local grid refinement without a feedback. Results indicate: (1) The new method exhibits quadratic convergence for homogeneous systems and convergence equivalent to uniform-grid refinement for heterogeneous systems. (2) Coupling the coarse grid with the refined grid in a numerically rigorous way allowed for improvement in the coarse-grid results. (3) For heterogeneous systems, commonly used linear interpolation of heads from the large model onto the boundary of the refined model produced heads that are inconsistent with the physics of the flow field. (4) The traditional method works well in situations where the better resolution of the locally refined grid has little influence on the overall flow-system dynamics, but if this is not true, lack of a feedback mechanism produced errors in head up to 3.6% and errors in cell-to-cell flows up to 25%. © 2002 Elsevier Science Ltd. All rights reserved.
8. View of Clark Fork Vehicle Bridge facing southwest. Looking ...
8. View of Clark Fork Vehicle Bridge facing southwest. Looking at understructure of northernmost span. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
20. View of Clark Fork Vehicle Bridge facing up. Looking ...
20. View of Clark Fork Vehicle Bridge facing up. Looking at understructure of northernmost span. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
Analyzing Effect of System Inertia on Grid Frequency Forecasting Using Two Stage Neuro-Fuzzy System
NASA Astrophysics Data System (ADS)
Chourey, Divyansh R.; Gupta, Himanshu; Kumar, Amit; Kumar, Jitesh; Kumar, Anand; Mishra, Anup
2018-04-01
Frequency forecasting is an important aspect of power system operation. The system frequency varies with load-generation imbalance, and frequency variation depends upon various parameters including system inertia. System inertia determines the rate of fall of frequency after a disturbance in the grid. However, the inertia of the system is typically not considered while forecasting the frequency of a power system during planning and operation, which leads to significant errors in forecasting. In this paper, the effect of inertia on frequency forecasting is analysed for a particular grid system, and a parameter equivalent to system inertia is introduced. This parameter is used to forecast the frequency of a typical power grid for any instant of time. The system gives appreciable results with reduced error.
Xie, Hongtu; Shi, Shaoying; Xiao, Hui; Xie, Chao; Wang, Feng; Fang, Qunle
2016-01-01
With the rapid development of one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) that considers motion errors in OS-BFSAR imaging processing is presented. This method can not only precisely handle the large spatial variances, serious range-azimuth coupling and motion errors, but can also greatly improve the imaging efficiency compared with the direct time-domain algorithm (DTDA). Besides, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements considering motion errors for the polar grids to offer a near-optimum tradeoff between imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging is provided. Second, the polar grids of the subimages are defined, and the subaperture imaging in the ETDA is derived. The sampling requirements for the polar grids are derived from the point of view of the bandwidth. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of efficiency improvement. PMID:27845757
18. View of Clark Fork Vehicle Bridge facing north. Looking ...
18. View of Clark Fork Vehicle Bridge facing north. Looking at north concrete abutment and timber stringers. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
Global Marine Gravity and Bathymetry at 1-Minute Resolution
NASA Astrophysics Data System (ADS)
Sandwell, D. T.; Smith, W. H.
2008-12-01
We have developed global gravity and bathymetry grids at 1-minute resolution. Three approaches are used to reduce the error in the satellite-derived marine gravity anomalies. First, we have retracked the raw waveforms from the ERS-1 and Geosat/GM missions resulting in improvements in range precision of 40% and 27%, respectively. Second, we have used the recently published EGM2008 global gravity model as a reference field to provide a seamless gravity transition from land to ocean. Third, we have used a biharmonic spline interpolation method to construct residual vertical deflection grids. Comparisons between shipboard gravity and the global gravity grid show errors ranging from 2.0 mGal in the Gulf of Mexico to 4.0 mGal in areas with rugged seafloor topography. The largest errors occur on the crests of narrow large seamounts. The bathymetry grid is based on prediction from satellite gravity and available ship soundings. Global soundings were assembled from a wide variety of sources including NGDC/GEODAS, NOAA Coastal Relief, CCOM, IFREMER, JAMSTEC, NSF Polar Programs, UKHO, LDEO, HIG, SIO and numerous miscellaneous contributions. The National Geospatial-Intelligence Agency and other volunteering hydrographic offices within the International Hydrographic Organization provided significant global shallow water (< 300 m) soundings derived from their nautical charts. All soundings were converted to a common format and were hand-edited in relation to a smooth bathymetric model. Land elevations and shoreline location are based on a combination of SRTM30, GTOPO30, and ICESat data. A new feature of the bathymetry grid is a matching grid of source identification numbers that enables one to establish the origin of the depth estimate in each grid cell. Both the gravity and bathymetry grids are freely available.
Exploring Hypersonic, Unstructured-Grid Issues through Structured Grids
NASA Technical Reports Server (NTRS)
Mazaheri, Ali R.; Kleb, Bill
2007-01-01
Pure-tetrahedral unstructured grids have been shown to produce asymmetric heat transfer rates for symmetric problems. Meanwhile, two-dimensional structured grids produce symmetric solutions and, as documented here, introducing a spanwise degree of freedom to these structured grids also yields symmetric solutions. The effects of grid skewness and other perturbations of structured grids are investigated to uncover possible mechanisms behind the unstructured-grid solution asymmetries. By using controlled experiments around a known, good solution, the effects of particular grid pathologies are uncovered. These structured-grid experiments reveal that similar solution degradation occurs as for unstructured grids, especially for heat transfer rates. Non-smooth grids within the boundary layer are also shown to produce large local errors in heat flux but do not affect surface pressures.
NASA Astrophysics Data System (ADS)
Li, Chang; Wang, Qing; Shi, Wenzhong; Zhao, Sisi
2018-05-01
The accuracy of earthwork calculations that compute terrain volume is critical to digital terrain analysis (DTA). The uncertainties in volume calculations (VCs) based on a DEM are primarily related to three factors: 1) model error (ME), which is caused by the algorithm adopted for a VC model, 2) discrete error (DE), which is usually caused by DEM resolution and terrain complexity, and 3) propagation error (PE), which is caused by the variables' errors. Based on these factors, the uncertainty modelling and analysis of VCs based on a regular grid DEM are investigated in this paper. In particular, a way to quantify the uncertainty of VCs through a confidence interval based on truncation error (TE) is proposed. In the experiments, the trapezoidal double rule (TDR) and Simpson's double rule (SDR) were used to calculate volume, where the TE is the major ME, and six simulated regular grid DEMs with different terrain complexity and resolution (i.e. DE) were generated by a Gauss synthetic surface to easily obtain the theoretical true value and eliminate the interference of data errors. For PE, Monte Carlo simulation techniques and spatial autocorrelation were used to represent DEM uncertainty. This study can enrich uncertainty modelling and analysis-related theories of geographic information science.
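The trapezoidal double rule (TDR) named above can be sketched for a regular grid DEM: heights are summed with composite-trapezoid weights in both grid directions. The grid spacing and the constant test surface below are illustrative stand-ins for the paper's Gauss synthetic surfaces.

```python
import numpy as np

def tdr_volume(z, dx, dy):
    """Volume under a regular-grid DEM by the trapezoidal double rule:
    corner weight 1, edge weight 2, interior weight 4, scaled by dx*dy/4."""
    w = np.ones_like(z, dtype=float)
    w[1:-1, :] *= 2.0
    w[:, 1:-1] *= 2.0
    return (w * z).sum() * dx * dy / 4.0

# Constant surface of height 2 over a 10 x 10 unit domain -> exact volume 200.
z = np.full((11, 11), 2.0)
vol = tdr_volume(z, 1.0, 1.0)
```

For a bilinear surface the TDR is exact; on curved terrain the truncation error the paper analyses appears, shrinking as the DEM resolution (the DE factor) increases.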
19. View of Clark Fork Vehicle Bridge facing north. Looking ...
19. View of Clark Fork Vehicle Bridge facing north. Looking at north abutment and underside of northernmost span. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
22. View of Clark Fork Vehicle Bridge facing downwest side. ...
22. View of Clark Fork Vehicle Bridge facing down-west side. Looking at road deck and vertical laced channel. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
NASA Technical Reports Server (NTRS)
Troy, B. E., Jr.; Maier, E. J.
1975-01-01
The effects of the grid transparency and finite collector size on the values of thermal ion density and temperature determined by the standard RPA (retarding potential analyzer) analysis method are investigated. The current-voltage curves calculated for varying RPA parameters and a given ion mass, temperature, and density are analyzed by the standard RPA method. It is found that only small errors in temperature and density are introduced for an RPA with typical dimensions, and that even when the density error is substantial for nontypical dimensions, the temperature error remains minimal.
Time-dependent grid adaptation for meshes of triangles and tetrahedra
NASA Technical Reports Server (NTRS)
Rausch, Russ D.
1993-01-01
This paper presents in viewgraph form a method of optimizing grid generation for unsteady CFD flow calculations that distributes the numerical error evenly throughout the mesh. Adaptive meshing is used to locally enrich in regions of relatively large errors and to locally coarsen in regions of relatively small errors. The enrichment/coarsening procedures are robust for isotropic cells; however, enrichment of high aspect ratio cells may fail near boundary surfaces with relatively large curvature. The enrichment indicator worked well for the cases shown, but in general requires user supervision for a more efficient solution.
21. View of Clark Fork Vehicle Bridge facing west. Looking ...
21. View of Clark Fork Vehicle Bridge facing west. Looking at bridge deck, guard rail, juncture of two bridge spans. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
11. View of Clark Fork Vehicle Bridge facing northwest. Southernmost ...
11. View of Clark Fork Vehicle Bridge facing northwest. Southernmost span. Plaque was originally located where striped traffic sign is posted. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
Diagnosing Diagnosis Errors: Lessons From A Multi-Institutional Collaborative Project
2005-01-01
Breast Cancer Inappropriately reassured to have benign lesions - 21/435 (5%); 14 (3%) misread mammogram, 4 (1%) misread pathologic finding, 5 (1...diagnostic tests they are using. It is well known that a normal mammogram in a woman with a breast lump does not rule out the diagnosis of breast cancer ...physician delay in the diagnosis of breast cancer . Arch Intern Med 2002;162:1343–8. 27. Clark S. Spinal infections go undetected. Lancet 1998;351
Cross Validation Through Two-Dimensional Solution Surface for Cost-Sensitive SVM.
Gu, Bin; Sheng, Victor S; Tay, Keng Yeow; Romano, Walter; Li, Shuo
2017-06-01
Model selection plays an important role in cost-sensitive SVM (CS-SVM). It has been proven that the global minimum cross validation (CV) error can be efficiently computed based on the solution path for one-parameter learning problems. However, it is a challenge to obtain the global minimum CV error for CS-SVM based on a one-dimensional solution path and traditional grid search, because CS-SVM has two regularization parameters. In this paper, we propose a solution and error surfaces based CV approach (CV-SES). More specifically, we first compute a two-dimensional solution surface for CS-SVM based on a bi-parameter space partition algorithm, which can fit solutions of CS-SVM for all values of both regularization parameters. Then, we compute a two-dimensional validation error surface for each CV fold, which can fit validation errors of CS-SVM for all values of both regularization parameters. Finally, we obtain the CV error surface by superposing K validation error surfaces, which can find the global minimum CV error of CS-SVM. Experiments are conducted on seven datasets for cost-sensitive learning and on four datasets for imbalanced learning. Experimental results show not only that our proposed CV-SES has better generalization ability than CS-SVM with various hybrids between grid search and solution path methods, and than the recently proposed cost-sensitive hinge loss SVM with three-dimensional grid search, but also that CV-SES uses less running time.
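The final superposition step can be sketched as follows: sum the K per-fold validation error surfaces over a discretized plane of the two regularization parameters and take the arg-min. The surfaces here are synthetic random stand-ins (the paper computes exact surfaces via its bi-parameter space partition algorithm, which is not reproduced here).

```python
import numpy as np

rng = np.random.default_rng(0)
K = 5                      # number of CV folds
grid_shape = (20, 20)      # discretized (C+, C-) plane, for illustration only

# One validation error surface per fold (synthetic values in [0, 1)).
fold_surfaces = rng.random((K, *grid_shape))

# Superpose the K surfaces to get the CV error surface, then locate its
# global minimum over both regularization parameters at once.
cv_surface = fold_surfaces.sum(axis=0)
i, j = np.unravel_index(cv_surface.argmin(), grid_shape)
```

The point of the paper is that the true surfaces are piecewise-smooth functions available for *all* parameter values, so this arg-min is a genuine global minimum rather than the best point of a coarse grid search.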
NASA Technical Reports Server (NTRS)
Troy, B. E., Jr.; Maier, E. J.
1973-01-01
The analysis of ion data from retarding potential analyzers (RPA's) is generally done under the planar approximation, which assumes that the grid transparency is constant with angle of incidence and that all ions reaching the plane of the collectors are collected. These approximations are not valid for situations in which the ion thermal velocity is comparable to the vehicle velocity, causing ions to enter the RPA with high average transverse velocity. To investigate these effects, the current-voltage curves for H+ at 4000 K were calculated, taking into account the finite collector size and the variation of grid transparency with angle. These curves are then analyzed under the planar approximation. The results show that only small errors in temperature and density are introduced for an RPA with typical dimensions; and that even when the density error is substantial for non-typical dimensions, the temperature error remains minimal.
A design approach for improving the performance of single-grid planar retarding potential analyzers
DOE Office of Scientific and Technical Information (OSTI.GOV)
Davidson, R. L.; Earle, G. D.
2011-01-15
Planar retarding potential analyzers (RPAs) have a long flight history and have been included on numerous spaceflight missions including Dynamics Explorer, the Defense Meteorological Satellite Program, and the Communications/Navigation Outage Forecast System. RPAs allow for simultaneous measurement of plasma composition, density, temperature, and the component of the velocity vector normal to the aperture plane. Internal conductive grids are used to approximate ideal potential planes within the instrument, but these grids introduce perturbations to the potential map inside the RPA and cause errors in the measurement of the parameters listed above. A numerical technique is presented herein for minimizing these grid errors for a specific mission by varying the depth and spacing of the grid wires. The example mission selected concentrates on plasma dynamics near the sunset terminator in the equatorial region. The international reference ionosphere model is used to discern the average conditions expected for this mission, and a numerical model of the grid-particle interaction is used to choose a grid design that will best fulfill the mission goals.
NASA Technical Reports Server (NTRS)
Montesano, P. M.; Cook, B. D.; Sun, G.; Simard, M.; Zhang, Z.; Nelson, R. F.; Ranson, K. J.; Lutchke, S.; Blair, J. B.
2012-01-01
The synergistic use of active and passive remote sensing (i.e., data fusion) demonstrates the ability of spaceborne light detection and ranging (LiDAR), synthetic aperture radar (SAR) and multispectral imagery for achieving the accuracy requirements of a global forest biomass mapping mission. This data fusion approach also provides a means to extend 3D information from discrete spaceborne LiDAR measurements of forest structure across scales much larger than that of the LiDAR footprint. For estimating biomass, these measurements mix a number of errors including those associated with LiDAR footprint sampling over regional - global extents. A general framework for mapping above ground live forest biomass (AGB) with a data fusion approach is presented and verified using data from NASA field campaigns near Howland, ME, USA, to assess AGB and LiDAR sampling errors across a regionally representative landscape. We combined SAR and Landsat-derived optical (passive optical) image data to identify forest patches, and used image and simulated spaceborne LiDAR data to compute AGB and estimate LiDAR sampling error for forest patches and 100m, 250m, 500m, and 1km grid cells. Forest patches were delineated with Landsat-derived data and airborne SAR imagery, and simulated spaceborne LiDAR (SSL) data were derived from orbit and cloud cover simulations and airborne data from NASA's Laser Vegetation Imaging Sensor (LVIS). At both the patch and grid scales, we evaluated differences in AGB estimation and sampling error from the combined use of LiDAR with both SAR and passive optical and with either SAR or passive optical alone.
This data fusion approach demonstrates that incorporating forest patches into the AGB mapping framework can provide sub-grid forest information for coarser grid-level AGB reporting, and that combining simulated spaceborne LiDAR with SAR and passive optical data is most useful for estimating AGB when measurements from LiDAR are limited, because it minimized forest AGB sampling errors by 15-38%. Furthermore, spaceborne global scale accuracy requirements were achieved. At least 80% of the grid cells at 100m, 250m, 500m, and 1km grid levels met AGB density accuracy requirements using a combination of passive optical and SAR along with machine learning methods to predict vegetation structure metrics for forested areas without LiDAR samples. Finally, using either passive optical or SAR, accuracy requirements were met at the 500m and 250m grid level, respectively.
23. View of Clark Fork Vehicle Bridge facing upwest side. ...
23. View of Clark Fork Vehicle Bridge facing up-west side. Looking at structural connection of top chord, vertical laced channel and diagonal bars. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
13. View of Clark Fork Vehicle Bridge facing south. Concrete ...
13. View of Clark Fork Vehicle Bridge facing south. Concrete barrier blocks access. Plaque was originally located where striped traffic sign is posted at right. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
12. View of Clark Fork Vehicle Bridge facing south. Approach ...
12. View of Clark Fork Vehicle Bridge facing south. Approach from the north road. Plaque was originally located where striped traffic sign is posted. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Gustafson, William I.; Qian, Yun; Fast, Jerome D.
2011-07-13
Recent improvements to many global climate models include detailed, prognostic aerosol calculations intended to better reproduce the observed climate. However, the trace gas and aerosol fields are treated at the grid-cell scale with no attempt to account for sub-grid impacts on the aerosol fields. This paper begins to quantify the error introduced by the neglected sub-grid variability for the shortwave aerosol radiative forcing for a representative climate model grid spacing of 75 km. An analysis of the value added in downscaling aerosol fields is also presented to give context to the WRF-Chem simulations used for the sub-grid analysis. We found that 1) the impact of neglected sub-grid variability on the aerosol radiative forcing is strongest in regions of complex topography and complicated flow patterns, and 2) scale-induced differences in emissions contribute strongly to the impact of neglected sub-grid processes on the aerosol radiative forcing. These two effects together, when simulated at 75 km vs. 3 km in WRF-Chem, result in an average daytime mean bias of over 30% error in top-of-atmosphere shortwave aerosol radiative forcing for a large percentage of central Mexico during the MILAGRO field campaign.
Wesolowski, E.A.; Nelson, R.A.
1987-01-01
As part of the Souris River water-quality assessment, traveltime, longitudinal-dispersion, and reaeration measurements were made during September 1983 on segments of the 186-mile reach of the Souris River from Lake Darling Dam to the J. Clark Salyer National Wildlife Refuge. The primary objective was to determine traveltime, longitudinal-dispersion, and reaeration coefficients during low flow. Streamflow in the reach ranged from 10.5 to 47.0 cubic feet per second during the measurement period. On the basis of channel and hydraulic characteristics, the 186-mile reach was subdivided into five subreaches that ranged from 18 to 55 river miles in length. Within each subreach, representative test reaches that ranged from 5.0 to 9.1 river miles in length were selected for tracer injection and sample collection. Standard fluorometric techniques were used to measure traveltime and longitudinal dispersion, and a modified tracer technique that used ethylene and propane gas was used to measure reaeration. Mean test-reach velocities ranged from 0.05 to 0.30 foot per second, longitudinal-dispersion coefficients ranged from 4.2 to 61 square feet per second, and reaeration coefficients based on propane ranged from 0.39 to 1.66 per day. Predictive reaeration coefficients obtained from 18 equations (8 semiempirical and 10 empirical) were compared with each measured reaeration coefficient by use of an error-of-estimate analysis. The predictive reaeration coefficients ranged from 0.0008 to 3.4 per day. A semiempirical equation that produced coefficients most similar to the measured coefficients had the smallest absolute error of estimate (0.35). The smallest absolute error of estimate for the empirical equations was 0.41.
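One simple form of the error-of-estimate comparison described above is a mean absolute error between measured and equation-predicted reaeration coefficients. The coefficient values below are illustrative, not the study's data, and the exact error statistic used by the authors may differ.

```python
import numpy as np

# Hypothetical reaeration coefficients (per day) for five test reaches:
# field measurements vs. one predictive equation.
measured = np.array([0.39, 0.62, 0.85, 1.10, 1.66])
predicted = np.array([0.55, 0.70, 0.60, 1.45, 1.20])

# Mean absolute error of estimate for this equation; repeating this for all
# 18 equations and ranking the results mirrors the comparison in the study.
abs_error_of_estimate = np.abs(predicted - measured).mean()
```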
Use of upscaled elevation and surface roughness data in two-dimensional surface water models
Hughes, J.D.; Decker, J.D.; Langevin, C.D.
2011-01-01
In this paper, we present an approach that uses a combination of cell-block- and cell-face-averaging of high-resolution cell elevation and roughness data to upscale hydraulic parameters and accurately simulate surface water flow in relatively low-resolution numerical models. The method developed allows channelized features that preferentially connect large-scale grid cells at cell interfaces to be represented in models where these features are significantly smaller than the selected grid size. The developed upscaling approach has been implemented in a two-dimensional finite difference model that solves a diffusive wave approximation of the depth-integrated shallow surface water equations using preconditioned Newton–Krylov methods. Computational results are presented to show the effectiveness of the mixed cell-block and cell-face averaging upscaling approach in maintaining model accuracy and reducing model run-times, and to show how decreased grid resolution affects errors. Application examples demonstrate that sub-grid roughness coefficient variations have a larger effect on simulated error than sub-grid elevation variations.
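The cell-block half of the upscaling described above can be sketched as a block average of the high-resolution elevation grid onto the coarse model grid (the cell-face averaging that preserves channelized connections at cell interfaces is omitted for brevity; this is an illustration of the general idea, not the paper's implementation).

```python
import numpy as np

def block_average(fine, factor):
    """Upscale a fine grid to a coarse grid by averaging factor x factor
    blocks of fine cells into each coarse cell."""
    ny, nx = fine.shape
    assert ny % factor == 0 and nx % factor == 0, "grid must divide evenly"
    return (fine
            .reshape(ny // factor, factor, nx // factor, factor)
            .mean(axis=(1, 3)))

# A 4x4 fine elevation grid upscaled by a factor of 2 to a 2x2 model grid.
fine = np.arange(16, dtype=float).reshape(4, 4)
coarse = block_average(fine, 2)
```

A plain block average like this smears out narrow channels, which is exactly why the paper supplements it with cell-face averaging at coarse-cell interfaces.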
Grids in topographic maps reduce distortions in the recall of learned object locations.
Edler, Dennis; Bestgen, Anne-Kathrin; Kuchinke, Lars; Dickmann, Frank
2014-01-01
To date, it has been shown that cognitive map representations based on cartographic visualisations are systematically distorted. The grid is a traditional element of map graphics that has rarely been considered in research on perception-based spatial distortions. Grids do not only support the map reader in finding coordinates or locations of objects, they also provide a systematic structure for clustering visual map information ("spatial chunks"). The aim of this study was to examine whether different cartographic kinds of grids reduce spatial distortions and improve recall memory for object locations. Recall performance was measured as both the percentage of correctly recalled objects (hit rate) and the mean distance errors of correctly recalled objects (spatial accuracy). Different kinds of grids (continuous lines, dashed lines, crosses) were applied to topographic maps. These maps were also varied in their type of characteristic areas (LANDSCAPE) and different information layer compositions (DENSITY) to examine the effects of map complexity. The study involving 144 participants shows that all experimental cartographic factors (GRID, LANDSCAPE, DENSITY) improve recall performance and spatial accuracy of learned object locations. Overlaying a topographic map with a grid significantly reduces the mean distance errors of correctly recalled map objects. The paper includes a discussion of a square grid's usefulness concerning object location memory, independent of whether the grid is clearly visible (continuous or dashed lines) or only indicated by crosses.
14. View of Clark Fork Vehicle Bridge facing north. Approach from the south. Concrete barrier blocks access. Plaque was originally located where striped traffic sign is posted at right. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
Gundle, Kenneth R; White, Jedediah K; Conrad, Ernest U; Ching, Randal P
2017-01-01
Surgical navigation systems are increasingly used to aid resection and reconstruction of osseous malignancies. In the process of implementing image-based surgical navigation systems, there are numerous opportunities for error that may impact surgical outcome. This study aimed to examine modifiable sources of error in an idealized scenario, when using a bidirectional infrared surgical navigation system. Accuracy and precision were assessed using a computerized-numerical-controlled (CNC) machined grid with known distances between indentations while varying: 1) the distance from the grid to the navigation camera (range 150 to 247cm), 2) the distance from the grid to the patient tracker device (range 20 to 40cm), and 3) whether the minimum or maximum number of bidirectional infrared markers were actively functioning. For each scenario, distances between grid points were measured at 10-mm increments between 10 and 120mm, with twelve measurements made at each distance. The accuracy outcome was the root mean square (RMS) error between the navigation system distance and the actual grid distance. To assess precision, four indentations were recorded six times for each scenario while also varying the angle of the navigation system pointer. The outcome for precision testing was the standard deviation of the distance between each measured point to the mean three-dimensional coordinate of the six points for each cluster. Univariate and multiple linear regression revealed that as the distance from the navigation camera to the grid increased, the RMS error increased (p<0.001). The RMS error also increased when not all infrared markers were actively tracking (p=0.03), and as the measured distance increased (p<0.001). In a multivariate model, these factors accounted for 58% of the overall variance in the RMS error. 
Standard deviations in repeated measures also increased when not all infrared markers were active (p<0.001), and as the distance between navigation camera and physical space increased (p=0.005). Location of the patient tracker did not affect accuracy (p=0.36) or precision (p=0.97). In our model laboratory test environment, the infrared bidirectional navigation system was more accurate and precise when the distance from the navigation camera to the physical (working) space was minimized and all bidirectional markers were active. These findings may require alterations in operating room setup and software changes to improve the performance of this system.
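The study's two outcome measures can be sketched directly from their definitions (a simplified illustration, not the authors' analysis code): accuracy as the RMS error against the CNC-machined grid distances, and precision as the standard deviation of each repeated measurement's distance from its cluster's mean 3-D coordinate.

```python
import numpy as np

def rms_error(measured, actual):
    """Accuracy: RMS error between navigation-system distances and the
    true (CNC-machined) grid distances."""
    d = np.asarray(measured, dtype=float) - np.asarray(actual, dtype=float)
    return float(np.sqrt(np.mean(d ** 2)))

def precision_sd(points):
    """Precision: SD of the distances from each repeated measurement to
    the mean 3-D coordinate of its cluster (six points per cluster in
    the study)."""
    pts = np.asarray(points, dtype=float)
    dists = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
    return float(dists.std(ddof=1))
```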
Importance of interpolation and coincidence errors in data fusion
NASA Astrophysics Data System (ADS)
Ceccherini, Simone; Carli, Bruno; Tirelli, Cecilia; Zoppetti, Nicola; Del Bianco, Samuele; Cortesi, Ugo; Kujanpää, Jukka; Dragani, Rossana
2018-02-01
The complete data fusion (CDF) method is applied to ozone profiles obtained from simulated measurements in the ultraviolet and in the thermal infrared in the framework of the Sentinel 4 mission of the Copernicus programme. We observe that the quality of the fused products is degraded when the fusing profiles are either retrieved on different vertical grids or referred to different true profiles. To address this shortcoming, a generalization of the complete data fusion method, which takes into account interpolation and coincidence errors, is presented. This upgrade overcomes the encountered problems and provides products of good quality when the fusing profiles are both retrieved on different vertical grids and referred to different true profiles. The impact of the interpolation and coincidence errors on number of degrees of freedom and errors of the fused profile is also analysed. The approach developed here to account for the interpolation and coincidence errors can also be followed to include other error components, such as forward model errors.
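The underlying idea can be illustrated in the simplest scalar case (our own sketch, not the CDF equations of the paper, which operate on full profiles with averaging kernels): fusion weights each measurement by its inverse error variance, and interpolation/coincidence error can be represented by inflating those variances before fusing.

```python
def fuse(x1, var1, x2, var2, extra_var=0.0):
    """Inverse-variance fusion of two estimates of the same quantity.
    `extra_var` crudely inflates each variance to represent an
    interpolation or coincidence error term, in the spirit of the
    generalized CDF."""
    v1, v2 = var1 + extra_var, var2 + extra_var
    w1, w2 = 1.0 / v1, 1.0 / v2
    x_fused = (w1 * x1 + w2 * x2) / (w1 + w2)
    var_fused = 1.0 / (w1 + w2)
    return x_fused, var_fused
```

With equal variances the fused value is the plain average; with unequal variances it moves toward the more certain estimate, and the fused variance is always smaller than either input.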
Mehl, S.; Hill, M.C.
2004-01-01
This paper describes work that extends to three dimensions the two-dimensional local-grid refinement method for block-centered finite-difference groundwater models of Mehl and Hill [Development and evaluation of a local grid refinement method for block-centered finite-difference groundwater models using shared nodes. Adv Water Resour 2002;25(5):497-511]. In this approach, the (parent) finite-difference grid is discretized more finely within a (child) sub-region. The grid refinement method sequentially solves each grid and uses specified flux (parent) and specified head (child) boundary conditions to couple the grids. Iteration achieves convergence between heads and fluxes of both grids. Of most concern is how to interpolate heads onto the boundary of the child grid such that the physics of the parent-grid flow is retained in three dimensions. We develop a new two-step, "cage-shell" interpolation method based on the solution of the flow equation on the boundary of the child between nodes shared with the parent grid. Error analysis using a test case indicates that the shared-node local grid refinement method with cage-shell boundary head interpolation is accurate and robust, and the resulting code is used to investigate three-dimensional local grid refinement of stream-aquifer interactions. Results reveal that (1) the parent and child grids interact to shift the true head and flux solution to a different solution where the heads and fluxes of both grids are in equilibrium, (2) the locally refined model provided a solution for both heads and fluxes in the region of the refinement that was more accurate than a model without refinement only if iterations are performed so that both heads and fluxes are in equilibrium, and (3) the accuracy of the coupling is limited by the parent-grid size: a coarse parent grid limits correct representation of the hydraulics in the feedback from the child grid.
Image stretching on a curved surface to improve satellite gridding
NASA Technical Reports Server (NTRS)
Ormsby, J. P.
1975-01-01
A method for substantially reducing gridding errors due to satellite roll, pitch and yaw is given. A gimbal-mounted curved screen, scaled to 1:7,500,000, is used to stretch the satellite image whereby visible landmarks coincide with a projected map outline. The resulting rms position errors averaged 10.7 km as compared with 25.6 and 34.9 km for two samples of satellite imagery upon which image stretching was not performed.
A new vertical grid nesting capability in the Weather Research and Forecasting (WRF) Model
Daniels, Megan H.; Lundquist, Katherine A.; Mirocha, Jeffrey D.; ...
2016-09-16
Mesoscale atmospheric models are increasingly used for high-resolution (<3 km) simulations to better resolve smaller-scale flow details. Increased resolution is achieved using mesh refinement via grid nesting, a procedure where multiple computational domains are integrated either concurrently or in series. A constraint in the concurrent nesting framework offered by the Weather Research and Forecasting (WRF) Model is that mesh refinement is restricted to the horizontal dimensions. This limitation prevents control of the grid aspect ratio, leading to numerical errors due to poor grid quality and preventing grid optimization. Here, a procedure permitting vertical nesting for one-way concurrent simulation is developed and validated through idealized cases. The benefits of vertical nesting are demonstrated using both mesoscale and large-eddy simulations (LES). Mesoscale simulations of the Terrain-Induced Rotor Experiment (T-REX) show that vertical grid nesting can alleviate numerical errors due to large aspect ratios on coarse grids, while allowing for higher vertical resolution on fine grids. Furthermore, the coarsening of the parent domain does not result in a significant loss of accuracy on the nested domain. LES of neutral boundary layer flow shows that, by permitting optimal grid aspect ratios on both parent and nested domains, use of vertical nesting yields improved agreement with the theoretical logarithmic velocity profile on both domains. Lastly, vertical grid nesting in WRF opens the path forward for multiscale simulations, allowing more accurate simulations spanning a wider range of scales than previously possible.
Malik, Bilal H.; Gresham, Vincent C.; Coté, Gerard L.
2012-01-01
Objective Over the past 35 years considerable research has been performed toward the investigation of noninvasive and minimally invasive glucose monitoring techniques. Optical polarimetry is one noninvasive technique that has shown promise as a means to ascertain blood glucose levels through measuring the glucose concentrations in the anterior chamber of the eye. However, one of the key limitations to the use of optical polarimetry as a means to noninvasively measure glucose levels is the presence of sample noise caused by motion-induced time-varying corneal birefringence. Research Design and Methods In this article our group presents, for the first time, results that show dual-wavelength polarimetry can be used to accurately detect glucose concentrations in the presence of motion-induced birefringence in vivo using New Zealand White rabbits. Results In total, nine animal studies (three New Zealand White rabbits across three separate days) were conducted. Using the dual-wavelength optical polarimetric approach, in vivo, an overall mean average relative difference of 4.49% (11.66 mg/dL) was achieved with 100% Zone A+B hits on a Clarke error grid, including 100% falling in Zone A. Conclusions The results indicate that dual-wavelength polarimetry can effectively be used to significantly reduce the noise due to time-varying corneal birefringence in vivo, allowing the accurate measurement of glucose concentration in the aqueous humor of the eye and correlating that with blood glucose. PMID:22691020
Continuous glucose monitoring--a study of the Enlite sensor during hypo- and hyperbaric conditions.
Adolfsson, Peter; Örnhagen, Hans; Eriksson, Bengt M; Cooper, Ken; Jendle, Johan
2012-06-01
The performance and accuracy of the Enlite(™) (Medtronic, Inc., Northridge, CA) sensor may be affected by microbubble formation at the electrode surface during hypo- and hyperbaric conditions. The effects of acute pressure changes and of prewetting of sensors were investigated. On Day 1, 24 sensors were inserted on the right side of the abdomen and back in one healthy individual; 12 were prewetted with saline solution, and 12 were inserted dry. On Day 2, this procedure was repeated on the left side. All sensors were attached to an iPro continuous glucose monitoring (CGM) recorder. Hypobaric and hyperbaric tests were conducted in a pressure chamber, with each test lasting 105 min. Plasma glucose values were obtained at 5-min intervals with a HemoCue(®) (Ängelholm, Sweden) model 201 glucose analyzer for comparison with sensor glucose values. Ninety percent of the CGM systems operated during the tests. The mean absolute relative difference was lower during hyperbaric than hypobaric conditions (6.7% vs. 14.9%, P<0.001). Sensor sensitivity was slightly decreased (P<0.05) during hypobaric but not during hyperbaric conditions. Clarke Error Grid Analysis showed that 100% of the values were found in the A+B region. No differences were found between prewetted and dry sensors. The Enlite sensor performed adequately during acute pressure changes and was more accurate during hyperbaric than hypobaric conditions. Prewetting the sensors did not improve accuracy. Further studies on type 1 diabetes subjects are needed under various pressure conditions.
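The headline statistic in this and several of the following records, the mean absolute relative difference (MARD), is computed from temporally paired sensor and reference values. A minimal sketch (our own illustration of the standard definition, not the authors' analysis code):

```python
import numpy as np

def mard(sensor, reference):
    """Mean absolute relative difference (%) over paired sensor/reference
    glucose values; lower is better."""
    s = np.asarray(sensor, dtype=float)
    r = np.asarray(reference, dtype=float)
    return float(100.0 * np.mean(np.abs(s - r) / r))
```

On this definition, the 6.7% vs. 14.9% comparison above means the sensor readings deviated from the HemoCue plasma values less than half as much, on average, under hyperbaric conditions.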
Romey, Matthew; Jovanovič, Lois; Bevier, Wendy; Markova, Kateryna; Strasma, Paul; Zisser, Howard
2012-11-01
Stress hyperglycemia in the critically ill is associated with increased morbidity and mortality. Continuous glucose monitoring offers a solution to the difficulties of dosing intravenous insulin properly to maintain glycemic control. The purpose of this study was to evaluate an intravascular continuous glucose monitoring (IV-CGM) system with a sensing element based on the concept of quenched fluorescence. A second-generation intravascular continuous glucose sensor was evaluated in 13 volunteer subjects with type 1 diabetes mellitus. There were 21 study sessions of up to 24 h in duration. Sensors were inserted into peripheral veins of the upper extremity, up to two sensors per subject per study session. Sensor output was compared with temporally correlated reference measurements obtained from venous samples on a laboratory glucose analyzer. Data were obtained from 23 sensors in 13 study sessions with 942 paired reference values. Fourteen out of 23 sensors (60.9%) had a mean absolute relative difference ≤ 10%. Eighty-nine percent of paired points were in the clinically accurate A zone of the Clarke error grid and met ISO 15197 performance criteria. Adequate venous blood flow was identified as a necessary condition for accuracy when local sensor readings are compared with venous blood glucose. The IV-CGM system was capable of achieving a high level of glucose measurement accuracy. However, superficial peripheral veins may not provide adequate blood flow for reliable indwelling blood glucose monitoring. © 2012 Diabetes Technology Society.
Park, Hae-Il; Lee, Seong-Su; Son, Jang-Won; Kwon, Hee-Sun; Kim, Sung Rae; Chae, Hyojin; Kim, Myungshin; Kim, Yonggoo; Yoo, Soonjib
2016-11-01
Element™ Auto-coding Blood Glucose Monitoring System (BGMS; Infopia Co., Ltd., Anyang-si, Korea) was developed for self-monitoring of blood glucose (SMBG). Precision, linearity, and interference were tested. Eighty-four capillary blood samples measured by Element™ BGMS were compared with central laboratory method (CLM) results in venous serum. Accuracy was evaluated using ISO 15197:2013 criteria. Coefficients of variation (CVs; mean) were 2.4% (44.2 mg/dl), 3.7% (100.6 mg/dl), and 2.1% (259.8 mg/dl). Linearity was shown at concentrations of 39.25-456.25 mg/dl (y = 0.989 + 0.984x, SE = 17.63). With up to 15 mg/dl of galactose, ascorbic acid, or acetaminophen, no interference greater than 10.4% was observed. Element™ BGMS glucose was higher than CLM levels by 3.2 mg/dl (at 200 mg/dl) to 8.2 mg/dl (at 100 mg/dl). The minimum specification for bias (3.3%) was met at 140 and 200 mg/dl glucose. In the Clarke and consensus error grids, 100% of specimens were within zones A and B. For Element™ BGMS values, 92.9% (78/84) to 94.0% (79/84) were within 15 mg/dl (< 100 mg/dl) or 15% (> 100 mg/dl) of the average CLM value. Element™ BGMS was considered an appropriate SMBG system for home use; however, the positive bias at low-to-mid glucose levels requires further improvement. © 2016 Wiley Periodicals, Inc.
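The per-sample ISO 15197:2013 accuracy criterion referenced above is a simple threshold test: within ±15 mg/dl of the comparison method below 100 mg/dl, and within ±15% at 100 mg/dl and above (the standard requires 95% of samples to meet it). A minimal sketch:

```python
def iso15197_ok(meter, reference):
    """ISO 15197:2013 per-sample accuracy criterion for a single paired
    meter/reference glucose measurement (both in mg/dl)."""
    if reference < 100.0:
        return abs(meter - reference) <= 15.0
    return abs(meter - reference) <= 0.15 * reference
```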
Real-time improvement of continuous glucose monitoring accuracy: the smart sensor concept.
Facchinetti, Andrea; Sparacino, Giovanni; Guerra, Stefania; Luijf, Yoeri M; DeVries, J Hans; Mader, Julia K; Ellmerer, Martin; Benesch, Carsten; Heinemann, Lutz; Bruttomesso, Daniela; Avogaro, Angelo; Cobelli, Claudio
2013-04-01
Reliability of continuous glucose monitoring (CGM) sensors is key in several applications. In this work we demonstrate that real-time algorithms can render CGM sensors smarter by reducing their uncertainty and inaccuracy and improving their ability to alert for hypo- and hyperglycemic events. The smart CGM (sCGM) sensor concept consists of a commercial CGM sensor whose output enters three software modules, able to work in real time, for denoising, enhancement, and prediction. These three software modules were recently presented in the CGM literature, and here we apply them to the Dexcom SEVEN Plus continuous glucose monitor. We assessed the performance of the sCGM on data collected in two trials, each containing 12 patients with type 1 diabetes. The denoising module improves the smoothness of the CGM time series by an average of ∼57%, the enhancement module reduces the mean absolute relative difference from 15.1 to 10.3% and increases the percentage of paired values falling in the A-zone of the Clarke error grid by 12.6%, and finally, the prediction module forecasts hypo- and hyperglycemic events an average of 14 min ahead of time. We have introduced and implemented the sCGM sensor concept. Analysis of data from 24 patients demonstrates that incorporation of suitable real-time signal processing algorithms for denoising, enhancement, and prediction can significantly improve the performance of CGM applications. This can be of great clinical impact for hypo- and hyperglycemic alert generation as well as in artificial pancreas devices.
Mahmoudi, Zeinab; Johansen, Mette Dencker; Christiansen, Jens Sandahl
2014-01-01
Background: The purpose of this study was to investigate the effect of using a 1-point calibration approach instead of a 2-point calibration approach on the accuracy of a continuous glucose monitoring (CGM) algorithm. Method: A previously published real-time CGM algorithm was compared with its updated version, which used a 1-point calibration instead of a 2-point calibration. In addition, the contribution of the corrective intercept (CI) to the calibration performance was assessed. Finally, the sensor background current was estimated in real time and retrospectively. The study was performed on 132 type 1 diabetes patients. Results: Replacing the 2-point calibration with the 1-point calibration improved the CGM accuracy, with the greatest improvement achieved in hypoglycemia (18.4% median absolute relative differences [MARD] in hypoglycemia for the 2-point calibration, and 12.1% MARD in hypoglycemia for the 1-point calibration). Using 1-point calibration increased the percentage of sensor readings in zone A+B of the Clarke error grid analysis (EGA) in the full glycemic range, and also enhanced hypoglycemia sensitivity. Exclusion of CI from calibration reduced hypoglycemia accuracy, while slightly increasing euglycemia accuracy. Both real-time and retrospective estimation of the sensor background current suggest that the background current can be considered zero in the calibration of the SCGM1 sensor. Conclusions: The sensor readings calibrated with the 1-point calibration approach showed higher accuracy than those calibrated with the 2-point calibration approach. PMID:24876420
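The difference between the two calibration approaches can be sketched with a simplified linear sensor model, current = slope × glucose + background (our own illustration; the published algorithm is more elaborate): a 2-point calibration fits both parameters from two paired points, while a 1-point calibration fixes the background (the study suggests zero is acceptable for this sensor) and estimates only the slope.

```python
def two_point_cal(i1, g1, i2, g2):
    """Sensor gain and background current fitted from two paired
    (current, glucose) calibration points."""
    slope = (i2 - i1) / (g2 - g1)
    return slope, i1 - slope * g1

def one_point_cal(i1, g1, background=0.0):
    """One-point calibration: the background current is assumed known
    (taken as zero here, per the study's finding for this sensor)."""
    return (i1 - background) / g1, background

def to_glucose(current, slope, background):
    """Invert the linear sensor model current = slope*glucose + background."""
    return (current - background) / slope
```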
The Coast Artillery Journal. Volume 73, Number 2, August 1930
1930-08-01
...lieutenant, C. A. C. Clark Neil Piper, graduate U. S. Military Academy, appointed second lieutenant, C. A. C. James S. Sutton, graduate U. S. ... THE COAST ARTILLERY JOURNAL, published as the Journal U. S. Artillery from 1892 to 1922. Maj. Stewart S. Giffin, C. A. C. ... Joint Army and Navy Action in Coast Defense, by Capt. W. D. Puleston, U. S. N., p. 101. More About Probable Error, by 1st Lieut. Philip Schwartz.
Optimizing dynamic downscaling in one-way nesting using a regional ocean model
NASA Astrophysics Data System (ADS)
Pham, Van Sy; Hwang, Jin Hwan; Ku, Hyeyun
2016-10-01
Dynamical downscaling with nested regional oceanographic models has been demonstrated to be an effective approach both for operational regional-scale marine weather forecasting and for projections of future climate change and its impact on the ocean. However, when nesting procedures are carried out in dynamic downscaling from a larger-scale model or set of observations to a smaller scale, errors are unavoidable due to the differences in grid sizes and updating intervals. The present work assesses the impact of errors produced by nesting procedures on the downscaled results from Ocean Regional Circulation Models (ORCMs). Errors are identified and evaluated based on their sources and characteristics by employing the Big-Brother Experiment (BBE). The BBE uses the same model to produce both the nesting and nested simulations, so it addresses those error sources separately (i.e., without combining the contributions of errors from different sources). Here, we focus on discussing errors resulting from differences in spatial grids, updating times, and domain sizes. After the BBE was run separately for diverse cases, a Taylor diagram was used to analyze the results and recommend an optimal combination of grid size, updating period, and domain size. Finally, suggested setups for the downscaling were evaluated by examining the spatial correlations of variables and the relative magnitudes of variances between the nested model and the original data.
24. View of one of the plaques from Clark Fork Vehicle Bridge. Presently located at the Bonner County Historical Museum in Sandpoint, Idaho. A plaque was attached at each end of the bridge. Only one remains. - Clark Fork Vehicle Bridge, Spanning Clark Fork River, serves Highway 200, Clark Fork, Bonner County, ID
Federal Register 2010, 2011, 2012, 2013, 2014
2013-03-21
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Project No. 12429-009] Clark Canyon...: 12429-009. c. Date Filed: January 28, 2013. d. Applicant: Northwest Power Services on behalf of Clark Canyon Hydro, LLC. e. Name of Project: Clark Canyon Dam Hydroelectric Project. f. Location: The Clark...
CFD Script for Rapid TPS Damage Assessment
NASA Technical Reports Server (NTRS)
McCloud, Peter
2013-01-01
This grid generation script creates unstructured CFD grids for rapid thermal protection system (TPS) damage aeroheating assessments. The existing manual solution is cumbersome, open to errors, and slow. The invention takes a large-scale geometry grid and its large-scale CFD solution, and creates an unstructured patch grid that models the TPS damage. The flow field boundary condition for the patch grid is then interpolated from the large-scale CFD solution. It speeds up the generation of CFD grids and solutions in the modeling of TPS damage and its aeroheating assessment. This process was successfully utilized during STS-134.
Near-Body Grid Adaption for Overset Grids
NASA Technical Reports Server (NTRS)
Buning, Pieter G.; Pulliam, Thomas H.
2016-01-01
A solution adaption capability for curvilinear near-body grids has been implemented in the OVERFLOW overset grid computational fluid dynamics code. The approach follows closely that used for the Cartesian off-body grids, but inserts refined grids in the computational space of original near-body grids. Refined curvilinear grids are generated using parametric cubic interpolation, with one-sided biasing based on curvature and stretching ratio of the original grid. Sensor functions, grid marking, and solution interpolation tasks are implemented in the same fashion as for off-body grids. A goal-oriented procedure, based on largest error first, is included for controlling growth rate and maximum size of the adapted grid system. The adaption process is almost entirely parallelized using MPI, resulting in a capability suitable for viscous, moving body simulations. Two- and three-dimensional examples are presented.
Moving overlapping grids with adaptive mesh refinement for high-speed reactive and non-reactive flow
NASA Astrophysics Data System (ADS)
Henshaw, William D.; Schwendeman, Donald W.
2006-08-01
We consider the solution of the reactive and non-reactive Euler equations on two-dimensional domains that evolve in time. The domains are discretized using moving overlapping grids. In a typical grid construction, boundary-fitted grids are used to represent moving boundaries, and these grids overlap with stationary background Cartesian grids. Block-structured adaptive mesh refinement (AMR) is used to resolve fine-scale features in the flow such as shocks and detonations. Refinement grids are added to base-level grids according to an estimate of the error, and these refinement grids move with their corresponding base-level grids. The numerical approximation of the governing equations takes place in the parameter space of each component grid, which is defined by a mapping from (fixed) parameter space to (moving) physical space. The mapped equations are solved numerically using a second-order extension of Godunov's method. The stiff source term in the reactive case is handled using a Runge-Kutta error-control scheme. We consider cases when the boundaries move according to a prescribed function of time and when the boundaries of embedded bodies move according to the surface stress exerted by the fluid. In the latter case, the Newton-Euler equations describe the motion of the center of mass of each body and the rotation about it, and these equations are integrated numerically using a second-order predictor-corrector scheme. Numerical boundary conditions at slip walls are described, and numerical results are presented for both reactive and non-reactive flows that demonstrate the use and accuracy of the numerical approach.
Spatial uncertainty analysis: Propagation of interpolation errors in spatially distributed models
Phillips, D.L.; Marks, D.G.
1996-01-01
In simulation modelling, it is desirable to quantify model uncertainties and provide not only point estimates for output variables but confidence intervals as well. Spatially distributed physical and ecological process models are becoming widely used, with runs being made over a grid of points that represent the landscape. This requires input values at each grid point, which often have to be interpolated from irregularly scattered measurement sites, e.g., weather stations. Interpolation introduces spatially varying errors which propagate through the model. We extended established uncertainty analysis methods to a spatial domain for quantifying spatial patterns of input variable interpolation errors and how they propagate through a model to affect the uncertainty of the model output. We applied this to a model of potential evapotranspiration (PET) as a demonstration. We modelled PET for three time periods in 1990 as a function of temperature, humidity, and wind on a 10-km grid across the U.S. portion of the Columbia River Basin. Temperature, humidity, and wind speed were interpolated using kriging from 700-1000 supporting data points. Kriging standard deviations (SD) were used to quantify the spatially varying interpolation uncertainties. For each of 5693 grid points, 100 Monte Carlo simulations were done, using the kriged values of temperature, humidity, and wind, plus random error terms determined by the kriging SDs and the correlations of interpolation errors among the three variables. For the spring season example, kriging SDs averaged 2.6 °C for temperature, 8.7% for relative humidity, and 0.38 m s-1 for wind. The resultant PET estimates had coefficients of variation (CVs) ranging from 14% to 27% for the 10-km grid cells. Maps of PET means and CVs showed the spatial patterns of PET with a measure of its uncertainty due to interpolation of the input variables.
This methodology should be applicable to a variety of spatially distributed models using interpolated inputs.
Effect of grid resolution on large eddy simulation of wall-bounded turbulence
NASA Astrophysics Data System (ADS)
Rezaeiravesh, S.; Liefvendahl, M.
2018-05-01
The effect of grid resolution on a large eddy simulation (LES) of a wall-bounded turbulent flow is investigated. A channel flow simulation campaign involving a systematic variation of the streamwise (Δx) and spanwise (Δz) grid resolution is used for this purpose. The main friction-velocity-based Reynolds number investigated is 300. Near the walls, the grid cell size is determined by the frictional scaling (Δx+ and Δz+), and the cells are strongly anisotropic, with the first Δy+ ≈ 1, thus aiming for wall-resolved LES. Results are compared to direct numerical simulations, and several quality measures are investigated, including the error in the predicted mean friction velocity and the error in cross-channel profiles of flow statistics. To reduce the total number of channel flow simulations, techniques from the framework of uncertainty quantification are employed. In particular, a generalized polynomial chaos expansion (gPCE) is used to create metamodels for the errors over the allowed parameter ranges. The differing behavior of the different quality measures is demonstrated and analyzed. It is shown that friction velocity and profiles of the velocity and Reynolds stress tensor are most sensitive to Δz+, while the error in the turbulent kinetic energy is mostly influenced by Δx+. Recommendations for grid resolution requirements are given, together with the quantification of the resulting predictive accuracy. The sensitivity of the results to the subgrid-scale (SGS) model and varying Reynolds number is also investigated. All simulations are carried out with the second-order accurate finite-volume-based solver OpenFOAM. It is shown that the choice of numerical scheme for the convective term significantly influences the error portraits. It is emphasized that the proposed methodology, involving the gPCE, can be applied to other modeling approaches, i.e., other numerical methods and the choice of SGS model.
Analyzing Hydraulic Conductivity Sampling Schemes in an Idealized Meandering Stream Model
NASA Astrophysics Data System (ADS)
Stonedahl, S. H.; Stonedahl, F.
2017-12-01
Hydraulic conductivity (K), which can vary by orders of magnitude within a stream reach, is an important parameter affecting the flow of water through sediments under streams. Measuring heterogeneous K distributions in the field is limited by time and resources. This study investigates hypothetical sampling practices within a modeling framework on a highly idealized meandering stream. We generated three sets of 100 hydraulic conductivity grids containing two sands with connectivity values of 0.02, 0.08, and 0.32. We investigated systems with twice as much fast (K=0.1 cm/s) sand as slow sand (K=0.01 cm/s) and the reverse ratio on the same grids. The K values did not vary with depth. For these 600 cases, we calculated the homogeneous K value, Keq, that would yield the same flux into the sediments as the corresponding heterogeneous grid. We then investigated sampling schemes with six weighted probability distributions derived from the homogeneous case: uniform, flow-paths, velocity, in-stream, flux-in, and flux-out. For each grid, we selected locations from these distributions and compared the arithmetic, geometric, and harmonic means of these lists to the corresponding Keq using the root-mean-square deviation. We found that arithmetic averaging of samples outperformed geometric or harmonic means for all sampling schemes. Of the sampling schemes, flux-in (sampling inside the stream in an inward flux-weighted manner) yielded the least error and flux-out yielded the most error. All three sampling schemes outside of the stream yielded very similar results. Grids with lower connectivity values (fewer and larger clusters) showed the most sensitivity to the choice of sampling scheme, and thus improved the most with flux-in sampling. We also explored the relationship between the number of samples taken and the resulting error.
Increasing the number of sampling points reduced error for the arithmetic mean with diminishing returns, but did not substantially reduce error associated with geometric and harmonic means.
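The comparison of averaging rules against an effective conductivity can be reproduced in miniature. The two K values are those of the study's sands; the sample list and the K_eq value below are hypothetical.

```python
import numpy as np

# Hypothetical sample list drawn from a two-sand grid:
# fast sand K = 0.1 cm/s, slow sand K = 0.01 cm/s (values from the study).
samples = np.array([0.1, 0.1, 0.01, 0.1, 0.01, 0.1])

arith = samples.mean()
geo   = np.exp(np.log(samples).mean())
harm  = len(samples) / np.sum(1.0/samples)

# Compare each candidate average to an (assumed) effective K_eq
K_eq = 0.06
err = {name: abs(val - K_eq) for name, val in
       [("arithmetic", arith), ("geometric", geo), ("harmonic", harm)]}
print(err)
```

For flux-matching effective conductivities of this kind, the arithmetic mean tends to sit closest, mirroring the study's finding.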
Mapping Error in Southern Ocean Transport Computed from Satellite Altimetry and Argo
NASA Astrophysics Data System (ADS)
Kosempa, M.; Chambers, D. P.
2016-02-01
Argo profiling floats have afforded basin-scale coverage of the Southern Ocean since 2005. When density estimates from Argo are combined with surface geostrophic currents derived from satellite altimetry, one can estimate integrated geostrophic transport above 2000 dbar [e.g., Kosempa and Chambers, JGR, 2014]. However, the interpolation techniques relied upon to generate mapped data from Argo and altimetry will impart a mapping error. We quantify this mapping error by sampling the high-resolution Southern Ocean State Estimate (SOSE) at the locations of Argo floats and Jason-1 and Jason-2 altimeter ground tracks, then creating gridded products using the same optimal interpolation algorithms used for the Argo/altimetry gridded products. We combine these surface and subsurface grids to compare the sampled-then-interpolated transport grids to those from the original SOSE data in an effort to quantify the uncertainty in volume transport integrated across the Antarctic Circumpolar Current (ACC). This uncertainty is then used to answer two fundamental questions: 1) What is the minimum linear trend that can be observed in ACC transport given the present length of the instrument record? 2) How long must the instrument record be to observe a trend with an accuracy of 0.1 Sv/year?
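Question 1) amounts to asking when a linear trend exceeds its sampling uncertainty. A minimal sketch, assuming white mapping noise and the textbook standard error of an ordinary-least-squares slope (all numbers illustrative, not the study's values):

```python
import math

# Hypothetical setup: 1.0 Sv mapping uncertainty on monthly transport
# estimates over an 11-year record.
sigma = 1.0        # Sv, per-sample uncertainty (assumed white)
n = 132            # monthly samples
dt = 1.0 / 12.0    # years between samples

# Standard error of an OLS slope for evenly spaced samples:
# se = sigma * sqrt(12 / (n*(n^2-1))) / dt
se_trend = sigma * math.sqrt(12.0 / (n * (n**2 - 1))) / dt
min_detectable = 2.0 * se_trend   # ~95% detection threshold, Sv/year
print(round(min_detectable, 4))
```

Inverting the same formula for a target accuracy (e.g. 0.1 Sv/year) answers question 2): solve for the record length n at which 2·se_trend drops below the target.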
On NUFFT-based gridding for non-Cartesian MRI
NASA Astrophysics Data System (ADS)
Fessler, Jeffrey A.
2007-10-01
For MRI with non-Cartesian sampling, the conventional approach to reconstructing images is to use the gridding method with a Kaiser-Bessel (KB) interpolation kernel. Recently, Sha et al. [L. Sha, H. Guo, A.W. Song, An improved gridding method for spiral MRI using nonuniform fast Fourier transform, J. Magn. Reson. 162(2) (2003) 250-258] proposed an alternative method based on a nonuniform FFT (NUFFT) with least-squares (LS) design of the interpolation coefficients. They described this LS_NUFFT method as shift variant and reported that it yielded smaller reconstruction approximation errors than the conventional shift-invariant KB approach. This paper analyzes the LS_NUFFT approach in detail. We show that when one accounts for a certain linear phase factor, the core of the LS_NUFFT interpolator is in fact real and shift invariant. Furthermore, we find that the KB approach yields smaller errors than the original LS_NUFFT approach. We show that optimizing certain scaling factors can lead to a somewhat improved LS_NUFFT approach, but the high computation cost seems to outweigh the modest reduction in reconstruction error. We conclude that the standard KB approach, with appropriate parameters as described in the literature, remains the practical method of choice for gridding reconstruction in MRI.
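A minimal 1-D version of the conventional KB gridding step: each nonuniform sample is spread onto neighboring grid points with a Kaiser-Bessel kernel. The kernel width and beta below are typical choices, not values from the paper.

```python
import numpy as np

def kb_kernel(u, width=4.0, beta=9.0):
    """Kaiser-Bessel interpolation kernel, nonzero for |u| < width/2."""
    arg = 1.0 - (2.0*u/width)**2
    out = np.zeros_like(u, dtype=float)
    m = arg > 0
    out[m] = np.i0(beta*np.sqrt(arg[m])) / np.i0(beta)
    return out

def grid_1d(coords, values, n=64, width=4.0):
    """Spread each nonuniform sample onto nearby points of an n-point grid."""
    grid = np.zeros(n, dtype=complex)
    for x, v in zip(coords, values):
        lo = int(np.ceil(x - width/2))
        ks = np.arange(lo, lo + int(width) + 1)
        w = kb_kernel(ks.astype(float) - x)
        grid[ks % n] += v * w
    return grid

g = grid_1d(np.array([10.3, 31.7]), np.array([1.0+0j, 2.0+0j]))
print(abs(g).sum())
```

In an actual reconstruction this spreading is followed by an inverse FFT and division by the kernel's transform (deapodization); only the spreading step is shown here.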
78 FR 48315 - Drawbridge Operation Regulation; Lewis and Clark River, Astoria, OR
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-08
... Operation Regulation; Lewis and Clark River, Astoria, OR AGENCY: Coast Guard, DHS. ACTION: Notice of... operating schedule that governs the Lewis and Clark Bridge which crosses the Lewis and Clark River, mile 1.0... Transportation has requested that the Lewis and Clark Drawbridge, mile 1.0, remain in the closed position and not...
Grid Resolution Study over Operability Space for a Mach 1.7 Low Boom External Compression Inlet
NASA Technical Reports Server (NTRS)
Anderson, Bernhard H.
2014-01-01
This paper presents a statistical methodology whereby the probability limits associated with CFD grid resolution of inlet flow analysis can be determined, providing quantitative information on the distribution of that error over the specified operability range. The objective of this investigation is to quantify the effects of both random (accuracy) and systemic (biasing) errors associated with grid resolution in the analysis of the Lockheed Martin Company (LMCO) N+2 Low Boom external compression supersonic inlet. The study covers the entire operability space as defined previously by the High Speed Civil Transport (HSCT) High Speed Research (HSR) program goals. The probability limits in terms of a 95.0% confidence interval on the analysis data were evaluated for four ARP1420 inlet metrics, namely (1) total pressure recovery (PFAIP), (2) radial hub distortion (DPH/P), (3) radial tip distortion (DPT/P), and (4) circumferential distortion (DPC/P). In general, the resulting +/-0.95 delta Y interval was unacceptably large in comparison to the stated goals of the HSCT program. Therefore, the conclusion was reached that the "standard grid" size was insufficient for this type of analysis. However, in examining the statistical data, it was determined that the CFD analysis results at the outer fringes of the operability space were the determining factor in the measure of statistical uncertainty. Adequate grids are grids that are free of biasing (systemic) errors and exhibit low random (precision) errors in comparison to their operability goals. In order to be 100% certain that the operability goals have indeed been achieved for each of the inlet metrics, the Y +/- 0.95 delta Y limit must fall inside the stated operability goals. For example, if the operability goal for DPC/P circumferential distortion is <=0.06, then the forecast Y for DPC/P plus the 95% confidence interval on DPC/P, i.e. +/-0.95 delta Y, must be less than or equal to 0.06.
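The pass/fail criterion described above (forecast Y plus the 95% half-interval must stay inside the goal) can be sketched with a normal approximation; the DPC/P sample values below are invented for illustration.

```python
import math

def meets_goal(samples, goal):
    """True if mean + 95% half-interval stays at or below the goal
    (normal approximation for the confidence interval)."""
    n = len(samples)
    mean = sum(samples) / n
    sd = math.sqrt(sum((s - mean)**2 for s in samples) / (n - 1))
    half = 1.96 * sd / math.sqrt(n)
    return mean + half <= goal

dpc = [0.052, 0.055, 0.050, 0.054, 0.053]   # hypothetical DPC/P analyses
print(meets_goal(dpc, 0.06))
```

A tighter goal can fail even when every individual sample passes, which is exactly the distinction the Y +/- 0.95 delta Y criterion enforces.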
Kenneth B. Clark in the patterns of American culture.
Keppel, Ben
2002-01-01
Kenneth B. Clark is best remembered as the social scientist cited by the U.S. Supreme Court in footnote 11 of its decision in Brown v. Board of Education in 1954. His presence in that decision came to symbolize the role that social science could play in changing social policy and public attitudes. As an African American social scientist who was prominent during a time of great turmoil over racial issues in the United States, Clark also became a "participant-symbol" in America's discussion of race. Clark contributed to this discussion in the three books he wrote for the general public: Prejudice and Your Child (Clark, 1955), Dark Ghetto (Clark, 1965), and Pathos of Power (Clark, 1974). In this article, the author discusses how these works document Clark's growing pessimism about the prospects for improving race relations. In addition, Clark's place in contemporary American debates about Brown v. Board of Education and the persistence of racial inequality is considered.
Covariance analysis of the airborne laser ranging system
NASA Technical Reports Server (NTRS)
Englar, T. S., Jr.; Hammond, C. L.; Gibbs, B. P.
1981-01-01
The requirements and limitations of employing an airborne laser ranging system for detecting crustal shifts of the Earth within centimeters over a region of approximately 200 by 400 km are presented. The system consists of an aircraft which flies over a grid of ground deployed retroreflectors, making six passes over the grid at two different altitudes. The retroreflector baseline errors are assumed to result from measurement noise, a priori errors on the aircraft and retroreflector positions, tropospheric refraction, and sensor biases.
NASA Astrophysics Data System (ADS)
Mizukami, N.; Smith, M. B.
2010-12-01
It is common for the error characteristics of long-term precipitation data to change over time due to various factors such as gauge relocation and changes in data processing methods. The temporal consistency of precipitation data error characteristics is as important as data accuracy itself for hydrologic model calibration and subsequent use of the calibrated model for streamflow prediction. In mountainous areas, the generation of precipitation grids relies on sparse gauge networks, the makeup of which often varies over time. This causes a change in error characteristics of the long-term precipitation data record. We will discuss the diagnostic analysis of the consistency of gridded precipitation time series and illustrate the adverse effect of inconsistent precipitation data on a hydrologic model simulation. We used hourly 4 km gridded precipitation time series over a mountainous basin in the Sierra Nevada Mountains of California from October 1988 through September 2006. The basin is part of the broader study area that served as the focus of the second phase of the Distributed Model Intercomparison Project (DMIP-2), organized by the U.S. National Weather Service (NWS) of the National Oceanographic and Atmospheric Administration (NOAA). To check the consistency of the gridded precipitation time series, double mass analysis was performed using single-pixel and basin mean areal precipitation (MAP) values derived from gridded DMIP-2 and Parameter-Elevation Regressions on Independent Slopes Model (PRISM) precipitation data. The analysis leads to the conclusion that over the entire study time period, a clear change in error characteristics in the DMIP-2 data occurred in the beginning of 2003. This matches the timing of one of the major gauge network changes. The inconsistency of two MAP time series computed from the gridded precipitation fields over two elevation zones was corrected by adjusting hourly values based on the double mass analysis.
We show that model simulations using the adjusted MAP data produce improved stream flow compared to simulations using the inconsistent MAP input data.
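The double mass correction can be illustrated on synthetic series: a slope break in cumulative test precipitation plotted against a cumulative reference marks the inconsistency, and the later period is rescaled to the earlier slope. The break location and bias factor below are invented.

```python
import numpy as np

# Synthetic precipitation: the test series matches the reference until a
# deliberate 20% low bias is introduced at index 60 (a stand-in for a
# gauge-network change).
rng = np.random.default_rng(0)
ref = rng.uniform(0, 5, 120)
test = ref.copy()
test[60:] *= 0.8

c_ref, c_test = np.cumsum(ref), np.cumsum(test)

# Slopes of the double-mass curve before and after the suspected break
s1 = (c_test[59] - c_test[0]) / (c_ref[59] - c_ref[0])
s2 = (c_test[-1] - c_test[60]) / (c_ref[-1] - c_ref[60])

# Adjust the later period so its slope matches the earlier one
adjusted = test.copy()
adjusted[60:] *= s1 / s2
print(round(s1, 3), round(s2, 3))
```

In practice the break index is found by inspecting the double-mass curve (or testing candidate dates), not assumed as it is here.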
NASA Astrophysics Data System (ADS)
Meyer, B.; Chulliat, A.; Saltus, R.
2017-12-01
The Earth Magnetic Anomaly Grid at 2 arc min resolution version 3, EMAG2v3, combines marine and airborne trackline observations, satellite data, and magnetic observatory data to map the location, intensity, and extent of lithospheric magnetic anomalies. EMAG2v3 includes over 50 million new data points added to NCEI's Geophysical Database System (GEODAS) in recent years. The new grid relies only on observed data, and does not utilize a priori geologic structure or ocean-age information. Comparing this grid to other global magnetic anomaly compilations (e.g., EMAG2 and WDMAM), we can see that the inclusion of a priori ocean-age patterns forces an artificial linear pattern to the grid; the data-only approach allows for greater complexity in representing the evolution along oceanic spreading ridges and continental margins. EMAG2v3 also makes use of the satellite-derived lithospheric field model MF7 in order to accurately represent anomalies with wavelengths greater than 300 km and to create smooth grid merging boundaries. The heterogeneous distribution of errors in the observations used in compiling the EMAG2v3 was explored, and is reported in the final distributed grid. This grid is delivered at both 4 km continuous altitude above WGS84, as well as at sea level for all oceanic and coastal regions.
Multi-off-grid methods in multi-step integration of ordinary differential equations
NASA Technical Reports Server (NTRS)
Beaudet, P. R.
1974-01-01
Description of methods of solving first- and second-order systems of differential equations in which all derivatives are evaluated at off-grid locations in order to circumvent the Dahlquist stability limitation on the order of on-grid methods. The proposed multi-off-grid methods require off-grid state predictors for the evaluation of the n derivatives at each step. Progressing forward in time, the off-grid states are predicted using a linear combination of back on-grid state values and off-grid derivative evaluations. A comparison is made between the proposed multi-off-grid methods and the corresponding Adams and Cowell on-grid integration techniques in integrating systems of ordinary differential equations, showing a significant reduction in the error at larger step sizes in the case of the multi-off-grid integrator.
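The benefit of off-grid derivative evaluation can be seen even in a one-step analogue: the explicit midpoint rule samples the derivative at the off-grid point t + h/2 and gains an order of accuracy over on-grid forward Euler. (This is a textbook illustration, not the paper's multi-off-grid multistep scheme.)

```python
import math

def euler(f, t, y, h):
    """On-grid: derivative evaluated only at the grid point t."""
    return y + h*f(t, y)

def midpoint(f, t, y, h):
    """Off-grid: derivative evaluated at t + h/2 via a predicted state."""
    k = f(t + 0.5*h, y + 0.5*h*f(t, y))
    return y + h*k

f = lambda t, y: y          # y' = y, exact solution e^t
y_e = y_m = 1.0
h, n = 0.1, 10
for i in range(n):
    y_e = euler(f, i*h, y_e, h)
    y_m = midpoint(f, i*h, y_m, h)
exact = math.exp(1.0)
print(abs(y_e - exact), abs(y_m - exact))
```

The off-grid variant's error shrinks as O(h^2) rather than O(h), the same kind of accuracy gain the multi-off-grid multistep methods exploit at larger step sizes.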
The first Australian gravimetric quasigeoid model with location-specific uncertainty estimates
NASA Astrophysics Data System (ADS)
Featherstone, W. E.; McCubbine, J. C.; Brown, N. J.; Claessens, S. J.; Filmer, M. S.; Kirby, J. F.
2018-02-01
We describe the computation of the first Australian quasigeoid model to include error estimates as a function of location that have been propagated from uncertainties in the EGM2008 global model, land and altimeter-derived gravity anomalies and terrain corrections. The model has been extended to include Australia's offshore territories and maritime boundaries using newer datasets comprising an additional ~280,000 land gravity observations, a newer altimeter-derived marine gravity anomaly grid, and terrain corrections at 1″ × 1″ resolution. The error propagation uses a remove-restore approach, where the EGM2008 quasigeoid and gravity anomaly error grids are augmented by errors propagated through a modified Stokes integral from the errors in the altimeter gravity anomalies, land gravity observations and terrain corrections. The gravimetric quasigeoid errors (one sigma) are 50-60 mm across most of the Australian landmass, increasing to ~100 mm in regions of steep horizontal gravity gradients or the mountains, and are commensurate with external estimates.
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-29
... Classification, Clark County, NV AGENCY: Bureau of Land Management, Interior. ACTION: Notice of Realty Action..., approximately 303.66 acres of public land in Clark County, Nevada. Clark County proposes to use the land for a... Executive Order No. 6910, the following described public land in Clark County, Nevada, has been examined and...
NASA Astrophysics Data System (ADS)
Planche, C.; Flossmann, A. I.; Wobrock, W.
2009-04-01
A 3D cloud model with detailed microphysics for ice, water and aerosol particles (AP) is used to study the role of AP on the evolution of summertime convective mixed phase clouds and the subsequent precipitation. The model couples the dynamics of the NCAR Clark-Hall cloud scale model (Clark et al., 1996) with the detailed scavenging model (DESCAM) of Flossmann and Pruppacher (1988) and the ice phase module of Leroy et al. (2007). The microphysics follows the evolution of AP, drop, and ice crystal spectra, each with 39 bins. Aerosol mass in drops and ice crystals is also predicted by two distribution functions to close the aerosol budget. The simulated cases are compared with radar observations over the northern Vosges mountains and the Rhine valley which were performed on 12 and 13 August 2007 during the COPS field campaign. Using a 3D grid resolution of 250 m, our model, called DESCAM-3D, is able to simulate very well the dynamical, cloud and precipitation features observed for the two different cloud systems. The high horizontal grid resolution provides new elements for the understanding of the formation of orographic convection. In addition, the fine numerical scale compares well with the highly resolved radar observations given by the LaMP X-band radar and Poldirad. The prediction of the liquid and ice hydrometeor spectra allows a detailed calculation of the cloud radar reflectivity. Sensitivity studies performed using different mass-diameter relationships for ice crystals demonstrate the role of crystal habits on the simulated reflectivities. In order to better understand the role of AP on cloud evolution and precipitation formation, several sensitivity studies were performed by modifying not only the aerosol number concentration but also their physico-chemical properties.
The numerical results show a strong influence of the aerosol number concentration on the precipitation intensity but no effect of the aerosol particle solubility on the rain formation can be found.
Prevention of Unwanted Free-Declaration of Static Obstacles in Probability Occupancy Grids
NASA Astrophysics Data System (ADS)
Krause, Stefan; Scholz, M.; Hohmann, R.
2017-10-01
Obstacle detection and avoidance are major research fields in unmanned aviation. Map-based obstacle detection approaches often use discrete world representations such as probabilistic grid maps to fuse incremental environment data from different views or sensors into a comprehensive representation. The integration of continuous measurements into a discrete representation can result in rounding errors which, in turn, lead to differences between the artificial model and the real environment. The cause of these deviations is a low spatial resolution of the world representation in comparison to the sensor data used. Differences between the artificial representations used for path planning or obstacle avoidance and the real world can lead to unexpected behavior, up to collisions with unmapped obstacles. This paper presents three approaches to the treatment of errors that can occur during the integration of continuous laser measurements into a discrete probabilistic grid. Further, the quality of the error prevention and the processing performance are compared using real sensor data.
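One common treatment of such integration errors is conservative cell marking: rather than rounding a continuous laser return into a single cell, mark every cell touched by the measurement's uncertainty footprint. A sketch only; the cell size, footprint radius, and bounding-box approximation are assumptions, not the paper's three approaches.

```python
import math

# Assumed grid resolution and sensor-noise footprint (metres)
CELL = 0.5
RADIUS = 0.3

def occupied_cells(x, y):
    """Cells overlapped by the axis-aligned bounding box of the
    uncertainty disc around a laser hit (a conservative over-approximation)."""
    lo_i = math.floor((x - RADIUS) / CELL)
    hi_i = math.floor((x + RADIUS) / CELL)
    lo_j = math.floor((y - RADIUS) / CELL)
    hi_j = math.floor((y + RADIUS) / CELL)
    return {(i, j) for i in range(lo_i, hi_i + 1)
                   for j in range(lo_j, hi_j + 1)}

print(sorted(occupied_cells(1.24, 0.02)))
```

Marking more cells than strictly necessary trades a small loss of free space for a guarantee that a rounding error never frees a cell that a real obstacle occupies.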
Modern Exploration of the Lewis and Clark Expedition
NASA Technical Reports Server (NTRS)
2006-01-01
The Lewis and Clark Geosystem is an online collection of private, state, local, and Federal data resources associated with the geography of the Lewis and Clark Expedition. Data were compiled from key partners including NASA's Stennis Space Center, the U.S. Army Corps of Engineers, the U.S. Fish and Wildlife Service, the U.S. Geological Survey (USGS), the University of Montana, the U.S. Department of Agriculture Forest Service, and from a collection of Lewis and Clark scholars. It combines modern views of the landscape with historical aerial photography, cartography, and other geographical data resources and historical sources, including: The Journals of the Lewis and Clark Expedition, the Academy of Natural Science's Lewis and Clark Herbarium, high-resolution copies of the American Philosophical Society's primary-source Lewis and Clark Journals, the Library of Congress Lewis and Clark cartography collection, as well as artifacts from the Smithsonian Institution and other sources.
Clinical assessment of the accuracy of blood glucose measurement devices.
Pfützner, Andreas; Mitri, Michael; Musholt, Petra B; Sachsenheimer, Daniela; Borchert, Marcus; Yap, Andrew; Forst, Thomas
2012-04-01
Blood glucose meters for patient self-measurement need to comply with the accuracy standards of the ISO 15197 guideline. We investigated the accuracy of the two new blood glucose meters BG*Star and iBG*Star (Sanofi-Aventis) in comparison to four other competitive devices (Accu-Chek Aviva, Roche Diagnostics; FreeStyle Freedom Lite, Abbott Medisense; Contour, Bayer; OneTouch Ultra 2, Lifescan) at different blood glucose ranges in a clinical setting with healthy subjects and patients with type 1 and type 2 diabetes. BGStar and iBGStar employ dynamic electrochemistry, which is supposed to result in highly accurate results. The study was performed on 106 participants (53 female, 53 male, age (mean ± SD): 46 ± 16 years, type 1: 32 patients, type 2: 34 patients, and 40 healthy subjects). Two devices from each type and strips from two different production lots were used for glucose assessment (∼200 readings/meter). Spontaneous glucose assessments and glucose or insulin interventions under medical supervision were applied to perform measurements in the different glucose ranges in accordance with the ISO 15197 requirements. Sample values <50 mg/dL and >400 mg/dL were prepared by laboratory manipulations. The YSI glucose analyzer (glucose oxidase method) served as the standard reference method, which may be considered a limitation in light of glucose hexokinase-based meters. For all devices, there was a very close correlation between the glucose results and the YSI reference method results. The correlation coefficients were r = 0.995 for BGStar and r = 0.992 for iBGStar (Aviva: 0.995, Freedom Lite: 0.990, Contour: 0.993, Ultra 2: 0.990). Error-grid analysis according to Parkes and Clarke revealed 100% of the readings to be within the clinically acceptable areas (Clarke: A + B) for BG*Star (100 + 0), Aviva (97 + 3), and Contour (97 + 3), and 99.5% for iBG*Star (97.5 + 2), Freedom Lite (98 + 1.5), and Ultra 2 (97.5 + 2).
This study demonstrated the very high accuracy of BG*Star, iBG*Star, and the competitive blood glucose meters in a clinical setting.
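For reference, the ISO 15197 (2003 edition) system-accuracy criterion the meters were tested against can be written in a few lines. The full Clarke and Parkes error grids have more elaborate zone boundaries; this is only the ISO screen.

```python
def iso_pass(reference, reading):
    """ISO 15197:2003 accuracy criterion for a single reading (mg/dL):
    within +/-15 mg/dL of reference below 75 mg/dL,
    within +/-20% at or above 75 mg/dL."""
    if reference < 75:
        return abs(reading - reference) <= 15
    return abs(reading - reference) <= 0.20 * reference

# (reference, meter reading) pairs, invented for illustration
pairs = [(60, 70), (60, 80), (120, 100), (300, 370)]
print([iso_pass(r, m) for r, m in pairs])
```

The guideline requires at least 95% of readings to satisfy this criterion; the later 2013 edition tightened both the threshold and the cut-off glucose level.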
Grid Quality and Resolution Issues from the Drag Prediction Workshop Series
NASA Technical Reports Server (NTRS)
Mavriplis, Dimitri J.; Vassberg, John C.; Tinoco, Edward N.; Mani, Mori; Brodersen, Olaf P.; Eisfeld, Bernhard; Wahls, Richard A.; Morrison, Joseph H.; Zickuhr, Tom; Levy, David;
2008-01-01
The drag prediction workshop series (DPW), held over the last six years, and sponsored by the AIAA Applied Aerodynamics Committee, has been extremely useful in providing an assessment of the state-of-the-art in computationally based aerodynamic drag prediction. An emerging consensus from the three workshop series has been the identification of spatial discretization errors as a dominant error source in absolute as well as incremental drag prediction. This paper provides an overview of the collective experience from the workshop series regarding the effect of grid-related issues on overall drag prediction accuracy. Examples based on workshop results are used to illustrate the effect of grid resolution and grid quality on drag prediction, and grid convergence behavior is examined in detail. For fully attached flows, various accurate and successful workshop results are demonstrated, while anomalous behavior is identified for a number of cases involving substantial regions of separated flow. Based on collective workshop experiences, recommendations for improvements in mesh generation technology which have the potential to impact the state-of-the-art of aerodynamic drag prediction are given.
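Grid-convergence behavior of the kind examined in the workshops is typically quantified by Richardson extrapolation over a family of systematically refined meshes. A sketch with invented drag counts, not workshop data:

```python
import math

def richardson(f_fine, f_med, f_coarse, r=2.0):
    """Observed order of accuracy p and extrapolated zero-spacing value
    from three solutions on meshes with refinement ratio r."""
    p = math.log((f_coarse - f_med) / (f_med - f_fine)) / math.log(r)
    f_exact = f_fine + (f_fine - f_med) / (r**p - 1)
    return p, f_exact

# Drag counts on fine / medium / coarse meshes (illustrative values)
p, cd0 = richardson(270.5, 272.0, 275.0)
print(round(p, 3), round(cd0, 3))
```

An observed order p far from the scheme's nominal order is itself a warning sign, and is one of the anomalies the workshops reported for separated-flow cases.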
Wilde, M C; Boake, C; Sherer, M
2000-01-01
Final broken configuration errors on the Wechsler Adult Intelligence Scale-Revised (WAIS-R; Wechsler, 1981) Block Design subtest were examined in 50 adults with moderate to severe nonpenetrating traumatic brain injuries. Patients were divided into left (n = 15) and right hemisphere (n = 19) groups based on a history of unilateral craniotomy for treatment of an intracranial lesion and were compared to a group with diffuse or negative brain CT scan findings and no history of neurosurgery (n = 16). The percentage of final broken configuration errors was related to injury severity, Benton Visual Form Discrimination Test (VFD; Benton, Hamsher, Varney, & Spreen, 1983) total score and the number of VFD rotation and peripheral errors. The percentage of final broken configuration errors was higher in the patients with right craniotomies than in the left or no craniotomy groups, which did not differ. Broken configuration errors did not occur more frequently on designs without an embedded grid pattern. Right craniotomy patients did not show a greater percentage of broken configuration errors on nongrid designs as compared to grid designs.
Gundle, Kenneth R.; White, Jedediah K.; Conrad, Ernest U.; Ching, Randal P.
2017-01-01
Introduction: Surgical navigation systems are increasingly used to aid resection and reconstruction of osseous malignancies. In the process of implementing image-based surgical navigation systems, there are numerous opportunities for error that may impact surgical outcome. This study aimed to examine modifiable sources of error in an idealized scenario, when using a bidirectional infrared surgical navigation system. Materials and Methods: Accuracy and precision were assessed using a computerized-numerical-controlled (CNC) machined grid with known distances between indentations while varying: 1) the distance from the grid to the navigation camera (range 150 to 247cm), 2) the distance from the grid to the patient tracker device (range 20 to 40cm), and 3) whether the minimum or maximum number of bidirectional infrared markers were actively functioning. For each scenario, distances between grid points were measured at 10-mm increments between 10 and 120mm, with twelve measurements made at each distance. The accuracy outcome was the root mean square (RMS) error between the navigation system distance and the actual grid distance. To assess precision, four indentations were recorded six times for each scenario while also varying the angle of the navigation system pointer. The outcome for precision testing was the standard deviation of the distance between each measured point to the mean three-dimensional coordinate of the six points for each cluster. Results: Univariate and multiple linear regression revealed that as the distance from the navigation camera to the grid increased, the RMS error increased (p<0.001). The RMS error also increased when not all infrared markers were actively tracking (p=0.03), and as the measured distance increased (p<0.001). In a multivariate model, these factors accounted for 58% of the overall variance in the RMS error. 
Standard deviations in repeated measures also increased when not all infrared markers were active (p<0.001), and as the distance between navigation camera and physical space increased (p=0.005). Location of the patient tracker did not affect accuracy (p=0.36) or precision (p=0.97). Conclusion: In our model laboratory test environment, the infrared bidirectional navigation system was more accurate and precise when the distance from the navigation camera to the physical (working) space was minimized and all bidirectional markers were active. These findings may require alterations in operating room setup and software changes to improve the performance of this system. PMID:28694888
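The study's two outcome measures, RMS error for accuracy and cluster standard deviation for precision, can be written compactly. All measurements below are synthetic.

```python
import numpy as np

# Accuracy: RMS error between navigated and true (CNC-machined) distances
true_d = np.array([10., 20., 30., 40.])     # mm
meas_d = np.array([10.2, 19.7, 30.4, 39.8]) # mm
rms = np.sqrt(np.mean((meas_d - true_d)**2))

# Precision: six repeated digitizations of one landmark (x, y, z in mm);
# scatter of each point about the cluster's 3-D mean
pts = np.array([[1.0, 2.0, 3.0],
                [1.1, 2.1, 2.9],
                [0.9, 1.9, 3.1],
                [1.0, 2.2, 3.0],
                [1.1, 1.8, 3.0],
                [0.9, 2.0, 3.0]])
d = np.linalg.norm(pts - pts.mean(axis=0), axis=1)
precision = d.std()
print(round(float(rms), 4), round(float(precision), 4))
```

The two numbers answer different questions: RMS error reflects systematic plus random distance error, while the cluster scatter isolates repeatability alone.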
Improving Barotropic Tides by Two-way Nesting High and Low Resolution Domains
NASA Astrophysics Data System (ADS)
Jeon, C. H.; Buijsman, M. C.; Wallcraft, A. J.; Shriver, J. F.; Hogan, P. J.; Arbic, B. K.; Richman, J. G.
2017-12-01
In a realistically forced global ocean model, relatively large sea-surface-height root-mean-square (RMS) errors are observed in the North Atlantic near the Hudson Strait. These may be associated with large tidal resonances interacting with coastal bathymetry that are not correctly represented with a low resolution grid. This issue can be overcome by using high resolution grids, but at a high computational cost. In this paper we apply two-way nesting as an alternative solution. This approach applies high resolution to the area with large RMS errors and a lower resolution to the rest. It is expected to improve the tidal solution as well as reduce the computational cost. To minimize modification of the original source codes of the ocean circulation model (HYCOM), we apply the coupler OASIS3-MCT. This coupler is used to exchange barotropic pressures and velocity fields through its APIs (Application Programming Interface) between the parent and the child components. The developed two-way nesting framework has been validated with an idealized test case where the parent and the child domains have identical grid resolutions. The result of the idealized case shows very small RMS errors between the child and parent solutions. We plan to show results for a case with realistic tidal forcing in which the resolution of the child grid is three times that of the parent grid. The numerical results of this realistic case are compared to TPXO data.
Predicting mosaics and wildlife diversity resulting from fire disturbance to a forest ecosystem
NASA Astrophysics Data System (ADS)
Potter, Meredith W.; Kessell, Stephen R.
1980-05-01
A model for predicting community mosaics and wildlife diversity resulting from fire disturbance to a forest ecosystem is presented. It applies an algorithm that delineates the size and shape of each patch from grid-based input data and calculates standard diversity measures for the entire mosaic of community patches and their included animal species. The user can print these diversity calculations, maps of the current community-type-age-class mosaic, and maps of habitat utilization by each animal species. Furthermore, the user can print estimates of changes in each resulting from natural disturbance. Although data and resolution level independent, the model is demonstrated and tested with data from the Lewis and Clark National Forest in Montana.
New ghost-node method for linking different models with varied grid refinement
James, S.C.; Dickinson, J.E.; Mehl, S.W.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Eddebbarh, A.-A.
2006-01-01
A flexible, robust method for linking grids of locally refined ground-water flow models constructed with different numerical methods is needed to address a variety of hydrologic problems. This work outlines and tests a new ghost-node model-linking method, based on the iterative method of Steffen W. Mehl and Mary C. Hill (2002, Advances in Water Res., 25, p. 497-511; 2004, Advances in Water Res., 27, p. 899-912), for a refined "child" model that is contained within a larger and coarser "parent" model. The method is applicable to steady-state solutions for ground-water flow. Tests are presented for a homogeneous two-dimensional system that has matching grids (parent cells border an integer number of child cells) or nonmatching grids. The coupled grids are simulated by using the finite-difference and finite-element models MODFLOW and FEHM, respectively. The simulations require no alteration of the MODFLOW or FEHM models and are executed using a batch file on Windows operating systems. Results indicate that when the grids are matched spatially so that nodes and child-cell boundaries are aligned, the new coupling technique has error nearly equal to that when coupling two MODFLOW models. When the grids are nonmatching, model accuracy is slightly increased compared to that for matching-grid cases. Overall, results indicate that the ghost-node technique is a viable means to couple distinct models because the overall head and flow errors relative to the analytical solution are less than if only the regional coarse-grid model was used to simulate flow in the child model's domain.
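The core ghost-node idea can be sketched in 1-D: the child model's boundary head is interpolated from the surrounding parent-grid solution, and the two models then iterate until the shared interface stops changing. Interpolation here is linear; the geometry and head values are invented.

```python
def ghost_node_head(parent_heads, parent_x, x_ghost):
    """Linearly interpolate the parent solution at a ghost-node location."""
    for (x0, h0), (x1, h1) in zip(zip(parent_x, parent_heads),
                                  zip(parent_x[1:], parent_heads[1:])):
        if x0 <= x_ghost <= x1:
            t = (x_ghost - x0) / (x1 - x0)
            return (1 - t)*h0 + t*h1
    raise ValueError("ghost node outside parent grid")

parent_x = [0.0, 100.0, 200.0, 300.0]     # parent node locations (m)
parent_h = [50.0, 48.0, 44.0, 38.0]       # parent heads (m), invented
print(ghost_node_head(parent_h, parent_x, 150.0))
```

In the full method this interpolated head becomes the child model's boundary condition, the child's fluxes feed back to the parent, and the exchange repeats to convergence.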
Hazardous Waste Cleanup: Hyatt Clark Industries in Clark, New Jersey
The Former Hyatt Clark site was located at 3100 Raritan Road in Clark, New Jersey. The site was comprised of 32 acres of manufacturing areas, 32 acres of parking lots, and 23 acres of woodland. The plant originally manufactured hard-rubber products, such a
NASA Technical Reports Server (NTRS)
Phillips, J. R.
1996-01-01
In this paper we derive error bounds for a collocation-grid-projection scheme tuned for use in multilevel methods for solving boundary-element discretizations of potential integral equations. The grid-projection scheme is then combined with a precorrected-FFT style multilevel method for solving potential integral equations with 1/r and e^{ikr}/r kernels. A complexity analysis of this combined method shows that for homogeneous problems the method is O(n log n), nearly independent of the kernel. In addition, it is shown analytically and experimentally that for an inhomogeneity generated by a very finely discretized surface, the combined method slows to O(n^{4/3}). Finally, examples are given to show that the collocation-based grid-projection plus precorrected-FFT scheme is competitive with fast-multipole algorithms on realistic problems with 1/r kernels, but can be used over a range of spatial frequencies with only a small performance penalty.
77 FR 8890 - Clarks River National Wildlife Refuge, KY; Draft Comprehensive Conservation Plan and...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-02-15
...-FF04R02000] Clarks River National Wildlife Refuge, KY; Draft Comprehensive Conservation Plan and... availability of a draft comprehensive conservation plan and environmental assessment (Draft CCP/EA) for Clarks... (telephone). SUPPLEMENTARY INFORMATION: Introduction With this notice, we continue the CCP process for Clarks...
75 FR 42460 - Minor Boundary Revision at Lewis and Clark National Historical Park
Federal Register 2010, 2011, 2012, 2013, 2014
2010-07-21
... DEPARTMENT OF THE INTERIOR National Park Service Minor Boundary Revision at Lewis and Clark... Clark National Historical Park is modified to include an additional 106.74+/- acres of land identified..., Oregon, immediately adjacent to the southern boundary of the Sunset Beach portion of Lewis and Clark...
Efficient Solar Scene Wavefront Estimation with Reduced Systematic and RMS Errors: Summary
NASA Astrophysics Data System (ADS)
Anugu, N.; Garcia, P.
2016-04-01
Wavefront sensing for solar telescopes is commonly implemented with Shack-Hartmann sensors. Correlation algorithms are usually used to estimate the extended-scene Shack-Hartmann sub-aperture image shifts or slopes. The image shift is computed by correlating a reference sub-aperture image with the target distorted sub-aperture image. The pixel position where the maximum correlation is located gives the image shift in integer pixel coordinates. Sub-pixel precision image shifts are computed by applying a peak-finding algorithm to the correlation peak (Poyneer 2003; Löfdahl 2010). However, the peak-finding results are usually biased towards integer pixel positions; these errors are called systematic bias errors (Sjödahl 1994) and are caused by the low pixel sampling of the images. The amplitude of these errors depends on the type of correlation algorithm and the type of peak-finding algorithm used. To study the systematic errors in detail, solar sub-aperture synthetic images were constructed from a Swedish Solar Telescope solar granulation image. The performance of the cross-correlation algorithm in combination with different peak-finding algorithms was investigated. The studied peak-finding algorithms are: parabola (Poyneer 2003); quadratic polynomial (Löfdahl 2010); threshold centre of gravity (Bailey 2003); Gaussian (Nobach & Honkanen 2005); and pyramid (Bailey 2003). The systematic error study reveals that the pyramid fit is the most robust to pixel-locking effects. The RMS error analysis reveals that the threshold centre of gravity behaves better at low SNR, although its systematic errors are large. No single algorithm is best for both systematic and RMS error reduction. To overcome this problem, a new solution is proposed in which the image sampling is increased prior to the actual correlation matching.
The method is realized in two steps to improve its computational efficiency. In the first step, the cross-correlation is implemented on the original image spatial resolution grid (1 pixel). In the second step, the cross-correlation is performed on a sub-pixel grid, limiting the field of search to 4 × 4 pixels centred at the initial position delivered by the first step. The sub-pixel-grid region-of-interest images are generated with bi-cubic interpolation. Correlation matching on a sub-pixel grid was previously reported in electronic speckle photography (Sjödahl 1994); the technique is applied here to solar wavefront sensing. A large dynamic range and better measurement accuracy are achieved by combining original-pixel-grid correlation matching over a large field of view with sub-pixel interpolated-grid correlation matching within a small field of view. The results reveal that the proposed method outperforms all the peak-finding algorithms studied in the first approach: it reduces both the systematic error and the RMS error by a factor of 5 (i.e., 75% systematic error reduction) when 5-times-improved image sampling is used, at the expense of twice the computational cost. With the 5-times-improved image sampling, the wavefront accuracy is increased by a factor of 5. The proposed solution is strongly recommended for wavefront sensing in solar telescopes, particularly for measuring the large dynamic image shifts involved in open-loop adaptive optics. By choosing an appropriate increase of image sampling as a trade-off between computational speed and the desired sub-pixel image-shift accuracy, it can also be employed in closed-loop adaptive optics. The study is extended to three other classes of sub-aperture images (a point source; a laser guide star; a Galactic Center extended scene).
The results are planned for submission to the Optical Express journal.
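The parabola peak-finding step discussed above can be illustrated with a minimal sketch on a synthetic correlation surface; this is not the authors' implementation, just the standard three-point parabola vertex fit applied along each axis.

```python
import numpy as np

def subpixel_peak(corr):
    """Locate the correlation peak with sub-pixel precision by fitting
    a 1-D parabola through the maximum and its two neighbours along
    each axis (the 'parabola' estimator named in the abstract)."""
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)

    def parabola_offset(m, c, p):
        # vertex of the parabola through (-1, m), (0, c), (+1, p)
        denom = m - 2.0 * c + p
        return 0.0 if denom == 0 else 0.5 * (m - p) / denom

    dy = parabola_offset(corr[iy - 1, ix], corr[iy, ix], corr[iy + 1, ix])
    dx = parabola_offset(corr[iy, ix - 1], corr[iy, ix], corr[iy, ix + 1])
    return iy + dy, ix + dx

# synthetic correlation surface with a true peak at (12.3, 7.6)
y, x = np.mgrid[0:24, 0:16]
corr = np.exp(-((y - 12.3) ** 2 + (x - 7.6) ** 2) / 8.0)
print(subpixel_peak(corr))  # close to (12.3, 7.6), up to pixel-locking bias
```

The small residual bias of the fitted position toward the nearest integer pixel is exactly the systematic (pixel-locking) error the abstract analyses.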
SU-E-J-81: Beveled Needle Tip Detection Error in Ultrasound-Guided Prostate Brachytherapy.
Leu, S; Ruiz, B; Podder, T
2012-06-01
To quantify needle-tip detection errors in ultrasound images due to bevel-tip orientation in relation to location on the template grid. A transrectal ultrasound (TRUS) system (BK Medical) with a physical template grid and an 18-gauge bevel-tip (20-degree bevel angle) brachytherapy needle (Bard Medical, Covington, GA) was used. The TRUS was set at 6.5 MHz in a water phantom at 40°C, and measurements were taken at 50% and 100% TRUS gain. Needles were oriented with the bevel tip facing up (0 degrees) and inserted through the template grid holes. Reference needle depths were measured when the needle-tip image intensity was bright enough for consistent readings; a high-resolution digital vernier caliper was used to measure needle depth. The bevel-tip orientation was then changed to bevel down (by rotating 180 degrees), and the needle depth was adjusted by retracting until the needle-tip image intensity appeared similar to that at the 0-degree orientation. Clinically relevant locations on the template grid were considered for needle placement (1st to 9th row, columns 'a'-'f'). For 50% TRUS gain, bevel-tip detection errors ranged from 0.69±0.30 mm (1st row) to 3.23±0.22 mm (9th row) in column 'a' and from 0.78±0.71 mm (1st row) to 4.14±0.56 mm (9th row) in column 'D'. The corresponding errors for 100% TRUS gain were 0.57±0.25 mm to 5.24±0.36 mm and 0.84±0.30 mm to 4.2±0.20 mm in columns 'a' and 'D', respectively. These errors varied approximately linearly across the intermediate rows and columns, from smaller to larger with distance from the TRUS probe. No effect of gain (50% vs. 100%) was observed along column 'D', which was directly above the TRUS probe. The experimental results revealed that beveled needle-tip orientation can significantly affect the detection accuracy of the needle tips, on which seed delivery is based.
These errors may lead to considerable dosimetric deviations in prostate brachytherapy seed implantation. © 2012 American Association of Physicists in Medicine.
NASA Astrophysics Data System (ADS)
Ran, Youhua; Li, Xin; Jin, Rui; Kang, Jian; Cosh, Michael H.
2017-01-01
Monitoring and estimating grid-mean soil moisture is very important for assessing many hydrological, biological, and biogeochemical processes and for validating remotely sensed surface soil moisture products. Temporal stability analysis (TSA) is a valuable tool for identifying a small number of representative sampling points to estimate the grid-mean soil moisture content. This analysis was evaluated and improved using high-quality surface soil moisture data that were acquired by a wireless sensor network in a high-intensity irrigated agricultural landscape in an arid region of northwestern China. The performance of the TSA was limited in areas where the representative error was dominated by random events, such as irrigation events. This shortcoming can be effectively mitigated by using a stratified TSA (STSA) method, proposed in this paper. In addition, the following methods were proposed for rapidly and efficiently identifying representative sampling points when using TSA. (1) Instantaneous measurements can be used to identify representative sampling points to some extent; however, the error resulting from this method is significant when validating remotely sensed soil moisture products. Thus, additional representative sampling points should be considered to reduce this error. (2) The calibration period can be determined from the time span of the full range of the grid-mean soil moisture content during the monitoring period. (3) The representative error is sensitive to the number of calibration sampling points, especially when only a few representative sampling points are used. Multiple sampling points are recommended to reduce data loss and improve the likelihood of representativeness at two scales.
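The temporal stability analysis (TSA) ranking at the core of the abstract can be sketched on synthetic data as follows. The combined MRD/SDRD criterion used here is one common formulation from the TSA literature, not necessarily the exact criterion of this study; all data and variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
# synthetic network: 20 sampling points, 100 observation times
n_pts, n_t = 20, 100
grid_mean = 0.20 + 0.05 * np.sin(np.linspace(0, 6, n_t))      # grid-mean soil moisture
bias = rng.normal(0, 0.03, n_pts)[:, None]                     # persistent point offsets
theta = grid_mean + bias + rng.normal(0, 0.005, (n_pts, n_t))  # point observations

# relative difference of each point from the spatial mean at each time
spatial_mean = theta.mean(axis=0)
delta = (theta - spatial_mean) / spatial_mean
mrd = delta.mean(axis=1)            # mean relative difference per point
sdrd = delta.std(axis=1, ddof=1)    # temporal standard deviation of the difference

# the most time-stable representative point minimizes sqrt(MRD^2 + SDRD^2)
best = int(np.argmin(np.hypot(mrd, sdrd)))
print("representative point:", best)
```

Stratifying the domain first (the STSA idea proposed in the paper) would amount to running this ranking separately within each stratum, e.g. per irrigation unit.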
Ocean regional circulation model sensitivity to the resolution of lateral boundary conditions
NASA Astrophysics Data System (ADS)
Pham, Van Sy; Hwang, Jin Hwan
2017-04-01
Dynamical downscaling with nested regional oceanographic models is an effective approach for operational forecasting of coastal weather and for long-term climate projection of the ocean. Nesting procedures introduce unwanted errors into the dynamical downscaling because of differences in numerical grid sizes and updating steps, and such unavoidable errors restrict the application of Ocean Regional Circulation Models (ORCMs) in both short-term forecasts and long-term projections. The current work identifies the effects of errors induced by computational limitations during nesting procedures on the downscaled results of the ORCMs. The errors are quantitatively evaluated, for each error source and its characteristics, by the Big-Brother Experiment (BBE). The BBE separates the identified errors from each other and quantitatively assesses the uncertainties by employing the same model for both the nesting and the nested simulations. Here we focus on errors resulting from the two main aspects of nesting procedures: the difference in spatial grids and the temporal updating steps. After running the diverse cases of the BBE separately, a Taylor diagram was adopted to analyze the results and to suggest an optimization in terms of grid size, updating period, and domain size. Key words: lateral boundary condition, error, ocean regional circulation model, Big-Brother Experiment. Acknowledgement: This research was supported by grants from the Korean Ministry of Oceans and Fisheries entitled "Development of integrated estuarine management system" and a National Research Foundation of Korea (NRF) Grant (No. 2015R1A5A7037372) funded by MSIP of Korea. The authors thank the Integrated Research Institute of Construction and Environmental Engineering of Seoul National University for administrative support.
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-22
... Approval and Disapproval of Air Quality Implementation Plans; Nevada; Clark County; Stationary Source... Clark County, Nevada. DATES: Any comments on this proposal must arrive by September 7, 2012. ADDRESSES... regulations submitted for approval into the Clark County portion of the Nevada State Implementation Plan (SIP...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-08-23
... DEPARTMENT OF ENERGY Federal Energy Regulatory Commission [Project No. 12429-007] Clark Canyon... b. Project No.: 12429-007. c. Date Filed: May 31, 2012. d. Applicant: Clark Canyon Hydro, LLC . e. Name of Project: Clark Canyon Dam Hydroelectric Project. f. Location: When constructed, the project...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-10-18
... Approval and Disapproval of Air Quality Implementation Plans; Nevada; Clark County; Stationary Source... limited approval and limited disapproval of revisions to the Clark County portion of the applicable state... limited approval and limited disapproval action is to update the applicable SIP with current Clark County...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-20
... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-72,773] Clark Engineering Co... of Clark Engineering Co., Inc., including on-site leased workers of Kelly Services, Owosso, Michigan... from Qualified Staffing were employed on-site at the Owosso, Michigan location of Clark Engineering Co...
Performance Evaluation of Three Blood Glucose Monitoring Systems Using ISO 15197
Bedini, José Luis; Wallace, Jane F.; Pardo, Scott; Petruschke, Thorsten
2015-01-01
Background: Blood glucose monitoring is an essential component of diabetes management. Inaccurate blood glucose measurements can severely impact patients' health. This study evaluated the performance of 3 blood glucose monitoring systems (BGMS), Contour® Next USB, FreeStyle InsuLinx®, and OneTouch® Verio™ IQ, under routine hospital conditions. Methods: Venous blood samples (N = 236) obtained for routine laboratory procedures were collected at a Spanish hospital, and blood glucose (BG) concentrations were measured with each BGMS and with the available reference (hexokinase) method. Accuracy of the 3 BGMS was compared according to ISO 15197:2013 accuracy limit criteria, by mean absolute relative difference (MARD), consensus error grid (CEG) and surveillance error grid (SEG) analyses, and an insulin dosing error model. Results: All BGMS met the accuracy limit criteria defined by ISO 15197:2013. While all measurements of the 3 BGMS were within low-risk zones in both error grid analyses, the Contour Next USB showed significantly smaller MARDs relative to the reference values than the other 2 BGMS. Insulin dosing errors were lower for the Contour Next USB than for the other systems. Conclusions: All BGMS fulfilled the ISO 15197:2013 accuracy limit criteria and the CEG criterion. However, taking all analyses together, differences in performance of potential clinical relevance may be observed. The Contour Next USB had the lowest MARD values across the tested glucose range compared with the 2 other BGMS. The CEG and SEG analyses, as well as the calculated hypothetical bolus insulin dosing error, suggest a high accuracy of the Contour Next USB. PMID:26445813
Kretzschmar, A; Durand, E; Maisonnasse, A; Vallon, J; Le Conte, Y
2015-06-01
A new stratified sampling procedure is proposed to establish an accurate estimate of Varroa destructor populations on the sticky bottom boards of hives. It is based on spatial sampling theory, which recommends regular-grid stratification for spatially structured processes. Because the distribution of varroa mites on the sticky board is observed to be spatially structured, we designed a sampling scheme based on a regular grid with circles centered on each grid element. This new procedure is then compared with a former method using partially random sampling. Improvements in relative error are reported on the basis of a large sample of simulated sticky boards (n=20,000) that provides a complete range of spatial structures, from a random structure to a highly frame-driven structure. The improvement in the estimation of varroa mite numbers is then measured by the percentage of counts with an error greater than a given level. © The Authors 2015. Published by Oxford University Press on behalf of the Entomological Society of America. All rights reserved.
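The regular-grid stratification idea can be illustrated with a small sketch: one sampling position per grid cell, scaled up to the cell area. The synthetic board, cell size, and one-sample-per-cell rule are illustrative assumptions, not the authors' exact protocol.

```python
import numpy as np

rng = np.random.default_rng(2)

# synthetic sticky board: mite counts with a frame-driven spatial gradient
ny, nx = 40, 60
density = np.outer(np.linspace(0.2, 2.0, ny), np.ones(nx))  # structured, not random
board = rng.poisson(density)

def stratified_grid_estimate(board, cell=10):
    """Estimate the total mite count from one randomly chosen position per
    regular grid cell (regular-grid stratification)."""
    ny, nx = board.shape
    total = 0.0
    for y0 in range(0, ny, cell):
        for x0 in range(0, nx, cell):
            sub = board[y0:y0 + cell, x0:x0 + cell]
            iy, ix = rng.integers(sub.shape[0]), rng.integers(sub.shape[1])
            total += sub[iy, ix] * sub.size   # scale the sample up to the cell
    return total

est = stratified_grid_estimate(board)
print(est, board.sum())  # the stratified estimate tracks the true total
```

Because each cell contributes one sample, the gradient across the board is represented in every estimate, which is the advantage over partially random sampling on structured boards.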
Yucca Mountain: How Do Global and Federal Initiatives Impact Clark County's Nuclear Waste Program?
DOE Office of Scientific and Technical Information (OSTI.GOV)
Navis, I.; McGehee, B.
2008-07-01
Since 1987, Clark County has been designated by the U.S. Department of Energy (DOE) as an 'Affected Unit of Local Government' (AULG). The AULG designation is an acknowledgement by the federal government that activities associated with the Yucca Mountain proposal could result in considerable impacts on Clark County residents and the community as a whole. As an AULG, Clark County is authorized to identify 'any potential economic, social, public health and safety, and environmental impacts of a repository', 42 U.S.C. Section 10135(c)(1)(B)(i), under provisions of the Nuclear Waste Policy Act Amendments (NWPAA). Clark County's oversight program contains the key elements of (1) technical and scientific analysis, (2) transportation analysis, (3) impact assessment and monitoring, (4) policy and legislative analysis and monitoring, and (5) public outreach. Clark County has conducted numerous studies of potential impacts, many of which are summarized in Clark County's Impact Assessment Report, submitted to DOE and the President of the United States in February 2002. Given the unprecedented magnitude and duration of DOE's proposal, as well as the many unanswered questions about the transportation routes, number of shipments, and the modal mix that will ultimately be used, impacts to public health, safety, and security, as well as socioeconomic impacts, can only be estimated. In order to refine these estimates, the Clark County Comprehensive Planning Department's Nuclear Waste Division updates, assesses, and monitors impacts on a regular basis. Clark County's impact assessment program covers not only unincorporated Clark County but all five jurisdictions of Las Vegas, North Las Vegas, Henderson, Mesquite, and Boulder City, as well as tribal jurisdictions that fall within Clark County's geographic boundary.
National and global focus on nuclear power and nuclear waste could have significant impact on the Yucca Mountain Program and, therefore, on Clark County's oversight of that program. (authors)
NASA Astrophysics Data System (ADS)
Shen, S. S.
2015-12-01
This presentation describes the detection of interdecadal climate signals in newly reconstructed precipitation data from 1850 to the present. Examples cover precipitation signatures of the East Asian Monsoon (EAM), the Pacific Decadal Oscillation (PDO), and the Atlantic Multidecadal Oscillation (AMO). The new reconstruction dataset is an enhanced edition of a suite of global precipitation products reconstructed by Spectral Optimal Gridding of Precipitation Version 1.0 (SOGP 1.0). The maximum temporal coverage is 1850-present and the spatial coverage is quasi-global (75°S-75°N). This enhanced version has three different temporal resolutions (5-day, monthly, and annual) and two different spatial resolutions (2.5 deg and 5.0 deg), as well as a friendly graphical user interface (GUI). SOGP uses a multivariate regression method based on an empirical orthogonal function (EOF) expansion. The Global Precipitation Climatology Project (GPCP) precipitation data from 1981-2010 are used to calculate the EOFs. The Global Historical Climatology Network (GHCN) gridded data are used to calculate the regression coefficients for the reconstructions. The sampling errors of the reconstruction are analyzed according to the number of EOF modes used in the reconstruction. Our reconstructed 1900-2011 time series of the global average annual precipitation shows a 0.024 (mm/day)/100a trend, which is very close to the trend derived from the mean of 25 models of the CMIP5 (Coupled Model Intercomparison Project Phase 5). Our reconstruction has been validated against GPCP data after 1979. It also successfully displays the 1877 El Niño (see the attached figure), which serves as a validation before 1900. Our precipitation products are publicly available online, including digital data, precipitation animations, computer codes, readme files, and the user manual.
This work is a joint effort of San Diego State University (Sam Shen, Gregori Clarke, Christian Junjinger, Nancy Tafolla, Barbara Sperberg, and Melanie Thorn), UCLA (Yongkang Xue), and University of Maryland (Tom Smith and Phil Arkin) and supported in part by the U.S. National Science Foundation (Awards No. AGS-1419256 and AGS-1015957).
Isotopic Dependence of GCR Fluence behind Shielding
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Wilson, John W.; Saganti, Premkumar; Kim, Myung-Hee Y.; Cleghorn, Timothy; Zeitlin, Cary; Tripathi, Ram K.
2006-01-01
In this paper we consider the effects of the isotopic composition of the primary galactic cosmic rays (GCR), the nuclear fragmentation cross-sections, and the isotopic grid on the solution of transport models used for shielding studies. Satellite measurements are used to describe the isotopic composition of the GCR. For the nuclear interaction database and transport solution, we use the quantum multiple-scattering theory of nuclear fragmentation (QMSFRG) and the high-charge-and-energy (HZETRN) transport code, respectively. The QMSFRG model is shown to accurately describe existing fragmentation data, including a proper description of the odd-even effects as a function of the isospin of the projectile nucleus. The principal finding of this study is that large errors (±100%) occur in the mass-fluence spectra when comparing transport models that use a complete isotopic grid (approx. 170 ions) with ones that use a reduced isotopic grid, for example the 59-ion grid used in the HZETRN code in the past; less significant errors (<±20%) occur in the elemental-fluence spectra. Because a complete isotopic grid is readily handled on small computer workstations and is needed for several applications studying GCR propagation and scattering, it is recommended that complete grids be used for future GCR studies.
NASA Astrophysics Data System (ADS)
Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi
2017-01-01
Workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled applications of remote sensing quantitative retrieval. Workflow hides the low-level implementation details of the Grid and hence enables users to focus on higher levels of the application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction, and implementation of large-scale, complicated applications of remote sensing science. Validation of workflows is important in order to support large-scale, sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To investigate the semantic correctness of user-defined workflows, in this paper we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and its metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing it with an ontology, and we construct the domain ontology with Protégé. Through an experimental study, we verify the validity of this method in two ways, namely data-source consistency error validation and parameter-matching error validation.
Blood glucose concentrations of arm and finger during dynamic glucose conditions.
Szuts, Ete Z; Lock, J Paul; Malomo, Kenneth J; Anagnostopoulos, Althea
2002-01-01
We set out to determine the physiological difference between the capillary blood of the arm and finger with the greatest possible accuracy, using the HemoCue B-glucose analyzer on subjects undergoing a meal tolerance test (MTT) or oral glucose tolerance test (OGTT). The MTT study was performed on 50 subjects who drank a liquid meal (Ensure, 40 g of carbohydrates) and who were tested on the arm and finger every 30 min for up to 4 h. The OGTT study was performed on 12 subjects who drank a 100-g glucose solution (Glucola) and were tested on the arm and finger every 15 min during the first hour and thereafter every 30 min for up to 3 h. The average percent glucose difference between arm and finger reached a maximal value about 1 h after the glucose load, with arm glucose being about 5% lower than that of the finger; at other times, average differences were smaller. At the greatest rate of glucose change (>2 mg/dL per min), mean percent bias was about 6%. Despite these measurable differences, when arm results were plotted on the Clarke error grid against finger values, >97% of the data were within zone A (the rest in zone B). Thus, the physiological differences between arm and finger were clinically insignificant. Our studies with the HemoCue confirmed the existence of measurable physiological glucose differences between arm and finger following a glucose challenge, but these differences were clinically insignificant even in those subjects in whom they were measurable.
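The zone-A criterion of the Clarke error grid invoked above can be sketched as follows. Only zone A is implemented (test reading within 20% of reference, or both readings below 70 mg/dL); the remaining zones B-E require the full piecewise grid and are omitted. The paired values are illustrative, not the study's data.

```python
import numpy as np

def clarke_zone_a(ref, test):
    """Fraction of paired readings in zone A of the Clarke error grid:
    test within 20% of the reference, or both readings below 70 mg/dL."""
    ref, test = np.asarray(ref, float), np.asarray(test, float)
    in_a = (np.abs(test - ref) <= 0.2 * ref) | ((ref < 70) & (test < 70))
    return float(np.mean(in_a))

finger = np.array([80, 120, 160, 60, 200], float)  # hypothetical finger readings
arm = finger * 0.95                                # arm roughly 5% lower, as in the study
print(clarke_zone_a(finger, arm))  # 1.0 -- a uniform 5% bias stays inside zone A
```

This illustrates why a consistent 5% arm-finger bias is clinically insignificant on the grid: it is well inside the 20% zone-A envelope.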
Yu, Songlin; Li, Dachao; Chong, Hao; Sun, Changyue; Yu, Haixia; Xu, Kexin
2013-01-01
Because mid-infrared (mid-IR) spectroscopy is not a promising method for noninvasively measuring glucose in vivo, a method for minimally invasive, high-precision glucose determination in vivo by mid-IR laser spectroscopy, combining a tunable laser source with a small fiber-optic attenuated total reflection (ATR) sensor, is introduced, and its potential is evaluated in vitro. This research presents a mid-infrared tunable laser with a broad emission spectrum band of 9.19 to 9.77 μm (1024-1088 cm−1) and proposes a method to control and stabilize the laser emission wavelength and power. Moreover, several fiber-optic ATR sensors were fabricated and investigated for glucose determination in combination with the tunable laser source, and the effective sensing optical length of these sensors was determined for the first time. The sensitivity of this system was four times that of a Fourier transform infrared (FT-IR) spectrometer, and the noise-equivalent concentration (NEC) of the laser measurement system was as low as 3.8 mg/dL, which is among the most precise glucose measurements using mid-infrared spectroscopy. Furthermore, partial least-squares regression and a Clarke error grid were used to quantify the predictability and evaluate the prediction accuracy of glucose concentrations in the range of 5 to 500 mg/dL (physiologically relevant range: 30-400 mg/dL). The experimental results were clinically acceptable. The high sensitivity, tunable laser source, low NEC, and small fiber-optic ATR sensor demonstrate an encouraging step towards precisely monitoring glucose levels in vivo. PMID:24466493
Choleau, C; Klein, J C; Reach, G; Aussedat, B; Demaria-Pesce, V; Wilson, G S; Gifford, R; Ward, W K
2002-08-01
Calibration, i.e., the transformation in real time of the signal I(t) generated by the glucose sensor at time t into an estimate of the glucose concentration G(t), represents a key issue for the development of a continuous glucose monitoring system. The aim was to compare two calibration procedures. In the one-point calibration, which assumes that the background current I(o) is negligible, the sensitivity S is simply determined as the ratio I/G, and G(t) = I(t)/S. The two-point calibration consists of determining a sensor sensitivity S and a background current I(o) by plotting two values of the sensor signal against the concomitant blood glucose concentrations; the subsequent estimate of G(t) is given by G(t) = (I(t)-I(o))/S. A glucose sensor was implanted in the abdominal subcutaneous tissue of nine type 1 diabetic patients for 3 (n = 2) or 7 days (n = 7). The one-point calibration was performed a posteriori either once per day before breakfast, twice per day before breakfast and dinner, or three times per day before each meal. The two-point calibration was performed each morning during breakfast. The percentages of points in zones A and B of the Clarke error grid were significantly higher when the system was calibrated using the one-point calibration. Using two one-point calibrations per day before meals was virtually as accurate as three one-point calibrations. This study demonstrates the feasibility of a simple method for calibrating a continuous glucose monitoring system.
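The two calibration formulas quoted in the abstract translate directly into code. The sensor constants below (sensitivity in nA per mg/dL, background current in nA) are made-up illustration values, not measurements from the study.

```python
def one_point_calibration(i_cal, g_cal):
    """One-point calibration: assumes the background current I(o) is
    negligible, so S = I/G and G(t) = I(t)/S."""
    s = i_cal / g_cal
    return lambda i_t: i_t / s

def two_point_calibration(i1, g1, i2, g2):
    """Two-point calibration: solves for sensitivity S and background I(o)
    from two (current, glucose) pairs, then G(t) = (I(t) - I(o))/S."""
    s = (i2 - i1) / (g2 - g1)
    i0 = i1 - s * g1
    return lambda i_t: (i_t - i0) / s

# hypothetical sensor: true sensitivity 0.05 nA per mg/dL, 0.5 nA background
true_current = lambda g: 0.05 * g + 0.5

est1 = one_point_calibration(true_current(100), 100)                  # one pre-meal point
est2 = two_point_calibration(true_current(100), 100, true_current(180), 180)
print(est1(true_current(150)), est2(true_current(150)))
# one-point is biased by the neglected background current; two-point recovers 150
```

With a nonzero background current the one-point estimate is systematically biased, which is exactly the trade-off the study evaluates against the practical simplicity of the one-point procedure.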
Prabhudesai, Sumant; Kanjani, Amruta; Bhagat, Isha; Ravikumar, Karnam G; Ramachandran, Bala
2015-11-01
The aim of this prospective, observational study was to determine the accuracy of a real-time continuous glucose monitoring system (CGMS) in children with septic shock. Children aged 30 days to 18 years admitted to the Pediatric Intensive Care Unit with septic shock were included. A real-time CGMS sensor was used to obtain interstitial glucose readings, which were compared statistically with simultaneous laboratory blood glucose (BG) measurements. Nineteen children were included, and 235 pairs of BG-CGMS readings were obtained. BG and CGMS had a correlation coefficient of 0.61 (P < 0.001) and a median relative absolute difference of 17.29%. On Clarke's error grid analysis, 222 (94.5%) readings were in the clinically acceptable zones (A and B). When BG was <70, 70-180, and >180 mg/dL, 44%, 100%, and 76.9% of readings were in zones A and B, respectively (P < 0.001). The accuracy of the CGMS was not affected by the presence of edema, acidosis, vasopressors, steroids, or renal replacement therapy. On receiver operating characteristic curve analysis, a CGMS reading <97 mg/dL predicted hypoglycemia (sensitivity 85.2%, specificity 75%, area under the curve [AUC] = 0.85), and a reading >141 mg/dL predicted hyperglycemia (sensitivity 84.6%, specificity 89.6%, AUC = 0.87). CGMS provides a fairly accurate estimate of BG in children with septic shock and is unaffected by a variety of clinical variables. Accuracy at the extremes of blood sugar may be a concern. We recommend larger studies to evaluate its use for the early detection of hypoglycemia and hyperglycemia.
75 FR 26709 - Clarke County Water Supply Project, Clarke County, IA
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-12
... Project, Clarke County, IA AGENCY: Natural Resources Conservation Service. ACTION: Notice of intent to... Conservationist for Planning, 210 Walnut Street, Room 693, Des Moines, IA 50309-2180, telephone: 515-284- 4769... available at the Iowa NRCS Web site at http://www.ia.nrcs.usda.gov . A map of the Clarke County Water Supply...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-01-05
... Public Purposes in Clark County, NV AGENCY: Bureau of Land Management, Interior. ACTION: Notice of Realty... Church Community in the City of Las Vegas, Clark County, Nevada. FOR FURTHER INFORMATION CONTACT: Shawna..., more or less in Clark County, Nevada. Authority: 43 CFR 2741.5. Vanessa L. Hice, Assistant Field...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-23
... Purposes in Clark County, NV AGENCY: Bureau of Land Management, Interior. ACTION: Notice of correction..., Clark County, Nevada. FOR FURTHER INFORMATION CONTACT: Philip Rhinehart, (702) 515-5182, or [email protected] conveyance to the Clark County Department of Aviation for the Henderson Executive Airport are correctly and...
An adaptive grid to improve the efficiency and accuracy of modelling underwater noise from shipping
NASA Astrophysics Data System (ADS)
Trigg, Leah; Chen, Feng; Shapiro, Georgy; Ingram, Simon; Embling, Clare
2017-04-01
Underwater noise from shipping is becoming a significant concern and has been listed as a pollutant under Descriptor 11 of the Marine Strategy Framework Directive. Underwater noise models are an essential tool to assess and predict noise levels for regulatory procedures such as environmental impact assessments and ship noise monitoring. There are generally two approaches to noise modelling. The first is based on simplified energy flux models, assuming either spherical or cylindrical propagation of sound energy. These models are very quick but they ignore important water column and seabed properties, and produce significant errors in the areas subject to temperature stratification (Shapiro et al., 2014). The second type of model (e.g. ray-tracing and parabolic equation) is based on an advanced physical representation of sound propagation. However, these acoustic propagation models are computationally expensive to execute. Shipping noise modelling requires spatial discretization in order to group noise sources together using a grid. A uniform grid size is often selected to achieve either the greatest efficiency (i.e. speed of computations) or the greatest accuracy. In contrast, this work aims to produce efficient and accurate noise level predictions by presenting an adaptive grid where cell size varies with distance from the receiver. The spatial range over which a certain cell size is suitable was determined by calculating the distance from the receiver at which propagation loss becomes uniform across a grid cell. The computational efficiency and accuracy of the resulting adaptive grid was tested by comparing it to uniform 1 km and 5 km grids. These represent an accurate and computationally efficient grid respectively. 
For a case study of the Celtic Sea, an application of the adaptive grid over an area of 160×160 km reduced the number of model executions required from 25600 for a 1 km grid to 5356 in December and to between 5056 and 13132 in August, which represents a 2 to 5-fold increase in efficiency. The 5 km grid reduces the number of model executions further to 1024. However, over the first 25 km the 5 km grid produces errors of up to 13.8 dB when compared to the highly accurate but inefficient 1 km grid. The newly developed adaptive grid generates much smaller errors of less than 0.5 dB while demonstrating high computational efficiency. Our results show that the adaptive grid provides the ability to retain the accuracy of noise level predictions and improve the efficiency of the modelling process. This can help safeguard sensitive marine ecosystems from noise pollution by improving the underwater noise predictions that inform management activities. References Shapiro, G., Chen, F., Thain, R., 2014. The Effect of Ocean Fronts on Acoustic Wave Propagation in a Shallow Sea, Journal of Marine System, 139: 217 - 226. http://dx.doi.org/10.1016/j.jmarsys.2014.06.007.
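The execution counts quoted above follow directly from the grid geometry. A sketch reproducing the uniform-grid figures; the adaptive ring layout is an invented illustration of cell size growing with range from the receiver, not the paper's actual scheme:

```python
def uniform_grid_runs(domain_km, cell_km):
    """One propagation-model execution per cell on a uniform square grid."""
    return (domain_km // cell_km) ** 2

def adaptive_grid_runs(rings):
    """Toy adaptive layout centred on the receiver: rings is a list of
    (max_range_km, cell_km) pairs; each square annulus uses one cell size."""
    total, prev = 0, 0
    for max_range, cell in rings:
        area = (2 * max_range) ** 2 - (2 * prev) ** 2  # square-annulus area, km^2
        total += area // cell ** 2
        prev = max_range
    return total

runs_1km = uniform_grid_runs(160, 1)  # fine, accurate grid: 25600 runs
runs_5km = uniform_grid_runs(160, 5)  # coarse, fast grid: 1024 runs
# Hypothetical rings: 1 km cells out to 20 km, 5 km cells out to 80 km
runs_adaptive = adaptive_grid_runs([(20, 1), (80, 5)])
```

The adaptive count lands between the two uniform extremes, which is the efficiency-accuracy trade-off the abstract describes.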
Comparing Anisotropic Output-Based Grid Adaptation Methods by Decomposition
NASA Technical Reports Server (NTRS)
Park, Michael A.; Loseille, Adrien; Krakos, Joshua A.; Michal, Todd
2015-01-01
Anisotropic grid adaptation is examined by decomposing the steps of flow solution, adjoint solution, error estimation, metric construction, and simplex grid adaptation. Multiple implementations of each of these steps are evaluated by comparison to each other and expected analytic results when available. For example, grids are adapted to analytic metric fields and grid measures are computed to illustrate the properties of multiple independent implementations of grid adaptation mechanics. Different implementations of each step in the adaptation process can be evaluated in a system where the other components of the adaptive cycle are fixed. Detailed examination of these properties allows comparison of different methods to identify the current state of the art and where further development should be targeted.
Adaptive grid generation in a patient-specific cerebral aneurysm
NASA Astrophysics Data System (ADS)
Hodis, Simona; Kallmes, David F.; Dragomir-Daescu, Dan
2013-11-01
Adapting grid density to flow behavior provides the advantage of increasing solution accuracy while decreasing the number of grid elements in the simulation domain, therefore reducing the computational time. One method for grid adaptation requires successive refinement of grid density based on observed solution behavior until the numerical errors between successive grids are negligible. However, such an approach is time consuming and it is often neglected by the researchers. We present a technique to calculate the grid size distribution of an adaptive grid for computational fluid dynamics (CFD) simulations in a complex cerebral aneurysm geometry based on the kinematic curvature and torsion calculated from the velocity field. The relationship between the kinematic characteristics of the flow and the element size of the adaptive grid leads to a mathematical equation to calculate the grid size in different regions of the flow. The adaptive grid density is obtained such that it captures the more complex details of the flow with locally smaller grid size, while less complex flow characteristics are calculated on locally larger grid size. The current study shows that kinematic curvature and torsion calculated from the velocity field in a cerebral aneurysm can be used to find the locations of complex flow where the computational grid needs to be refined in order to obtain an accurate solution. We found that the complexity of the flow can be adequately described by velocity and vorticity and the angle between the two vectors. For example, inside the aneurysm bleb, at the bifurcation, and at the major arterial turns the element size in the lumen needs to be less than 10% of the artery radius, while at the boundary layer, the element size should be smaller than 1% of the artery radius, for accurate results within a 0.5% relative approximation error. 
This technique of quantifying flow complexity and adaptive remeshing has the potential to improve results accuracy and reduce computational time for patient-specific hemodynamics simulations, which are used to help assess the likelihood of aneurysm rupture using CFD calculated flow patterns.
Prediction of discretization error using the error transport equation
NASA Astrophysics Data System (ADS)
Celik, Ismail B.; Parsons, Don Roscoe
2017-06-01
This study focuses on an approach to quantify the discretization error associated with numerical solutions of partial differential equations by solving an error transport equation (ETE). The goal is to develop a method that can be used to adequately predict the discretization error using the numerical solution on only one grid/mesh. The primary problem associated with solving the ETE is the formulation of the error source term which is required for accurately predicting the transport of the error. In this study, a novel approach is considered which involves fitting the numerical solution with a series of locally smooth curves and then blending them together with a weighted spline approach. The result is a continuously differentiable analytic expression that can be used to determine the error source term. Once the source term has been developed, the ETE can easily be solved using the same solver that is used to obtain the original numerical solution. The new methodology is applied to the two-dimensional Navier-Stokes equations in the laminar flow regime. A simple unsteady flow case is also considered. The discretization error predictions based on the methodology presented in this study are in good agreement with the 'true error'. While in most cases the error predictions are not quite as accurate as those from Richardson extrapolation, the results are reasonable and only require one numerical grid. The current results indicate that there is much promise going forward with the newly developed error source term evaluation technique and the ETE.
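Richardson extrapolation, the baseline the ETE predictions are compared against, estimates the fine-grid discretization error from solutions on two grids. A self-contained sketch using the trapezoid rule on a quadratic integrand, whose error is exactly proportional to h², so the estimate matches the true error:

```python
def richardson_error(f_fine, f_coarse, r=2.0, p=2):
    """Estimated discretization error (exact - fine) on the fine grid,
    for refinement ratio r and formal order of accuracy p."""
    return (f_fine - f_coarse) / (r ** p - 1.0)

def trapz_x2(n):
    """Trapezoid-rule approximation of the integral of x**2 over [0, 1]."""
    h = 1.0 / n
    xs = [i * h for i in range(n + 1)]
    return h * (sum(x * x for x in xs) - 0.5 * (xs[0] ** 2 + xs[-1] ** 2))

f_h, f_2h = trapz_x2(32), trapz_x2(16)
est_err = richardson_error(f_h, f_2h)  # estimated error in the fine solution
true_err = 1.0 / 3.0 - f_h             # exact integral is 1/3
```

Unlike the ETE approach described above, this baseline needs two grids; the appeal of the ETE is getting a comparable estimate from one.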
Federal Register 2010, 2011, 2012, 2013, 2014
2011-09-14
... Public Lands in Clark County, NV AGENCY: Bureau of Land Management, Interior. ACTION: Notice of Realty... sale and mineral conveyance regulations. The proposed sale also includes one 5-acre parcel in Clark... described contains 1.25 acres, more or less, in Clark County. The map delineating the proposed sale parcel...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-26
... Classification, Clark County, NV AGENCY: Bureau of Land Management, Interior. ACTION: Notice of realty action... conveyance of approximately 2.5 acres of public land in Las Vegas, Clark County, Nevada. The City proposes to... Clark County. In accordance with the R&PP Act, the City of Las Vegas filed an R&PP application to...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-12-28
... Clark, Lincoln, and White Pine Counties Groundwater Development Project Right- of-Way, NV AGENCY: Bureau... (BLM) announces the availability of the Record of Decision (ROD) for the Clark, Lincoln, and White Pine... in Lincoln, and Clark counties, Nevada for this project. The ROW grant will authorize the use of...
Federal Register 2010, 2011, 2012, 2013, 2014
2012-11-08
... Land in Clark County, NV AGENCY: Bureau of Land Management, Interior. ACTION: Notice. SUMMARY: The.../4\\SW\\1/4\\. The area described contains 12.5 acres, more or less, in Clark County. The map...-63015 for road purposes granted to Clark County, its successors or assigns, pursuant to the Act of...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-12-27
... Classification for Lease and/or Subsequent Conveyance of Public Lands in Clark County, Nevada AGENCY: Bureau of... land in the City of Las Vegas, Clark County, Nevada. The Clark County School District proposes to use...\\1/4\\;NW\\1/4\\. The area described contains 40 acres, more or less, in Clark County. In accordance...
33 CFR 117.899 - Youngs Bay and Lewis and Clark River.
Code of Federal Regulations, 2010 CFR
2010-07-01
... 33 Navigation and Navigable Waters 1 2010-07-01 2010-07-01 false Youngs Bay and Lewis and Clark... Lewis and Clark River. (a) The draw of the US101 (New Youngs Bay) highway bridge, mile 0.7, across... notice is given to the drawtender at the Lewis and Clark River Bridge by marine radio, telephone, or...
33 CFR 117.899 - Youngs Bay and Lewis and Clark River.
Code of Federal Regulations, 2011 CFR
2011-07-01
... 33 Navigation and Navigable Waters 1 2011-07-01 2011-07-01 false Youngs Bay and Lewis and Clark... Lewis and Clark River. (a) The draw of the US101 (New Youngs Bay) highway bridge, mile 0.7, across... notice is given to the drawtender at the Lewis and Clark River Bridge by marine radio, telephone, or...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-03-26
... Classification, Clark County, NV AGENCY: Bureau of Land Management, Interior. ACTION: Notice of realty action... or conveyance of approximately 7.5 acres of public land in Las Vegas, Clark County, Nevada. The City..., more or less, in Clark County. In accordance with the R&PP Act, the City of Las Vegas filed an R&PP...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-04
... Lands in Clark County, NV AGENCY: Bureau of Land Management, Interior. ACTION: Notice of realty action... contains 5 acres, more or less, in Clark County. The map delineating the proposed sale parcel is available... saleable mineral deposits on the lands in Clark County, if any, are reserved to the United States, in...
Equations for estimating Clark Unit-hydrograph parameters for small rural watersheds in Illinois
Straub, Timothy D.; Melching, Charles S.; Kocher, Kyle E.
2000-01-01
Simulation of the measured discharge hydrographs for the verification storms utilizing TC and R obtained from the estimation equations yielded good results. The error in peak discharge for 21 of the 29 verification storms was less than 25 percent, and the error in time-to-peak discharge for 18 of the 29 verification storms also was less than 25 percent. Therefore, applying the estimation equations to determine TC and R for design-storm simulation may result in reliable design hydrographs, as long as the physical characteristics of the watersheds under consideration are within the range of those characteristics for the watersheds in this study [area: 0.02-2.3 mi2, main-channel length: 0.17-3.4 miles, main-channel slope: 10.5-229 feet per mile, and insignificant percentage of impervious cover].
Oka, Hiroshi; Tanaka, Masaru; Kobayashi, Seiichiro; Argenziano, Giuseppe; Soyer, H Peter; Nishikawa, Takeji
2004-04-01
As a first step to develop a screening system for pigmented skin lesions, we performed digital discriminant analyses between early melanomas and Clark naevi. A total of 59 cases of melanoma, including 23 melanoma in situ and 36 thin invasive melanomas (Breslow thickness ≤ 0.75 mm), and 188 clinically equivocal, histopathologically diagnosed Clark naevi were used in our study. After calculating 62 mathematical variables related to the colour, texture, asymmetry and circularity based on the dermoscopic findings of the pigmented skin lesions, we performed multivariate stepwise discriminant analysis using these variables to differentiate melanomas from naevi. The sensitivities and specificities of our model were 94.4 and 98.4%, respectively, for discriminating between melanomas (Breslow thickness ≤ 0.75 mm) and Clark naevi, and 73.9 and 85.6%, respectively, for discriminating between melanoma in situ and Clark naevi. Our algorithm accurately discriminated invasive melanomas from Clark naevi, but not melanomas in situ from Clark naevi.
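Sensitivity and specificity summarize a two-class discriminant like the one above. A sketch with hypothetical confusion-matrix counts chosen only to reproduce the reported percentages; the abstract does not give the raw matrix:

```python
def sensitivity(tp, fn):
    """Fraction of true melanomas classified as melanoma."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """Fraction of Clark naevi classified as naevus."""
    return tn / (tn + fp)

# Hypothetical counts: 34 of 36 thin invasive melanomas detected,
# 185 of 188 Clark naevi correctly rejected
sens = sensitivity(tp=34, fn=2)
spec = specificity(tn=185, fp=3)
```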
Research on LLCL Filtering Grid - Connected inverter under the Control of PFI
NASA Astrophysics Data System (ADS)
Li, Ren-qing; Zong, Ke-yong; Wang, Yan-ping; Li, Yang; Zhang, Jing
2018-03-01
This paper proposes an LLCL grid-connected inverter based on proportional feedback integral (PFI) control to satisfy the grid-current requirements of renewable-energy generation systems. We build the topology of the grid-connected inverter and analyze the principle of linear superposition to reveal the source of the steady-state error inherent in proportional-integral control. An LLCL filter with passive damping is used to suppress the resonant peak. The grid-connected system was simulated in MATLAB/Simulink. The results show that the grid current quickly reaches steady state with the same phase and frequency as the grid voltage, and that its harmonic content meets the grid standard.
NASA Astrophysics Data System (ADS)
Mulyukova, Elvira; Dabrowski, Marcin; Steinberger, Bernhard
2015-04-01
Many problems in geodynamic applications may be described as viscous flow of chemically heterogeneous materials. Examples include subduction of compositionally stratified lithospheric plates, folding of rheologically layered rocks, and thermochemical convection of the Earth's mantle. The associated time scales are significantly shorter than that of chemical diffusion, which justifies the commonly featured phenomena in geodynamic flow models termed contact discontinuities. These are spatially sharp interfaces separating regions of different material properties. Numerical modelling of advection of fields with sharp interfaces is challenging. Typical errors include numerical diffusion, which arises due to the repeated action of numerical interpolation. Mathematically, a material field can be represented by discrete indicator functions, whose values are interpreted as logical statements (e.g. whether or not the location is occupied by a given material). Interpolation of a discrete function boils down to determining where in the intermediate node-positions one material ends, and the other begins. The numerical diffusion error thus manifests itself as an erroneous location of the material-interface. Lagrangian advection-schemes are known to be less prone to numerical diffusion errors, compared to their Eulerian counterparts. The tracer-ratio method, where Lagrangian markers are used to discretize the bulk of materials filling the entire domain, is a popular example of such methods. The Stokes equation in this case is solved on a separate, static grid, and in order to do it - material properties must be interpolated from the markers to the grid. This involves the difficulty related to interpolation of discrete fields. The material distribution, and thus material-properties like viscosity and density, seen by the grid is polluted by the interpolation error, which enters the solution of the momentum equation. 
Errors due to the uncertainty of interface-location can be avoided when using interface tracking methods for advection. Marker-chain method is one such approach, where rather than discretizing the volume of each material, only their interface is discretized by a connected set of markers. Together with the boundary of the domain, the marker-chain constitutes closed polygon-boundaries which enclose the regions spanned by each material. Communicating material properties to the static grid can be done by determining which polygon each grid-node (or integration point) falls into, eliminating the need for interpolation. In our chosen implementation, an efficient parallelized algorithm for the point-in-polygon location is used, so this part of the code takes up only a small fraction of the CPU-time spent on each time step, and allows for spatial resolution of the compositional field beyond that which is practical with markers-in-bulk methods. An additional advantage of using marker-chains for material advection is that it offers a possibility to use some of its markers, or even edges, to generate a FEM grid. One can tailor a grid for obtaining a Stokes solution with optimal accuracy, while controlling the quality and size of its elements. Where geometry of the interface allows - element-edges may be aligned with it, which is known to significantly improve the quality of Stokes solution, compared to when the interface cuts through the elements (Moresi et al., 1996; Deubelbeiss and Kaus, 2008). In more geometrically complex interface-regions, the grid may simply be refined to reduce the error. As materials get deformed in the course of a simulation, the interface may get stretched and entangled. Addition of new markers along the chain may be required in order to properly resolve the increasingly complicated geometry. Conversely, some markers may be removed from regions where they get clustered. 
Such resampling of the interface requires additional computational effort (although small compared to other parts of the code), and introduces an error in the interface-location (similar to numerical diffusion). Our implementation of this procedure, which utilizes an auxiliary high-resolution structured grid, allows a high degree of control on the magnitude of this error, although cannot eliminate it completely. We will present our chosen numerical implementation of the markers-in-bulk and markers-in-chain methods outlined above, together with the simulation results of the especially designed benchmarks that demonstrate the relative successes and limitations of these methods.
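The point-in-polygon query that assigns each grid node a material from the marker-chain polygons can be done with the standard even-odd ray-casting test. A minimal serial sketch; the production code described above is parallelized and handles far more complex interface geometries:

```python
def point_in_polygon(pt, poly):
    """Even-odd ray-casting test: True if pt lies inside the closed polygon
    given as a list of (x, y) vertices. A horizontal ray from pt crosses
    the boundary an odd number of times iff the point is inside."""
    x, y = pt
    inside = False
    n = len(poly)
    for i in range(n):
        x1, y1 = poly[i]
        x2, y2 = poly[(i + 1) % n]
        if (y1 > y) != (y2 > y):  # edge spans the ray's height
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:       # crossing lies to the right of pt
                inside = not inside
    return inside

unit_square = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (0.0, 1.0)]
```

As the text notes, this lookup replaces interpolation entirely: each node simply inherits the properties of whichever material polygon contains it.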
Advanced Computational Aeroacoustics Methods for Fan Noise Prediction
NASA Technical Reports Server (NTRS)
Envia, Edmane (Technical Monitor); Tam, Christopher
2003-01-01
Direct computation of fan noise is presently not possible. One of the major difficulties is the geometrical complexity of the problem. In the case of fan noise, the blade geometry is critical to the loading on the blade and hence the intensity of the radiated noise. The precise geometry must be incorporated into the computation. In computational fluid dynamics (CFD), there are two general ways to handle problems with complex geometry. One way is to use unstructured grids. The other is to use body-fitted overset grids. In the overset grid method, accurate data transfer is of utmost importance. For acoustic computation, it is not clear that the currently used data transfer methods are sufficiently accurate as not to contaminate the very small amplitude acoustic disturbances. In CFD, low order schemes are, invariably, used in conjunction with unstructured grids. However, low order schemes are known to be numerically dispersive and dissipative. Dissipative errors are extremely undesirable for acoustic wave problems. The objective of this project is to develop a high order unstructured grid Dispersion-Relation-Preserving (DRP) scheme that would minimize numerical dispersion and dissipation errors. This report contains the results of the funded portion of the project. A DRP scheme on an unstructured grid has been developed; it is constructed in the wave number space. The characteristics of the scheme can be improved by the inclusion of additional constraints. Stability of the scheme has been investigated and can be improved by adopting an upwinding strategy.
Performance of an off-grid solar home in northwestern Vermont
DOE Office of Scientific and Technical Information (OSTI.GOV)
Rawlings, L.K.
1997-12-31
In 1995 an off-grid integrated solar home was built in Middlesex, VT for Peter Clark and Gloria DeSousa. This home was included as a pilot home in the US DOE PV:BONUS program to develop factory-built integrated solar homes. The home incorporates a 1.44 kW PV system, 0.6 kW of wind turbine capacity, and very high-efficiency electrical loads. The home also features passive solar design, high-efficiency heating systems, and a greenhouse-based septic treatment system. The performance of the PV system and the wind system, and the total power usage of the household, are measured and recorded by a data acquisition system. The home's electrical loads have operated very efficiently, using on average about one tenth the power used by the average American residence. The PV system has operated reliably and efficiently, providing about 97% of the power needs of the home. The wind turbines have operated efficiently, but the wind regime at the site has not been sufficient to generate more than 1% of the total power needs. The other 2% has been provided by a gasoline backup generator.
Automated food microbiology: potential for the hydrophobic grid-membrane filter.
Sharpe, A N; Diotte, M P; Dudas, I; Michaud, G L
1978-01-01
Bacterial counts obtained on hydrophobic grid-membrane filters were comparable to conventional plate counts for Pseudomonas aeruginosa, Escherichia coli, and Staphylococcus aureus in homogenates from a range of foods. The wide numerical operating range of the hydrophobic grid-membrane filters allowed sequential diluting to be reduced or even eliminated, making them attractive as components in automated systems of analysis. Food debris could be rinsed completely from the unincubated hydrophobic grid-membrane filter surface without affecting the subsequent count, thus eliminating the possibility of counting food particles, a common source of error in electronic counting systems. PMID:100054
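The wide numerical operating range mentioned above comes from a Poisson most-probable-number correction: with x of N equal grid squares positive, the estimated count is N·ln(N/(N − x)) rather than x, which stays informative even when many squares hold more than one organism. A sketch; the 1600-square default is an assumed typical grid size, not taken from the abstract:

```python
import math

def mpn_growth_units(positive_squares, total_squares=1600):
    """Poisson most-probable-number estimate for a grid-membrane filter:
    with x of N equal squares positive, the estimated number of growth
    units is N * ln(N / (N - x))."""
    x, n = positive_squares, total_squares
    if x >= n:
        raise ValueError("all squares positive: count is off-scale")
    return n * math.log(n / (n - x))
```

For small positive counts the estimate is close to the raw count (16 positives give about 16.1 growth units), and it rises steeply as the grid fills, which is what lets one filter cover several decades of concentration without serial dilution.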
Error analysis for the proposed close grid geodynamic satellite measurement system (CLOGEOS)
NASA Technical Reports Server (NTRS)
Mueller, I. I.; Vangelder, B. H. W.; Kumar, M.
1975-01-01
The close grid geodynamic measurement system experiment, which envisages an active ranging satellite and a grid of retro-reflectors or transponders in the San Andreas fault area, is a detailed simulation study for recovering the relative positions in the grid. The close grid geodynamic measurement system for determining the relative motion of two plates in the California region (if feasible) could be used in other areas of the world to delineate and complete the picture of crustal motions over the entire globe and serve as a geodetic survey system. In addition, with less stringent accuracy standards, the system would also find usage in allied geological and marine geodesy fields.
ADAPTIVE TETRAHEDRAL GRID REFINEMENT AND COARSENING IN MESSAGE-PASSING ENVIRONMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Hallberg, J.; Stagg, A.
2000-10-01
A grid refinement and coarsening scheme has been developed for tetrahedral and triangular grid-based calculations in message-passing environments. The element adaption scheme is based on an edge bisection of elements marked for refinement by an appropriate error indicator. Hash-table/linked-list data structures are used to store nodal and element information. The grid along inter-processor boundaries is refined and coarsened consistently with the update of these data structures via MPI calls. The parallel adaption scheme has been applied to the solution of a transient, three-dimensional, nonlinear, groundwater flow problem. Timings indicate efficiency of the grid refinement process relative to the flow solver calculations.
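The edge-bisection step at the heart of such a scheme splits a marked element across its longest edge. A two-dimensional, single-process sketch; the scheme above also maintains the hash-table/linked-list records and MPI consistency, which are omitted here:

```python
def bisect_longest_edge(tri):
    """Split a triangle into two children by bisecting its longest edge;
    the children exactly cover the parent."""
    def d2(p, q):
        return (p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2

    a, b, c = tri
    # relabel so that (a, b) is the longest edge and c the opposite vertex
    edges = [(d2(a, b), (a, b, c)), (d2(b, c), (b, c, a)), (d2(c, a), (c, a, b))]
    _, (a, b, c) = max(edges, key=lambda e: e[0])
    mid = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)
    return [(a, mid, c), (mid, b, c)]

# 3-4-5 right triangle: the hypotenuse is bisected at (2, 1.5)
children = bisect_longest_edge(((0.0, 0.0), (4.0, 0.0), (0.0, 3.0)))
```

Bisecting the longest edge keeps the children from degenerating as refinement recurses, which is why it is the usual marking rule in such adaption schemes.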
ERIC Educational Resources Information Center
Parker, J. R.; Becker, Katrin; Sawyer, Ben
2008-01-01
Everything old is new again. In a recent "Point of View" editorial commentary in "Educational Technology," Richard E. Clark revisits the now-famous media-effects debate with a focus on serious games. Clark argues that serious games have little to offer that improves upon traditional methods. This article responds to those claims. While Clark's…
Modal analysis of untransposed bilateral three-phase lines -- a perturbation approach
DOE Office of Scientific and Technical Information (OSTI.GOV)
Faria, J.A.B.; Mendes, J.H.B.
1997-01-01
Modal analysis of three-phase power lines exhibiting bilateral symmetry leads to modal transformation matrices that closely resemble Clarke's transformation. The authors develop a perturbation theory approach to justify, interpret, and gain understanding of this well-known fact. Further, the authors show how to find new frequency-dependent correction terms that, once added to Clarke's transformation, lead to improved accuracy.
Five-equation and robust three-equation methods for solution verification of large eddy simulation
NASA Astrophysics Data System (ADS)
Dutta, Rabijit; Xing, Tao
2018-02-01
This study evaluates the recently developed general framework of solution verification methods for large eddy simulation (LES) using implicitly filtered LES of periodic channel flows at a friction Reynolds number of 395 on eight systematically refined grids. The seven-equation method shows that the coupling error based on Hypothesis I is much smaller than the numerical and modeling errors and can therefore be neglected. The authors recommend the five-equation method based on Hypothesis II, which shows a monotonic convergence behavior of the predicted numerical benchmark (S_C) and provides realistic error estimates without the need to fix the orders of accuracy for either numerical or modeling errors. Based on the results from the seven-equation and five-equation methods, less expensive three- and four-equation methods for practical LES applications were derived. The new three-equation method is robust in that it can be applied to any convergence type and reasonably predicts the error trends. It was also observed that the numerical and modeling errors usually have opposite signs, which suggests that error cancellation plays an essential role in LES. When a Reynolds-averaged Navier-Stokes (RANS)-based error estimation method is applied, it shows significant error in the prediction of S_C on coarse meshes; however, it predicts reasonable S_C when the grids resolve at least 80% of the total turbulent kinetic energy.
Clark's Nutcracker Breeding Season Space Use and Foraging Behavior.
Schaming, Taza D
2016-01-01
Considering the entire life history of a species is fundamental to developing effective conservation strategies. Decreasing populations of five-needle white pines may be leading to the decline of Clark's nutcrackers (Nucifraga columbiana). These birds are important seed dispersers for at least ten conifer species in the western U.S., including whitebark pine (Pinus albicaulis), an obligate mutualist of Clark's nutcrackers. For effective conservation of both Clark's nutcrackers and whitebark pine, it is essential to ensure stability of Clark's nutcracker populations. My objectives were to examine Clark's nutcracker breeding season home range size, territoriality, habitat selection, and foraging behavior in the southern Greater Yellowstone Ecosystem, a region where whitebark pine is declining. I radio-tracked Clark's nutcrackers in 2011, a population-wide nonbreeding year following a low whitebark pine cone crop, and 2012, a breeding year following a high cone crop. Results suggest Douglas-fir (Pseudotsuga menziesii) communities are important habitat for Clark's nutcrackers because they selected it for home ranges. In contrast, they did not select whitebark pine habitat. However, Clark's nutcrackers did adjust their use of whitebark pine habitat between years, suggesting that, in some springs, whitebark pine habitat may be used more than previously expected. Newly extracted Douglas-fir seeds were an important food source both years. On the other hand, cached seeds made up a relatively lower proportion of the diet in 2011, suggesting cached seeds are not a reliable spring food source. Land managers focus on restoring whitebark pine habitat with the assumption that Clark's nutcrackers will be available to continue seed dispersal. In the Greater Yellowstone Ecosystem, Clark's nutcracker populations may be more likely to be retained year-round when whitebark pine restoration efforts are located adjacent to Douglas-fir habitat. 
By extrapolation, whitebark pine restoration efforts in other regions may consider prioritizing restoration of whitebark pine stands near alternative seed sources.
Convergence Analysis of Triangular MAC Schemes for Two Dimensional Stokes Equations
Wang, Ming; Zhong, Lin
2015-01-01
In this paper, we consider the use of H(div) elements in the velocity–pressure formulation to discretize Stokes equations in two dimensions. We address the error estimate of the element pair RT0–P0, which is known to be suboptimal, and render the error estimate optimal by the symmetry of the grids and by the superconvergence result of the Lagrange interpolant. By enlarging RT0 such that it becomes a modified BDM-type element, we develop a new discretization BDM1b–P0. We, therefore, generalize the classical MAC scheme on rectangular grids to triangular grids and retain all the desirable properties of the MAC scheme: exact divergence-free, solver-friendly, and local conservation of physical quantities. Further, we prove that the proposed discretization BDM1b–P0 achieves the optimal convergence rate for both velocity and pressure on general quasi-uniform grids, and one and a half order convergence rate for the vorticity and a recovered pressure. We demonstrate the validity of the theories developed here by numerical experiments. PMID:26041948
Power transformations improve interpolation of grids for molecular mechanics interaction energies.
Minh, David D L
2018-02-18
A common strategy for speeding up molecular docking calculations is to precompute nonbonded interaction energies between a receptor molecule and a set of three-dimensional grids. The grids are then interpolated to compute energies for ligand atoms in many different binding poses. Here, I evaluate a smoothing strategy of taking a power transformation of grid point energies and inverse transformation of the result from trilinear interpolation. For molecular docking poses from 85 protein-ligand complexes, this smoothing procedure leads to significant accuracy improvements, including an approximately twofold reduction in the root mean square error at a grid spacing of 0.4 Å and retaining the ability to rank docking poses even at a grid spacing of 0.7 Å. © 2018 Wiley Periodicals, Inc.
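The power-transformation trick is easy to state in code: raise grid-point energies to a power p < 1, interpolate trilinearly, then invert. A sketch with invented grid values and p = 1/4, assuming nonnegative energies; real docking grids with attractive (negative) energies would need an offset or a signed transform:

```python
def trilerp(corner, frac):
    """Trilinear interpolation of 8 corner values (indexed [i][j][k]) at
    fractional position frac = (fx, fy, fz) within the cell."""
    fx, fy, fz = frac
    c00 = corner[0][0][0] * (1 - fx) + corner[1][0][0] * fx
    c10 = corner[0][1][0] * (1 - fx) + corner[1][1][0] * fx
    c01 = corner[0][0][1] * (1 - fx) + corner[1][0][1] * fx
    c11 = corner[0][1][1] * (1 - fx) + corner[1][1][1] * fx
    c0 = c00 * (1 - fy) + c10 * fy
    c1 = c01 * (1 - fy) + c11 * fy
    return c0 * (1 - fz) + c1 * fz

def smoothed_energy(corner, frac, p=0.25):
    """Interpolate E**p at the corners, then raise the result to 1/p."""
    powered = [[[v ** p for v in row] for row in plane] for plane in corner]
    return trilerp(powered, frac) ** (1.0 / p)

# One steeply repulsive corner (e.g. near an atomic clash), seven mild ones
corner = [[[1.0, 1.0], [1.0, 1.0]], [[1.0, 1.0], [1.0, 10000.0]]]
direct = trilerp(corner, (0.5, 0.5, 0.5))          # dominated by the spike
smooth = smoothed_energy(corner, (0.5, 0.5, 0.5))  # spike's influence damped
```

Plain interpolation at the cell centre yields about 1251, while the power-transformed path yields about 20, illustrating how the transform keeps steep repulsive walls from polluting neighboring interpolated values.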
Polarizing Grids, their Assemblies and Beams of Radiation
NASA Technical Reports Server (NTRS)
Houde, Martin; Akeson, Rachel L.; Carlstrom, John E.; Lamb, James W.; Schleuning, David A.; Woody, David P.
2001-01-01
This article gives an analysis of the behavior of polarizing grids and reflecting polarizers by solving Maxwell's equations, for arbitrary angles of incidence and grid rotation, for cases where the excitation is provided by an incident plane wave or a beam of radiation. The scattering and impedance matrix representations are derived and used to solve more complicated configurations of grid assemblies. The results are also compared with data obtained in the calibration of reflecting polarizers at the Owens Valley Radio Observatory (OVRO). From these analyses, we propose a method for choosing the optimum grid parameters (wire radius and spacing). We also provide a study of the effects of two types of errors (in wire separation and radius size) that can be introduced in the fabrication of a grid.
NB-PLC channel modelling with cyclostationary noise addition & OFDM implementation for smart grid
NASA Astrophysics Data System (ADS)
Thomas, Togis; Gupta, K. K.
2016-03-01
Power line communication (PLC) technology can be a viable solution for future ubiquitous networks because it provides a cheaper alternative to other wired technologies currently used for communication. In smart grids, PLC is used to support low-rate communication on the low-voltage (LV) distribution network. In this paper, we propose a channel model for narrowband (NB) PLC in the frequency range 5 kHz to 500 kHz using ABCD parameters with cyclostationary noise addition. The behaviour of the channel was studied by adding an 11 kV/230 V transformer and by varying the load and its location. Bit error rate (BER) versus signal-to-noise ratio (SNR) was plotted for the proposed model by employing OFDM. Our simulation results based on the proposed channel model show an acceptable performance in terms of bit error rate versus signal-to-noise ratio, which enables the communication required for smart grid applications.
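As a hedged illustration of the BER-versus-SNR curves mentioned above (not the paper's NB-PLC channel model, which involves ABCD parameters and cyclostationary noise), the closed-form QPSK bit error rate over a plain AWGN channel can be computed as:

```python
import math

def qpsk_ber(ebn0_db):
    """Theoretical QPSK bit error rate over an AWGN channel:
    BER = 0.5 * erfc(sqrt(Eb/N0)), with Eb/N0 given in dB."""
    ebn0 = 10.0 ** (ebn0_db / 10.0)
    return 0.5 * math.erfc(math.sqrt(ebn0))

# BER falls steeply as SNR increases, the behavior a BER-vs-SNR plot shows.
for snr_db in (0, 4, 8, 12):
    print(snr_db, qpsk_ber(snr_db))
```

A realistic PLC simulation would replace the AWGN assumption with the modeled channel response and noise process before counting bit errors.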
Commodity Flow Study - Clark County, Nevada, USA
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conway, S.Ph.D.; Navis, I.
2008-07-01
The United States Department of Energy has designated Clark County, Nevada as an 'Affected Unit of Local Government' due to the potential for impacts from activities associated with the Yucca Mountain High Level Nuclear Waste Repository project. Urban Transit, LLC has led a project team of transportation experts, including researchers from the University of Nevada Las Vegas Transportation Research Center, to conduct a hazardous materials commodity flow study along Clark County's rail and truck corridors. In addition, a critical infrastructure analysis has been carried out to assess the potential impacts on Clark County's critical infrastructure of transporting high-level nuclear waste and spent nuclear fuel through the county to a proposed repository 90 miles away in an adjacent county. These studies were designed to obtain information relating to the transportation, identification and routing of hazardous materials through Clark County. Coordinating with the United States Department of Energy, the U.S. Department of Agriculture, the U.S. Federal Highway Administration, the Nevada Department of Transportation, and various other stakeholders, these studies and future research will examine the risk factors along the entire transportation corridor within Clark County and provide a context for understanding the additional vulnerability associated with shipping spent fuel through Clark County. (authors)
Response to March 12, 2001 Nevada Environmental Coalition Comments on Clark County's Title V Program
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Policy and Guidance Database available at www2.epa.gov/title-v-operating-permits/title-v-operating-permit-policy-and-guidance-document-index. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Analysis of deformable image registration accuracy using computational modeling
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhong Hualiang; Kim, Jinkoo; Chetty, Indrin J.
2010-03-15
Computer aided modeling of anatomic deformation, allowing various techniques and protocols in radiation therapy to be systematically verified and studied, has become increasingly attractive. In this study the potential issues in deformable image registration (DIR) were analyzed based on two numerical phantoms: one, a synthesized, low intensity gradient prostate image, and the other a lung patient's CT image data set. Each phantom was modeled with region-specific material parameters with its deformation solved using a finite element method. The resultant displacements were used to construct a benchmark to quantify the displacement errors of the Demons and B-Spline-based registrations. The results show that the accuracy of these registration algorithms depends on the chosen parameters, the selection of which is closely associated with the intensity gradients of the underlying images. For the Demons algorithm, both single resolution (SR) and multiresolution (MR) registrations required approximately 300 iterations to reach an accuracy of 1.4 mm mean error in the lung patient's CT image (and 0.7 mm mean error averaged in the lung only). For the low gradient prostate phantom, these algorithms (both SR and MR) required at least 1600 iterations to reduce their mean errors to 2 mm. For the B-Spline algorithms, best performance (mean errors of 1.9 mm for SR and 1.6 mm for MR, respectively) on the low gradient prostate was achieved using five grid nodes in each direction. Adding more grid nodes resulted in larger errors. For the lung patient's CT data set, the B-Spline registrations required ten grid nodes in each direction for highest accuracy (1.4 mm for SR and 1.5 mm for MR). The numbers of iterations or grid nodes required for optimal registrations depended on the intensity gradients of the underlying images. In summary, the performance of the Demons and B-Spline registrations have been quantitatively evaluated using numerical phantoms.
The results show that parameter selection for optimal accuracy is closely related to the intensity gradients of the underlying images. Also, the result that the DIR algorithms produce much lower errors in heterogeneous lung regions relative to homogeneous (low intensity gradient) regions, suggests that feature-based evaluation of deformable image registration accuracy must be viewed cautiously.
NASA Astrophysics Data System (ADS)
Xiong, Qiufen; Hu, Jianglin
2013-05-01
The minimum/maximum (Min/Max) temperature in the Yangtze River valley is decomposed into a climatic mean and an anomaly component. A spatial interpolation is developed which combines a 3D thin-plate spline scheme for the climatological mean with a 2D Barnes scheme for the anomaly component to create a daily Min/Max temperature dataset. The climatic mean field is obtained by the 3D thin-plate spline scheme because the decrease of Min/Max temperature with elevation is robust and reliable on long time-scales. The anomaly field tends to be only weakly related to elevation, and the anomaly component is adequately analyzed by the 2D Barnes procedure, which is computationally efficient and readily tunable. With this hybrid interpolation method, a daily Min/Max temperature dataset that covers the domain from 99°E to 123°E and from 24°N to 36°N with 0.1° longitudinal and latitudinal resolution is obtained by utilizing daily Min/Max temperature data from three kinds of station observations: national reference climatological stations, basic meteorological observing stations, and ordinary meteorological observing stations in 15 provinces and municipalities in the Yangtze River valley from 1971 to 2005. Errors in the gridded dataset are assessed by examining cross-validation statistics. The results show that the daily Min/Max temperature interpolation not only has a high correlation coefficient (0.99) and interpolation efficiency (0.98), but also a mean bias error of 0.00 °C. For the maximum temperature, the root mean square error is 1.1 °C and the mean absolute error is 0.85 °C. For the minimum temperature, the root mean square error is 0.89 °C and the mean absolute error is 0.67 °C.
Thus, the new dataset provides the distribution of Min/Max temperature over the Yangtze River valley with realistic, successive gridded data with 0.1° × 0.1° spatial resolution and daily temporal scale. The primary factors influencing the dataset precision are elevation and terrain complexity. In general, the gridded dataset has a relatively high precision in plains and flatlands and a relatively low precision in mountainous areas.
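The cross-validation statistics quoted above (mean bias error, MAE, RMSE, correlation coefficient) can be computed as follows; the observation/simulation pairs here are hypothetical, not values from the dataset:

```python
import numpy as np

def cv_stats(obs, sim):
    """Cross-validation statistics for assessing a gridded dataset."""
    obs, sim = np.asarray(obs, float), np.asarray(sim, float)
    err = sim - obs
    return {
        "bias": err.mean(),                  # mean bias error
        "mae": np.abs(err).mean(),           # mean absolute error
        "rmse": np.sqrt((err ** 2).mean()),  # root mean square error
        "corr": np.corrcoef(obs, sim)[0, 1], # correlation coefficient
    }

# Hypothetical daily Max temperatures (degrees C): observed vs. interpolated.
obs = [14.2, 18.9, 21.4, 25.0, 30.3]
sim = [14.8, 18.1, 21.9, 24.6, 31.0]
print(cv_stats(obs, sim))
```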
NASA Astrophysics Data System (ADS)
Zolina, Olga; Simmer, Clemens; Kapala, Alice; Mächel, Hermann; Gulev, Sergey; Groisman, Pavel
2014-05-01
We present new high-resolution daily precipitation grids developed at the Meteorological Institute, University of Bonn and the German Weather Service (DWD) under the STAMMEX project (Spatial and Temporal Scales and Mechanisms of Extreme Precipitation Events over Central Europe). The daily precipitation grids have been developed from the daily-observing precipitation network of DWD, which runs one of the world's densest rain gauge networks, comprising more than 7500 stations. Several quality-controlled daily gridded products with homogenized sampling were developed covering the periods 1931 onwards (with 0.5 degree resolution), 1951 onwards (0.25 degree and 0.5 degree), and 1971-2000 (0.1 degree). Different methods were tested to select the gridding methodology that best minimizes errors of integral grid estimates over hilly terrain. Besides daily precipitation values with uncertainty estimates (which include standard estimates of the kriging uncertainty as well as error estimates derived by a bootstrapping algorithm), the STAMMEX data sets include a variety of statistics that characterize the temporal and spatial dynamics of the precipitation distribution (quantiles, extremes, wet/dry spells, etc.). Comparisons with existing continental-scale daily precipitation grids (e.g., CRU, ECA E-OBS, GCOS), which include considerably fewer observations than those used in STAMMEX, demonstrate the added value of high-resolution grids for extreme rainfall analyses. These data exhibit spatial variability patterns and trends in precipitation extremes which are missed or incorrectly reproduced over Central Europe by coarser-resolution grids based on sparser networks. The STAMMEX dataset can be used for high-quality climate diagnostics of precipitation variability, as a reference for reanalyses and remotely sensed precipitation products (including the upcoming Global Precipitation Mission products), and as input for regional climate and operational weather forecast models.
We will present numerous applications of the STAMMEX grids, spanning from case studies of the major Central European floods to long-term changes in different precipitation statistics, including those accounting for the alternation of dry and wet periods and the precipitation intensities associated with prolonged rainy episodes.
Code of Federal Regulations, 2011 CFR
2011-07-01
... County Unclassifiable/Attainment Champaign County Unclassifiable/Attainment Clark County Unclassifiable... Attainment Dayton-Springfield Area: Clark County Attainment Greene County Attainment Miami County Attainment...-Springfield, OH: Clark County August 13, 2007 Attainment. Greene County. Miami County. Montgomery County. Lima...
Decodoku: Quantum error correction as a simple puzzle game
NASA Astrophysics Data System (ADS)
Wootton, James
To build quantum computers, we need to detect and manage any noise that occurs. This will be done using quantum error correction (QEC). At the hardware level, QEC is a multipartite system that stores information non-locally. Certain measurements are made which do not disturb the stored information, but which do allow signatures of errors to be detected. Then there is a software problem: how to take these measurement outcomes and determine (a) the errors that caused them, and (b) how to remove their effects. For qubit error correction, the algorithms required to do this are well known. For qudits, however, current methods are far from optimal. We consider the error correction problem of qubit surface codes. At the most basic level, this is a problem that can be expressed in terms of a grid of numbers. Using this fact, we take the inherent problem at the heart of quantum error correction, remove it from its quantum context, and present it in terms of simple grid-based puzzle games. We have developed three versions of these puzzle games, focussing on different aspects of the required algorithms. These have been released as iOS and Android apps, allowing the public to try their hand at developing good algorithms to solve the puzzles. For more information, see www.decodoku.com. Funding from the NCCR QSIT.
2008-07-01
dropout rate amongst Grid participants suggests participants found the Grid more frustrating to use, and subjective satisfaction scores show... greatly affect whether policies match their authors' intentions; a bad user interface can lead to policies with many errors, while a good user interface...
Accurate path integration in continuous attractor network models of grid cells.
Burak, Yoram; Fiete, Ila R
2009-02-01
Grid cells in the rat entorhinal cortex display strikingly regular firing responses to the animal's position in 2-D space and have been hypothesized to form the neural substrate for dead-reckoning. However, errors accumulate rapidly when velocity inputs are integrated in existing models of grid cell activity. To produce grid-cell-like responses, these models would require frequent resets triggered by external sensory cues. Such inadequacies, shared by various models, cast doubt on the dead-reckoning potential of the grid cell system. Here we focus on the question of accurate path integration, specifically in continuous attractor models of grid cell activity. We show, in contrast to previous models, that continuous attractor models can generate regular triangular grid responses, based on inputs that encode only the rat's velocity and heading direction. We consider the role of the network boundary in the integration performance of the network and show that both periodic and aperiodic networks are capable of accurate path integration, despite important differences in their attractor manifolds. We quantify the rate at which errors in the velocity integration accumulate as a function of network size and intrinsic noise within the network. With a plausible range of parameters and the inclusion of spike variability, our model networks can accurately integrate velocity inputs over a maximum of approximately 10-100 meters and approximately 1-10 minutes. These findings form a proof-of-concept that continuous attractor dynamics may underlie velocity integration in the dorsolateral medial entorhinal cortex. The simulations also generate pertinent upper bounds on the accuracy of integration that may be achieved by continuous attractor dynamics in the grid cell network. We suggest experiments to test the continuous attractor model and differentiate it from models in which single cells establish their responses independently of each other.
NASA Technical Reports Server (NTRS)
Merceret, Francis; Lane, John; Immer, Christopher; Case, Jonathan; Manobianco, John
2005-01-01
The contour error map (CEM) algorithm and the software that implements the algorithm are means of quantifying correlations between sets of time-varying data that are binarized and registered on spatial grids. The present version of the software is intended for use in evaluating numerical weather forecasts against observational sea-breeze data. In cases in which observational data come from off-grid stations, it is necessary to preprocess the observational data to transform them into gridded data. First, the wind direction is gridded and binarized so that D(i,j;n) is the input to CEM based on forecast data and d(i,j;n) is the input to CEM based on gridded observational data. Here, i and j are spatial indices representing 1.25-km intervals along the west-to-east and south-to-north directions, respectively; and n is a time index representing 5-minute intervals. A binary value of D or d = 0 corresponds to an offshore wind, whereas a value of D or d = 1 corresponds to an onshore wind. CEM includes two notable subalgorithms: One identifies and verifies sea-breeze boundaries; the other, which can be invoked optionally, performs an image-erosion function for the purpose of attempting to eliminate river-breeze contributions in the wind fields.
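A minimal sketch of the binarization step, assuming a hypothetical east-facing coastline; the article's actual grids, station preprocessing, and boundary-verification subalgorithms are more involved:

```python
def binarize_wind(direction_deg, onshore_center_deg=90.0):
    """Map a meteorological wind direction to the CEM binary field.

    Returns 1 (onshore) when the direction the wind blows FROM lies
    within +/-90 degrees of `onshore_center_deg` (the direction of the
    open sea from the coast), else 0 (offshore). The 90-degree default
    assumes a hypothetical east-facing coastline; it is not a value
    from the article.
    """
    # Signed angular difference wrapped into (-180, 180].
    diff = (direction_deg - onshore_center_deg + 180.0) % 360.0 - 180.0
    return 1 if abs(diff) <= 90.0 else 0

# An easterly wind (from the sea) is onshore; a westerly one is offshore.
print(binarize_wind(90.0), binarize_wind(270.0))
```

Applying this to every grid cell and 5-minute time step yields the D(i,j;n) and d(i,j;n) fields that CEM compares.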
Parallel Anisotropic Tetrahedral Adaptation
NASA Technical Reports Server (NTRS)
Park, Michael A.; Darmofal, David L.
2008-01-01
An adaptive method that robustly produces high aspect ratio tetrahedra to a general 3D metric specification without introducing hybrid semi-structured regions is presented. The elemental operators and higher-level logic are described with their respective domain-decomposed parallelizations. An anisotropic tetrahedral grid adaptation scheme is demonstrated for 1000:1 stretching for a simple cube geometry. This form of adaptation is applicable to more complex domain boundaries via a cut-cell approach, as demonstrated by a parallel 3D supersonic simulation of a complex fighter aircraft. To avoid the assumptions and approximations required to form a metric to specify adaptation, an approach is introduced that directly evaluates interpolation error. The grid is adapted to reduce and equidistribute this interpolation error without the use of an intervening anisotropic metric. Direct interpolation error adaptation is illustrated for 1D and 3D domains.
Analyzing diffuse scattering with supercomputers. Corrigendum
DOE Office of Scientific and Technical Information (OSTI.GOV)
Michels-Clark, Tara M.; Lynch, Vickie E.; Hoffmann, Christina M.
2016-03-01
The study by Michels-Clark et al. (2013) [Michels-Clark, T. M., Lynch, V. E., Hoffmann, C. M., Hauser, J., Weber, T., Harrison, R. & Bürgi, H. B. (2013). J. Appl. Cryst. 46, 1616-1625.] contains misleading errors which are corrected here. The numerical results reported in that paper and the conclusions given there are not affected and remain unchanged. The transition probabilities in Table 1 (rows 4, 5, 7, 8) and Fig. 2 (rows 1 and 2) of the original paper were different from those used in the numerical calculations. Corrected transition probabilities as used in the computations are given in Table 1 and Fig. 1 of this article. The Δ parameter in the stacking model expresses the preference for the fifth layer in a five-layer stack to be eclipsed with respect to the first layer. This statement corrects the original text on p. 1622, lines 4–7. In the original Fig. 2 the helicity of the layer stacks b L and b R in rows 3 and 4 had been given as opposite to those in rows 1, 2 and 5. Fig. 1 of this article shows rows 3 and 4 corrected to correspond to rows 1, 2 and 5.
Online production validation in a HEP environment
NASA Astrophysics Data System (ADS)
Harenberg, T.; Kuhl, T.; Lang, N.; Mättig, P.; Sandhoff, M.; Schwanenberger, C.; Volkmer, F.
2017-03-01
In high energy physics (HEP) event simulations, petabytes of data are processed and stored, requiring millions of CPU-years. This enormous demand for computing resources is handled by centers distributed worldwide, which form part of the LHC computing grid. The consumption of such a large amount of resources demands efficient production of simulations and early detection of potential errors. In this article we present a new monitoring framework for grid environments, which polls a measure of data quality during job execution. This online monitoring facilitates the early detection of configuration errors (especially in simulation parameters), and may thus contribute to significant savings in computing resources.
Chelliah, Kanthasamy; Raman, Ganesh G.; Muehleisen, Ralph T.
2016-07-07
This paper evaluates the performance of various regularization parameter choice methods applied to different approaches of nearfield acoustic holography when a very nearfield measurement is not possible. For a fixed grid resolution, the larger the hologram distance, the larger the error in the naive nearfield acoustic holography reconstructions. These errors can be smoothed out by using an appropriate order of regularization. In conclusion, this study shows that by using a fixed/manual choice of regularization parameter, instead of automated parameter choice methods, reasonably accurate reconstructions can be obtained even when the hologram distance is 16 times larger than the grid resolution.
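The effect of a fixed, manually chosen Tikhonov regularization parameter on an ill-conditioned inversion, of the kind underlying nearfield acoustic holography reconstruction, can be illustrated with a toy linear system. The operator, noise level, and parameter value below are assumptions for illustration, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned forward operator: rapidly decaying singular values,
# mimicking the loss of evanescent information at large hologram distance.
n = 8
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** -np.arange(n, dtype=float)  # singular values 1 ... 1e-7
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
b = A @ x_true + 1e-4 * rng.standard_normal(n)  # noisy "measurement"

# Naive inversion amplifies the noise by 1/sigma_k.
x_naive = np.linalg.solve(A, b)

# Tikhonov with a fixed, manually chosen parameter damps those modes:
# x_reg = argmin ||A x - b||^2 + lam ||x||^2.
lam = 1e-6
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The regularized error is far smaller than the naive one; the study's point is that a well-chosen fixed parameter can play this role even when automated parameter-choice methods struggle.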
Refined numerical solution of the transonic flow past a wedge
NASA Technical Reports Server (NTRS)
Liang, S.-M.; Fung, K.-Y.
1985-01-01
A numerical procedure combining the ideas of solving a modified difference equation and of adaptive mesh refinement is introduced. The numerical solution on a fixed grid is improved by using better approximations of the truncation error computed from local subdomain grid refinements. This technique is used to obtain refined solutions of steady, inviscid, transonic flow past a wedge. The effects of truncation error on the pressure distribution, wave drag, sonic line, and shock position are investigated. By comparing the pressure drag on the wedge and wave drag due to the shocks, a supersonic-to-supersonic shock originating from the wedge shoulder is confirmed.
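The idea of estimating truncation error from a local grid refinement can be sketched in one dimension with a Richardson-style estimate for a central difference; the paper applies the same principle inside a transonic flow solver, so this is only an illustrative analogue:

```python
import math

def central_diff(f, x, h):
    """Second-order central difference approximation of f'(x)."""
    return (f(x + h) - f(x - h)) / (2.0 * h)

def truncation_error_estimate(f, x, h, p=2):
    """Estimate the truncation error of central_diff(f, x, h) by
    comparing against a refined step h/2 (Richardson-style):
    err(h) ~ (D(h) - D(h/2)) * 2**p / (2**p - 1) for a p-th order scheme."""
    d_h, d_h2 = central_diff(f, x, h), central_diff(f, x, h / 2.0)
    return (d_h - d_h2) * 2 ** p / (2 ** p - 1)

x, h = 1.0, 0.1
est = truncation_error_estimate(math.sin, x, h)
actual = central_diff(math.sin, x, h) - math.cos(x)
print(est, actual)
```

The estimate agrees with the actual error to higher order in h, which is what makes it usable as a correction term on the fixed grid.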
Snyder, D.T.; Wilkinson, J.M.; Orzol, L.L.
1996-01-01
A ground-water flow model was used in conjunction with particle tracking to evaluate ground-water vulnerability in Clark County, Washington. Using the particle-tracking program, particles were placed in every cell of the flow model (about 60,000 particles) and tracked backwards in time and space upgradient along flow paths to their recharge points. A new computer program was developed that interfaces the results from a particle-tracking program with a geographic information system (GIS). The GIS was used to display and analyze the particle-tracking results. Ground-water vulnerability was evaluated by selecting parts of the ground-water flow system and combining the results with ancillary information stored in the GIS to determine recharge areas, characteristics of recharge areas, downgradient impact of land use at recharge areas, and age of ground water. Maps of the recharge areas for each hydrogeologic unit illustrate the presence of local, intermediate, or regional ground-water flow systems and emphasize the three-dimensional nature of the ground-water flow system in Clark County. Maps of the recharge points for each hydrogeologic unit were overlaid with maps depicting aquifer sensitivity as determined by DRASTIC (a measure of the pollution potential of ground water, based on the intrinsic characteristics of the near-surface unsaturated and saturated zones) and recharge from on-site waste-disposal systems. A large number of recharge areas were identified, particularly in southern Clark County, that have a high aquifer sensitivity, coincide with areas of recharge from on-site waste-disposal systems, or both. Using the GIS, the characteristics of the recharge areas were related to the downgradient parts of the ground-water system that will eventually receive flow that has recharged through these areas. The aquifer sensitivity, as indicated by DRASTIC, of the recharge areas for downgradient parts of the flow system was mapped for each hydrogeologic unit. 
A number of public-supply wells in Clark County may be receiving a component of water that recharged in areas that are more conducive to contaminant entry. The aquifer sensitivity maps illustrate a critical deficiency in the DRASTIC methodology: the failure to account for the dynamics of the ground-water flow system. DRASTIC indices calculated for a particular location thus do not necessarily reflect the conditions of the ground-water resources at the recharge areas to that particular location. Each hydrogeologic unit was also mapped to highlight those areas that will eventually receive flow from recharge areas with on-site waste-disposal systems. Most public-supply wells in southern Clark County may eventually receive a component of water that was recharged from on-site waste-disposal systems. Travel times from particle tracking were used to estimate the minimum and maximum age of ground water within each model-grid cell. Chlorofluorocarbon (CFC) age dating of ground water from 51 wells was used to calibrate the effective porosity values used in the particle-tracking program, by comparing ground-water ages determined through CFC age dating with those calculated by the particle-tracking program. There was a 76 percent agreement in predicting the presence of modern water in the 51 wells as determined using CFCs and calculated by the particle-tracking program. Maps showing the age of ground water were prepared for all the hydrogeologic units. Areas with the youngest ground-water ages are expected to be at greatest risk for contamination from anthropogenic sources. Comparison of these maps with maps of public-supply wells in Clark County indicates that most of these wells may withdraw ground water that is, in part, less than 100 years old, and in many instances less than 10 years old.
Results of the analysis showed that a single particle-tracking analysis simulating advective transport can be used to evaluate ground-water vulnerability for any part of a ground-wate
Code of Federal Regulations, 2011 CFR
2011-07-01
... County X Buffalo County X Chippewa County X Clark County X Crawford County X Dunn County X Eau Claire... Unclassifiable/Attainment Chippewa County Unclassifiable/Attainment Clark County Unclassifiable/Attainment... Unclassifiable/Attainment Chippewa County Unclassifiable/Attainment Clark County Unclassifiable/Attainment...
Code of Federal Regulations, 2010 CFR
2010-07-01
... County Unclassifiable/Attainment Champaign County Unclassifiable/Attainment Clark County Unclassifiable... intersection of Interstate 71 and Clark Avenue to the intersection of Interstate 77 and Pershing Avenue Rest of... Dayton-Springfield Area: Clark County Attainment Greene County Attainment Miami County Attainment...
Code of Federal Regulations, 2010 CFR
2010-07-01
... County X Buffalo County X Chippewa County X Clark County X Crawford County X Dunn County X Eau Claire... Unclassifiable/Attainment Chippewa County Unclassifiable/Attainment Clark County Unclassifiable/Attainment... Unclassifiable/Attainment Chippewa County Unclassifiable/Attainment Clark County Unclassifiable/Attainment...
NASA Astrophysics Data System (ADS)
Döpking, Sandra; Plaisance, Craig P.; Strobusch, Daniel; Reuter, Karsten; Scheurer, Christoph; Matera, Sebastian
2018-01-01
In the last decade, first-principles-based microkinetic modeling has been developed into an important tool for a mechanistic understanding of heterogeneous catalysis. A commonly known, but hitherto barely analyzed issue in this kind of modeling is the presence of sizable errors from the use of approximate Density Functional Theory (DFT). We here address the propagation of these errors to the catalytic turnover frequency (TOF) by global sensitivity and uncertainty analysis. Both analyses require the numerical quadrature of high-dimensional integrals. To achieve this efficiently, we utilize and extend an adaptive sparse grid approach and exploit the confinement of the strongly non-linear behavior of the TOF to local regions of the parameter space. We demonstrate the methodology on a model of the oxygen evolution reaction at the Co3O4 (110)-A surface, using a maximum entropy error model that imposes nothing but reasonable bounds on the errors. For this setting, the DFT errors lead to an absolute uncertainty of several orders of magnitude in the TOF. We nevertheless find that it is still possible to draw conclusions from such uncertain models about the atomistic aspects controlling the reactivity. A comparison with derivative-based local sensitivity analysis instead reveals that this more established approach provides incomplete information. Since the adaptive sparse grids allow for the evaluation of the integrals with only a modest number of function evaluations, this approach opens the way for a global sensitivity analysis of more complex models, for instance, models based on kinetic Monte Carlo simulations.
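A toy Monte Carlo stand-in for the error-propagation problem (not the paper's adaptive sparse-grid quadrature or maximum-entropy error model): bounded errors on a single Arrhenius-type barrier already spread a turnover frequency over several orders of magnitude. The barrier, prefactor, temperature, and ±0.2 eV error bound below are illustrative assumptions.

```python
import math
import random

random.seed(1)

kB_T = 0.0257  # eV, room temperature (assumed conditions)
E_a = 0.8      # nominal DFT barrier in eV (illustrative value)

def tof(barrier_eV, prefactor=1e13):
    """Toy Arrhenius-type turnover frequency (per second)."""
    return prefactor * math.exp(-barrier_eV / kB_T)

# Bounded, uniform barrier errors of +/-0.2 eV, a common rough bound
# for semi-local DFT, stand in for a principled error model.
samples = [tof(E_a + random.uniform(-0.2, 0.2)) for _ in range(10000)]
spread = math.log10(max(samples) / min(samples))
print(f"TOF uncertainty spans ~{spread:.1f} orders of magnitude")
```

The exponential sensitivity of the rate to the barrier is exactly why the TOF uncertainty reported in the paper reaches several orders of magnitude, and why efficient quadrature over the error distribution matters.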
NASA Astrophysics Data System (ADS)
Morávek, Zdenek; Rickhey, Mark; Hartmann, Matthias; Bogner, Ludwig
2009-08-01
Treatment plans for intensity-modulated proton therapy may be sensitive to several sources of uncertainty. One source is correlated with approximations in the algorithms applied in the treatment planning system, and another depends on how robust the optimization is with regard to intra-fractional tissue movements. The delivered dose distribution may deteriorate substantially from the planned one when systematic errors occur in the dose algorithm. These can affect proton ranges and lead to improper modeling of Bragg peak degradation in heterogeneous structures, of particle scatter, or of nuclear interactions. Additionally, systematic errors influence the optimization process, which leads to the convergence error. Uncertainties with regard to organ movements are related to the robustness of a chosen beam setup to tissue movements during irradiation. We present the inverse Monte Carlo treatment planning system IKO for protons (IKO-P), which minimizes the errors described above to a large extent. Additionally, robust planning is introduced by beam angle optimization according to an objective function that penalizes paths crossing strong longitudinal and transversal tissue heterogeneities. The same score function is applied to optimize spot planning through the selection of a robust set of spots. As spots can be positioned on different energy grids or on geometric grids with different space-filling factors, a variety of grids were used to investigate the influence on the spot-weight distribution resulting from optimization. A tighter distribution of spot weights was assumed to result in a plan more robust to movements. IKO-P is described in detail and demonstrated on a test case as well as a lung cancer case. Different options for spot planning and grid types are evaluated, yielding superior plan quality when dose is delivered to the spots from all beam directions rather than from optimized beam directions alone.
This option shows a tighter spot-weight distribution and should therefore be less sensitive to movements compared to optimized directions. But accepting a slight loss in plan quality, the latter choice could potentially improve robustness even further by accepting only spots from the most proper direction. The choice of a geometric grid instead of an energy grid for spot positioning has only a minor influence on the plan quality, at least for the investigated lung case.
Ghumman, Abul Razzaq; Al-Salamah, Ibrahim Saleh; AlSaleem, Saleem Saleh; Haider, Husnain
2017-02-01
Geomorphological instantaneous unit hydrograph (GIUH) models usually use geomorphologic parameters of the catchment estimated from a digital elevation model (DEM) for rainfall-runoff modeling of ungauged watersheds with limited data. Higher resolutions (e.g., 5 or 10 m) of DEM play an important role in the accuracy of rainfall-runoff models; however, such resolutions are expensive to obtain and require much greater effort and time for preparation of inputs. In this research, a modeling framework is developed to evaluate the impact of lower resolutions (i.e., 30 and 90 m) of DEM on the accuracy of the Clark GIUH model. Observed rainfall-runoff data of a 202-km2 catchment in a semiarid region were used to develop direct runoff hydrographs for nine rainfall events. A geographical information system was used to process both DEMs. Model accuracy and errors were estimated by comparing the model results with the observed data. The study found (i) high model efficiencies, greater than 90%, for both resolutions, and (ii) that the efficiency of the Clark GIUH model does not significantly increase by enhancing the resolution of the DEM from 90 to 30 m. Thus, it is feasible to use lower resolutions (i.e., 90 m) of DEM in the estimation of peak runoff in ungauged catchments with relatively less effort. Through sensitivity analysis (Monte Carlo simulations), the kinematic wave parameter and stream length ratio are found to be the most significant parameters in velocity and peak flow estimations, respectively; thus, they need to be carefully estimated for the calculation of direct runoff in ungauged watersheds using the Clark GIUH model.
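The "model efficiency" reported above is, in rainfall-runoff modeling, conventionally the Nash-Sutcliffe efficiency; assuming that convention (the abstract does not name the metric), it can be computed as follows. The hydrograph values are hypothetical:

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe model efficiency: 1 - SSE / variance of observations.
    1.0 is a perfect fit; values above 0.9 indicate a very good model."""
    mean_obs = sum(observed) / len(observed)
    sse = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    var = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - sse / var

# Hypothetical direct runoff hydrograph (m3/s): observed vs. simulated.
obs = [2.0, 8.5, 14.0, 9.0, 4.5, 2.5]
sim = [2.2, 8.0, 13.1, 9.6, 4.9, 2.4]
print(round(nash_sutcliffe(obs, sim), 3))
```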
Ramstad, K.M.; Woody, C.A.; Sage, G.K.; Allendorf, F.W.
2004-01-01
Bottlenecks can have lasting effects on genetic population structure that obscure patterns of contemporary gene flow and drift. Sockeye salmon are vulnerable to bottleneck effects because they are a highly structured species with excellent colonizing abilities and often occupy geologically young habitats. We describe genetic divergence among and genetic variation within spawning populations of sockeye salmon throughout the Lake Clark area of Alaska. Fin tissue was collected from sockeye salmon representing 15 spawning populations of Lake Clark, Six-mile Lake, and Lake Iliamna. Allele frequencies differed significantly at 11 microsatellite loci in 96 of 105 pairwise population comparisons. Pairwise estimates of FST ranged from zero to 0.089. Six-mile Lake and Lake Clark populations have historically been grouped together for management purposes and are geographically proximate. However, Six-mile Lake populations are genetically similar to Lake Iliamna populations and are divergent from Lake Clark populations. The reduced allelic diversity and strong divergence of Lake Clark populations relative to Six-mile Lake and Lake Iliamna populations suggest a bottleneck associated with the colonization of Lake Clark by sockeye salmon. Geographic distance and spawning habitat differences apparently do not contribute to isolation and divergence among populations. However, temporal isolation based on spawning time and founder effects associated with ongoing glacial retreat and colonization of new spawning habitats contribute to the genetic population structure of Lake Clark sockeye salmon. Nonequilibrium conditions and the strong influence of genetic drift caution against using estimates of divergence to estimate gene flow among populations of Lake Clark sockeye salmon.
NASA Technical Reports Server (NTRS)
Lansing, Faiza S.; Rascoe, Daniel L.
1993-01-01
This paper presents a modified Finite-Difference Time-Domain (FDTD) technique using a generalized conformed orthogonal grid. The use of the Conformed Orthogonal Grid, Finite Difference Time Domain (GFDTD) enables the designer to match all the circuit dimensions, hence eliminating a major source of error in the analysis.
Evaluation of automated global mapping of Reference Soil Groups of WRB2015
NASA Astrophysics Data System (ADS)
Mantel, Stephan; Caspari, Thomas; Kempen, Bas; Schad, Peter; Eberhardt, Einar; Ruiperez Gonzalez, Maria
2017-04-01
SoilGrids is an automated system that provides global predictions for standard numeric soil properties at seven standard depths down to 200 cm, currently at spatial resolutions of 1km and 250m. In addition, the system provides predictions of depth to bedrock and distribution of soil classes based on WRB and USDA Soil Taxonomy (ST). In SoilGrids250m(1), soil classes (WRB, version 2006) consist of the RSG and the first prefix qualifier, whereas in SoilGrids1km(2), the soil class was assessed at RSG level. Automated mapping of World Reference Base (WRB) Reference Soil Groups (RSGs) at a global level has great advantages. Maps can be updated in a short time span with relatively little effort when new data become available. To translate soil names of older versions of FAO/WRB and national classification systems of the source data into names according to WRB 2006, correlation tables are used in SoilGrids. Soil properties and classes are predicted independently from each other. This means that the combinations of soil properties for the same cells or soil property-soil class combinations do not necessarily yield logical combinations when the map layers are studied jointly. The model prediction procedure is robust and is probably a minor source of error in the prediction of RSGs. It seems that the quality of the original soil classification in the data and the use of correlation tables are the largest sources of error in mapping the RSG distribution patterns. Predicted patterns of dominant RSGs were evaluated in selected areas and sources of error were identified. Suggestions are made for improvement of WRB2015 RSG distribution predictions in SoilGrids. Keywords: Automated global mapping; World Reference Base for Soil Resources; Data evaluation; Data quality assurance References 1 Hengl T, de Jesus JM, Heuvelink GBM, Ruiperez Gonzalez M, Kilibarda M, et al. (2016) SoilGrids250m: global gridded soil information based on Machine Learning. 
Earth System Science Data (ESSD), in review. 2 Hengl T, de Jesus JM, MacMillan RA, Batjes NH, Heuvelink GBM, et al. (2014) SoilGrids1km — Global Soil Information Based on Automated Mapping. PLoS ONE 9(8): e105992. doi:10.1371/journal.pone.0105992
Code of Federal Regulations, 2011 CFR
2011-07-01
... County Butte County Campbell County Charles Mix County Clark County Clay County Codington County Corson... County Brule County Buffalo County Butte County Campbell County Charles Mix County Clark County Clay... Charles Mix County Unclassifiable/Attainment Clark County Unclassifiable/Attainment Clay County...
Code of Federal Regulations, 2010 CFR
2010-07-01
... County Butte County Campbell County Charles Mix County Clark County Clay County Codington County Corson... County Brule County Buffalo County Butte County Campbell County Charles Mix County Clark County Clay... Charles Mix County Unclassifiable/Attainment Clark County Unclassifiable/Attainment Clay County...
Sparse grid techniques for particle-in-cell schemes
NASA Astrophysics Data System (ADS)
Ricketson, L. F.; Cerfon, A. J.
2017-02-01
We propose the use of sparse grids to accelerate particle-in-cell (PIC) schemes. By using the so-called ‘combination technique’ from the sparse grids literature, we are able to dramatically increase the size of the spatial cells in multi-dimensional PIC schemes while paying only a slight penalty in grid-based error. The resulting increase in cell size allows us to reduce the statistical noise in the simulation without increasing total particle number. We present initial proof-of-principle results from test cases in two and three dimensions that demonstrate the new scheme’s efficiency, both in terms of computation time and memory usage.
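The memory argument behind the combination technique can be made concrete by counting cells. In 2-D, the technique replaces one isotropic grid of 2^n × 2^n cells with the anisotropic component grids of 2^i × 2^j cells for i + j = n (added) and i + j = n − 1 (subtracted), all of which must be stored. The sketch below only counts cells to illustrate the savings; it is not the PIC scheme itself:

```python
def full_grid_cells(n, dim=2):
    """Cells in an isotropic tensor grid with 2**n cells per dimension."""
    return (2 ** n) ** dim

def combination_grid_cells(n):
    """Total cells over the 2-D combination-technique component grids:
    shapes 2**i x 2**j with i + j = n plus those with i + j = n - 1."""
    level_n = sum(2 ** i * 2 ** (n - i) for i in range(n + 1))
    level_nm1 = sum(2 ** i * 2 ** (n - 1 - i) for i in range(n))
    return level_n + level_nm1

n = 10  # 1024 cells per dimension on the full grid
print(full_grid_cells(n), combination_grid_cells(n))
```

At n = 10 the component grids hold 16,384 cells against 1,048,576 on the full grid, a factor of 64; this is the sense in which spatial cells can be "dramatically" enlarged at fixed resolution.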
Development of Three-Dimensional DRAGON Grid Technology
NASA Technical Reports Server (NTRS)
Zheng, Yao; Liou, Meng-Sing; Civinskas, Kestutis C.
1999-01-01
For a typical three-dimensional flow in a practical engineering device, the time spent in grid generation can take 70 percent of the total analysis effort, resulting in a serious bottleneck in the design/analysis cycle. The present research attempts to develop a procedure that can considerably reduce the grid generation effort. The DRAGON grid, as a hybrid grid, is created by means of a Direct Replacement of Arbitrary Grid Overlapping by Nonstructured grid. The DRAGON grid scheme is an adaptation of the Chimera approach. The Chimera grid is a composite structured grid, composed of a set of overlapped structured grids, which are independently generated and body-fitted. The grid is of high quality and amenable to efficient solution schemes. However, the interpolation used in the overlapped region between grids introduces error, especially when a sharp-gradient region is encountered. The DRAGON grid scheme is capable of completely eliminating the interpolation and preserving the conservation property. It maximizes the advantages of the Chimera scheme and adopts the strengths of the unstructured grid while at the same time keeping its weaknesses minimal. In the present paper, we describe the progress towards extending the DRAGON grid technology into three dimensions. Essential programming aspects of the extension, and new challenges for the three-dimensional cases, are addressed.
Q & A with Ed Tech Leaders: Interview with Clark Aldrich
ERIC Educational Resources Information Center
Shaughnessy, Michael F.; Fulgham, Susan M.
2016-01-01
Clark Aldrich is the founder and Managing Partner of Clark Aldrich Designs, and is known as a global education visionary, industry analyst, and speaker. In this interview, he responds to questions about his ideas, his work, and his theories.
The three-dimensional Multi-Block Advanced Grid Generation System (3DMAGGS)
NASA Technical Reports Server (NTRS)
Alter, Stephen J.; Weilmuenster, Kenneth J.
1993-01-01
As the size and complexity of three-dimensional volume grids increase, there is a growing need for fast and efficient 3D volumetric elliptic grid solvers. Present-day solvers are limited by computational speed and do not combine capabilities such as interior volume grid clustering control, viscous grid clustering at the wall of a configuration, truncation error limiters, and convergence optimization in one code. A new volume grid generator, 3DMAGGS (Three-Dimensional Multi-Block Advanced Grid Generation System), which is based on the 3DGRAPE code, has evolved to meet these needs. This is a manual for the usage of 3DMAGGS and contains five sections, covering the motivations and usage, a GRIDGEN interface, a grid quality analysis tool, a sample case for verifying correct operation of the code, and a comparison to both 3DGRAPE and GRIDGEN3D. Since it was derived from 3DGRAPE, this technical memorandum should be used in conjunction with the 3DGRAPE manual (NASA TM-102224).
Code of Federal Regulations, 2010 CFR
2010-07-01
... Unclassifiable/Attainment Cass County Unclassifiable/Attainment Clark County Unclassifiable/Attainment Clay...: Vanderburgh County Attainment Indianapolis Area: Marion County Attainment Louisville Area: Clark County 10/23... LaPorte CO., IN: LaPorte County 7/19/07 Attainment. Louisville, KY-IN: Clark County. Floyd County July 19...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Unclassifiable/Attainment Cass County Unclassifiable/Attainment Clark County Unclassifiable/Attainment Clay...: Vanderburgh County Attainment Indianapolis Area: Marion County Attainment Louisville Area: Clark County 10/23... LaPorte CO., IN: LaPorte County 7/19/07 Attainment. Louisville, KY-IN: Clark County. Floyd County July 19...
1. VIEW OF HEADQUARTERS OF J. CLARK SALYER NATIONAL WILDLIFE ...
1. VIEW OF HEADQUARTERS OF J. CLARK SALYER NATIONAL WILDLIFE REFUGE, SHOWING PART OF THE POND BEHIND DAM 326, LOOKING SOUTHEAST FROM THE LOOKOUT TOWER - J. Clark Salyer National Wildlife Refuge Dams, Along Lower Souris River, Kramer, Bottineau County, ND
Media Embeds: Balancing Operations Security with Public Need to Know
2009-04-01
Journalism Review, January/February 2002. 2 Torie Clarke, Lipstick on a Pig (New York, N.Y.: Free Press, 2006), 17-24. 3 Richard K. Wright...voice-clarke.asp. 15 Torie Clarke, Lipstick on a Pig (New York, N.Y.: Free Press, 2006), 94. 16 Department of Defense, “Seminar on Coverage of the...20 Torie Clarke, Lipstick on a Pig (New York, N.Y.: Free Press, 2006), 54. 21 Message, 101900Z FEB 03, Department of Defense to Public Affairs, 10
Solving Upwind-Biased Discretizations. 2; Multigrid Solver Using Semicoarsening
NASA Technical Reports Server (NTRS)
Diskin, Boris
1999-01-01
This paper studies a novel multigrid approach to the solution of a second-order upwind-biased discretization of the convection equation in two dimensions. This approach is based on semicoarsening and well-balanced explicit correction terms added to coarse-grid operators to maintain on the coarse grids the same cross-characteristic interaction as on the target (fine) grid. Colored relaxation schemes are used on all the levels, allowing a very efficient parallel implementation. The results of the numerical tests can be summarized as follows: 1) The residual asymptotic convergence rate of the proposed V(0,2) multigrid cycle is about 3 per cycle. This convergence rate far surpasses the theoretical limit (4/3) predicted for standard multigrid algorithms using full coarsening. The reported efficiency does not deteriorate with increasing the cycle depth (number of levels) and/or refining the target-grid mesh spacing. 2) The full multigrid algorithm (FMG) with two V(0,2) cycles on the target grid and just one V(0,2) cycle on all the coarse grids always provides an approximate solution with the algebraic error less than the discretization error. Estimates of the total work in the FMG algorithm range between 18 and 30 minimal work units (depending on the target discretization). Thus, the overall efficiency of the FMG solver closely approaches (if it does not achieve) the goal of textbook multigrid efficiency. 3) A novel approach to deriving a discrete solution approximating the true continuous solution with a relative accuracy given in advance is developed. An adaptive multigrid algorithm (AMA) using comparison of the solutions on two successive target grids to estimate the accuracy of the current target-grid solution is defined. A desired relative accuracy is accepted as an input parameter. The final target grid on which this accuracy can be achieved is chosen automatically in the solution process. The actual relative accuracy of the discrete solution approximation obtained by AMA is always better than the required accuracy, and the computational complexity of the AMA algorithm is (nearly) optimal (comparable with the complexity of the FMG algorithm applied to solve the problem on the optimally spaced target grid).
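The coarse-grid correction idea underlying such cycles can be sketched on a simpler model problem. The following is an illustrative two-grid cycle for the 1-D Poisson equation with damped Jacobi smoothing and standard full coarsening; it is not the semicoarsening scheme with explicit correction terms developed in the paper:

```python
import numpy as np

def residual(u, f, h):
    """Residual of -u'' = f with Dirichlet boundaries u[0] = u[-1] = 0."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2.0 * u[1:-1] - u[2:]) / h**2
    return r

def jacobi(u, f, h, sweeps, omega=2.0 / 3.0):
    """Damped Jacobi relaxation (the smoother), applied in place."""
    for _ in range(sweeps):
        u[1:-1] += omega * (0.5 * (u[:-2] + u[2:] + h**2 * f[1:-1]) - u[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid cycle: pre-smooth, coarse-grid correction, post-smooth."""
    u = jacobi(u, f, h, 2)
    r = residual(u, f, h)
    rc = r[::2].copy()                      # restriction by injection
    nc = rc.size                            # coarse grid has spacing 2h
    # Solve the coarse error equation -e'' = r exactly (direct solve)
    A = (np.diag(np.full(nc - 2, 2.0))
         - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / (2.0 * h) ** 2
    e = np.zeros(nc)
    e[1:-1] = np.linalg.solve(A, rc[1:-1])
    ef = np.zeros_like(u)                   # prolongation: linear interpolation
    ef[::2] = e
    ef[1::2] = 0.5 * (e[:-1] + e[1:])
    u += ef
    return jacobi(u, f, h, 2)

# Model problem: -u'' = pi^2 sin(pi x) on [0, 1], exact solution sin(pi x)
N = 64
x = np.linspace(0.0, 1.0, N + 1)
h = 1.0 / N
f = np.pi**2 * np.sin(np.pi * x)
u = np.zeros(N + 1)
for _ in range(10):
    u = two_grid(u, f, h)
print("max error vs exact:", np.max(np.abs(u - np.sin(np.pi * x))))
```

After a few cycles the algebraic error drops well below the discretization error, which is the stopping criterion the FMG discussion above relies on.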
Gridded Data in the Arctic; Benefits and Perils of Publicly Available Grids
NASA Astrophysics Data System (ADS)
Coakley, B.; Forsberg, R.; Gabbert, R.; Beale, J.; Kenyon, S. C.
2015-12-01
Our understanding of the Arctic Ocean has been hugely advanced by the release of gridded bathymetry and potential field anomaly grids. The Arctic Gravity Project (AGP) grid achieves excellent, near-isotropic coverage of the earth north of 64˚N by combining land, satellite, airborne, submarine, surface ship and ice set-out measurements of gravity anomalies. Since the release of the V 2.0 grid in 2008, there has been extensive icebreaker activity across the Amerasia Basin due to mapping of the Arctic coastal nations' Extended Continental Shelves (ECS). While grid resolution has been steadily improving over time, the addition of higher-resolution and better-navigated data highlights some distortions in the grid that may influence interpretation. In addition to the new ECS data sets, gravity anomaly data have been collected from other vessels, notably the Korean icebreaker Araon, the Japanese icebreaker Mirai and the German icebreaker Polarstern. Also, the GRAV-D project of the US National Geodetic Survey has flown airborne surveys over much of Alaska. These data will be included in the new AGP grid, which will result in a much improved product when version 3.0 is released in 2015. To make use of these measurements, it is necessary to compile them into a continuous spatial representation. Compilation is complicated by differences in survey parameters, gravimeter sensitivity and reduction methods. Cross-over errors are the classic means to assess repeatability of track measurements. Prior to the introduction of near-universal GPS positioning, positional uncertainty was evaluated by cross-over analysis. GPS positions can be treated as more or less true, enabling evaluation of differences due to contrasting sensitivity, reference and reduction techniques. For the most part, cross-over errors for tracks of gravity anomaly data collected since 2008 are less than 0.5 mGal, supporting the compilation of these data with only slight adjustments. 
Given the different platforms used for various Arctic Ocean surveys, registration between bathymetric and gravity anomaly grids cannot be assumed. Inverse methods, which assume co-registration of data, sometimes produce surprising results when well-constrained gravity grid values are inverted against interpolated bathymetry.
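A cross-over error, as used above, is simply the disagreement between two tracks where they intersect, with each track's measurements interpolated along-track to the crossing point. A minimal sketch with two hypothetical straight tracks (all positions and anomaly values invented for illustration):

```python
import numpy as np

# Track 1: east-west line (y = 0), gravity anomaly sampled along x (km, mGal)
x1 = np.array([0.0, 1.0, 2.5, 4.0])
g1 = np.array([12.0, 12.6, 13.4, 14.1])

# Track 2: north-south line (x = 2), gravity anomaly sampled along y (km, mGal)
y2 = np.array([-3.0, -1.0, 0.5, 2.0])
g2 = np.array([13.9, 13.5, 13.1, 12.8])

# The tracks cross at (x, y) = (2, 0); interpolate each track to the crossing
g1_at_xover = np.interp(2.0, x1, g1)   # along-track interpolation, track 1
g2_at_xover = np.interp(0.0, y2, g2)   # along-track interpolation, track 2

xover_error = g1_at_xover - g2_at_xover
print(f"crossover error = {xover_error:+.2f} mGal")
```

A population of such differences under 0.5 mGal, as reported for the post-2008 tracks, indicates the surveys can be merged with only slight adjustments.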
An Improved Neutron Transport Algorithm for HZETRN
NASA Technical Reports Server (NTRS)
Slaba, Tony C.; Blattnig, Steve R.; Clowdsley, Martha S.; Walker, Steven A.; Badavi, Francis F.
2010-01-01
Long term human presence in space requires the inclusion of radiation constraints in mission planning and the design of shielding materials, structures, and vehicles. In this paper, the numerical error associated with energy discretization in HZETRN is addressed. An inadequate numerical integration scheme in the transport algorithm is shown to produce large errors in the low energy portion of the neutron and light ion fluence spectra. It is further shown that the errors result from the narrow energy domain of the neutron elastic cross section spectral distributions, and that an extremely fine energy grid is required to resolve the problem under the current formulation. Two numerical methods are developed to provide adequate resolution in the energy domain and more accurately resolve the neutron elastic interactions. Convergence testing is completed by running the code for various environments and shielding materials with various energy grids to ensure stability of the newly implemented method.
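The resolution problem described above, where narrow cross-section spectral distributions are missed by a coarse energy grid, can be illustrated with simple quadrature of a narrow peak. The distribution below is a hypothetical normalized Gaussian, not an actual neutron cross section:

```python
import numpy as np

def trapezoid(f, grid):
    """Composite trapezoidal rule on an arbitrary energy grid."""
    y = f(grid)
    return float(np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(grid)))

# Hypothetical narrow spectral distribution: a Gaussian of width 0.01 MeV
# centered at 1 MeV, normalized so the true integral over [0, 2] is ~1.
width, center = 0.01, 1.0
f = lambda E: np.exp(-0.5 * ((E - center) / width) ** 2) / (width * np.sqrt(2.0 * np.pi))

coarse = np.linspace(0.0, 2.0, 21)    # 0.1 MeV spacing: cannot resolve the peak
fine = np.linspace(0.0, 2.0, 2001)    # 0.001 MeV spacing: resolves it

print("coarse:", trapezoid(f, coarse), " fine:", trapezoid(f, fine))
```

The coarse grid returns a grossly wrong integral while the fine grid is accurate, which is the motivation for the specialized quadrature methods the paper develops in place of brute-force grid refinement.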
Federal Register 2010, 2011, 2012, 2013, 2014
2012-04-04
... of Public Land in Clark County, NV AGENCY: Bureau of Land Management, Interior. ACTION: Notice of... described contains 480 acres, more or less, in Clark County. The map delineating the proposed sale parcel is...
40 CFR 52.773 - Approval status.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Clark, Elkhart, Floyd, Lake, Marion, Porter, and St. Joseph Counties satisfy all requirements of Part D.... (g) The administrator finds that the total suspended particulate strategies for Clark, Dearborn... the Clean Air Act, as amended in 1977: (1) The transportation control plans for Lake, Porter, Clark...
NREL's Cybersecurity Initiative Aims to Wall Off the Smart Grid from
provided the Energy Department with $4.5 billion to modernize the electric power grid. One key to this possible. As just one example, in typical computer-based communications systems, like the Internet, data is found only one vulnerability, which was due to a misconfigured device. Through just that one error, the
DOE Office of Scientific and Technical Information (OSTI.GOV)
Colella, P.
This review describes a structured approach to adaptivity. The Adaptive Mesh Refinement (AMR) algorithms developed by M. Berger are described, touching on hyperbolic and parabolic applications. Adaptivity is achieved by overlaying finer grids only in areas flagged by a generalized error criterion. The author discusses some of the issues involved in abutting disparate-resolution grids, and demonstrates that suitable algorithms exist for dissipative as well as hyperbolic systems.
NASA Technical Reports Server (NTRS)
Poliner, Jeffrey; Fletcher, Lauren; Klute, Glenn K.
1994-01-01
Video-based motion analysis systems are widely employed to study human movement, using computers to capture, store, process, and analyze video data. These data can be collected in any environment where cameras can be located. One of the NASA facilities where human performance research is conducted is the Weightless Environment Training Facility (WETF), a pool of water that simulates zero gravity through neutral buoyancy. Underwater video collection in the WETF poses some unique problems. This project evaluates the error caused by the lens distortion of the WETF cameras. A grid of points of known dimensions was constructed and videotaped using a video vault underwater system. Recorded images were played back on a VCR and a personal computer grabbed and stored the images on disk. These images were then digitized to give calculated coordinates for the grid points. Errors were calculated as the distance from the known coordinates of the points to the calculated coordinates. It was demonstrated that errors from lens distortion could be as high as 8 percent. By avoiding the outermost regions of a wide-angle lens, the error can be kept smaller.
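The error measure described, the distance from each known grid coordinate to its digitized counterpart, expressed as a percentage of the field of view, can be sketched as follows. The coordinates and the normalization by grid extent are illustrative assumptions, not the study's data or exact method:

```python
import numpy as np

# Known grid-point coordinates (cm) and hypothetical digitized coordinates
known = np.array([[0.0, 0.0], [10.0, 0.0], [20.0, 0.0],
                  [0.0, 10.0], [10.0, 10.0], [20.0, 10.0]])
digitized = np.array([[0.1, -0.2], [10.0, 0.1], [20.4, -0.3],
                      [-0.2, 10.2], [10.1, 9.9], [19.6, 10.4]])

# Error for each point: Euclidean distance between known and digitized
err = np.hypot(*(digitized - known).T)

# Express as a percentage of the grid extent (one plausible normalization)
extent = known.max() - known.min()
pct = 100.0 * err / extent
print(f"max error = {pct.max():.1f}% of the {extent:.0f} cm field")
```

Under this normalization, a distortion-induced displacement of 1.6 cm at the edge of a 20 cm field would correspond to the 8 percent worst case quoted above.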
Predictive Monitoring for Improved Management of Glucose Levels
Reifman, Jaques; Rajaraman, Srinivasan; Gribok, Andrei; Ward, W. Kenneth
2007-01-01
Background: Recent developments and expected near-future improvements in continuous glucose monitoring (CGM) devices provide opportunities to couple them with mathematical forecasting models to produce predictive monitoring systems for early, proactive glycemia management of diabetes mellitus patients before glucose levels drift to undesirable levels. This article assesses the feasibility of data-driven models to serve as the forecasting engine of predictive monitoring systems. Methods: We investigated the capabilities of data-driven autoregressive (AR) models to (1) capture the correlations in glucose time-series data, (2) make accurate predictions as a function of prediction horizon, and (3) be made portable from individual to individual without any need for model tuning. The investigation is performed by employing CGM data from nine type 1 diabetic subjects collected over a continuous 5-day period. Results: With CGM data serving as the gold standard, AR model-based predictions of glucose levels assessed over nine subjects with Clarke error grid analysis indicated that, for a 30-minute prediction horizon, individually tuned models yield 97.6 to 100.0% of data in the clinically acceptable zones A and B, whereas cross-subject, portable models yield 95.8 to 99.7% of data in zones A and B. Conclusions: This study shows that, for a 30-minute prediction horizon, data-driven AR models provide sufficiently accurate and clinically acceptable estimates of glucose levels for timely, proactive therapy and should be considered as the modeling engine for predictive monitoring of patients with type 1 diabetes mellitus. It also suggests that AR models can be made portable from individual to individual with minor performance penalties, while greatly reducing the burden associated with model tuning and data collection for model development. PMID:19885110
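The AR forecasting idea described in this abstract can be sketched minimally: fit AR coefficients by ordinary least squares, then iterate one-step predictions forward to a 30-minute horizon (6 steps at 5-minute CGM sampling). The series below is synthetic and noise-free, and the coefficients are arbitrary; the paper's regularized, portable models are more elaborate:

```python
import numpy as np

def fit_ar(series, order):
    """Least-squares fit of AR coefficients a with x[t] ~ sum_k a[k]*x[t-1-k]."""
    X = np.array([series[t - order:t][::-1] for t in range(order, len(series))])
    y = series[order:]
    a, *_ = np.linalg.lstsq(X, y, rcond=None)
    return a

def predict_ahead(series, a, steps):
    """Iterate the fitted model forward, feeding predictions back in."""
    hist = list(series)
    for _ in range(steps):
        recent = hist[-1:-len(a) - 1:-1]        # x[t-1], x[t-2], ...
        hist.append(float(np.dot(a, recent)))
    return np.array(hist[len(series):])

# Hypothetical mean-subtracted glucose series (mg/dL) following an exact
# AR(2) recursion, one sample per 5 min; coefficients are illustrative only.
x = [15.0, 18.0]
for _ in range(48):
    x.append(1.6 * x[-1] - 0.8 * x[-2])
x = np.array(x)

a = fit_ar(x, order=2)
forecast = predict_ahead(x, a, steps=6)         # 6 steps = 30-min horizon
print("fitted coefficients:", a)
```

On this noise-free series the least-squares fit recovers the generating coefficients, so the 6-step forecast matches the true continuation; with real CGM data the fit is approximate and prediction error grows with horizon.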
Accuracy of continuous glucose monitoring during exercise in type 1 diabetes pregnancy.
Kumareswaran, Kavita; Elleri, Daniela; Allen, Janet M; Caldwell, Karen; Nodale, Marianna; Wilinska, Malgorzata E; Amiel, Stephanie A; Hovorka, Roman; Murphy, Helen R
2013-03-01
Performance of continuous glucose monitors (CGMs) may be lower when glucose levels are changing rapidly, such as occurs during physical activity. Our aim was to evaluate accuracy of a current-generation CGM during moderate-intensity exercise in type 1 diabetes (T1D) pregnancy. As part of a study of 24-h closed-loop insulin delivery in 12 women with T1D (disease duration, 17.6 years; glycosylated hemoglobin, 6.4%) during pregnancy (gestation, 21 weeks), we evaluated the Freestyle Navigator(®) sensor (Abbott Diabetes Care, Alameda, CA) during afternoon (15:00-18:00 h) and morning (09:30-12:30 h) exercise (55 min of brisk walking on a treadmill followed by a 2-h recovery), compared with sedentary conditions (18:00-09:00 h). Plasma (reference) glucose, measured at regular 15-30-min intervals with the YSI Ltd. (Fleet, United Kingdom) model YSI 2300 analyzer, was used to assess CGM performance. Sensor accuracy, as indicated by the relative absolute difference (RAD) between paired sensor and reference glucose values, was lower during exercise than at rest (median RAD, 18.4% vs. 11.8%; P<0.001). These differences remained significant when correcting for the plasma glucose relative rate of change (P<0.001). Analysis by glucose range showed lower accuracy during hypoglycemia for both sedentary (median RAD, 24.4%) and exercise (median RAD, 32.1%) conditions. Using Clarke error grid analysis, 96% of CGM values were clinically safe under resting conditions compared with only 87% during exercise. Compared with sedentary conditions, accuracy of the Freestyle Navigator CGM was lower during moderate-intensity exercise in pregnant women with T1D. This difference was particularly marked in hypoglycemia and could not be solely explained by the glucose rate of change associated with physical activity.
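The "clinically safe" classification used in this and the surrounding abstracts comes from the Clarke error grid. Zone A (clinically accurate) admits a paired reading when the sensor is within 20% of the reference, or when both values are in the hypoglycemic range below 70 mg/dL; the remaining zones (B-E) have piecewise boundaries not reproduced here. A minimal Zone A check:

```python
def in_clarke_zone_a(reference, sensor):
    """Zone A of the Clarke error grid (clinically accurate readings).
    A paired reading is in Zone A if the sensor is within 20% of the
    reference, or if both values are below 70 mg/dL."""
    if reference < 70 and sensor < 70:
        return True
    return abs(sensor - reference) <= 0.2 * reference

# Hypothetical (reference, sensor) pairs in mg/dL
pairs = [(100, 115), (180, 150), (60, 55), (200, 130)]
for ref, sens in pairs:
    print(ref, sens, in_clarke_zone_a(ref, sens))
```

Percentages like "96% clinically safe" are then just the fraction of paired readings falling in the acceptable zones (A, or A+B depending on the study).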
Assessment of three frequently used blood glucose monitoring devices in clinical routine.
Zueger, Thomas; Schuler, Vanessa; Stettler, Christoph; Diem, Peter; Christ, Emanuel R
2012-07-12
Self-monitoring of blood glucose plays an important role in the management of diabetes and has been shown to improve metabolic control. The use of blood glucose meters in clinical practice requires sufficient reliability to allow adequate treatment. Direct comparison of different blood glucose meters in clinical practice, independent of the manufacturers, is scarce. We, therefore, aimed to evaluate three frequently used blood glucose meters in daily clinical practice. Capillary blood glucose was measured simultaneously using the following glucose meters: Contour® (Bayer Diabetes Care, Zürich, Switzerland), Accu-Chek® aviva (Roche Diagnostics, Rotkreuz, Switzerland), Free-Style® lite (Abbott Diabetes Care, Baar, Switzerland). The reference method consisted of the HemoCue® Glucose 201+ System (HemoCue® AB, Ängelholm, Sweden) with plasma conversion. The devices were assessed by comparison of the Mean Absolute Relative Differences (MARD), the Clarke Error Grid Analysis (EGA) and compliance with the International Organization for Standardization criteria (ISO 15197:2003). Capillary blood samples were obtained from 150 patients. MARD was 10.1 ± 0.65%, 7.0 ± 0.62% and 7.8 ± 0.48% for Contour®, Accu-Chek® and Free-Style®, respectively. EGA showed 99.3% (Contour®), 98.7% (Accu-Chek®) and 100% (Free-Style®) of all measurements in zones A and B (clinically acceptable). The ISO criteria were fulfilled by Accu-Chek® (95.3%) and Free-Style® (96%), but not by Contour® (92%). In the present study the three glucose meters provided good agreement with the reference and reliable results in daily clinical routine. Overall, the Free-Style® and Accu-Chek® devices slightly outperformed the Contour® device.
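The two headline metrics of this study are straightforward to compute. MARD is the mean of the absolute meter-to-reference differences relative to the reference, and ISO 15197:2003 requires at least 95% of results within ±15 mg/dL of the reference below 75 mg/dL and within ±20% at or above 75 mg/dL. A sketch with hypothetical paired readings:

```python
import numpy as np

def mard(reference, meter):
    """Mean absolute relative difference, in percent."""
    reference = np.asarray(reference, dtype=float)
    meter = np.asarray(meter, dtype=float)
    return 100.0 * np.mean(np.abs(meter - reference) / reference)

def iso_15197_2003_pass(reference, meter):
    """ISO 15197:2003 accuracy criterion: >= 95% of results within
    +/-15 mg/dL of the reference below 75 mg/dL, and within +/-20%
    of the reference at or above 75 mg/dL."""
    reference = np.asarray(reference, dtype=float)
    meter = np.asarray(meter, dtype=float)
    err = np.abs(meter - reference)
    ok = np.where(reference < 75, err <= 15.0, err <= 0.2 * reference)
    return bool(np.mean(ok) >= 0.95)

# Hypothetical paired readings (mg/dL), not the study's data
ref = np.array([65.0, 90.0, 120.0, 160.0, 220.0])
meter = np.array([72.0, 95.0, 110.0, 150.0, 250.0])
print(f"MARD = {mard(ref, meter):.1f}%, ISO pass: {iso_15197_2003_pass(ref, meter)}")
```

In practice the pass/fail decision is made over a large sample (150 patients here), so the 95% threshold is a population criterion, not a per-reading one.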
Leelarathna, Lalantha; English, Shane W; Thabit, Hood; Caldwell, Karen; Allen, Janet M; Kumareswaran, Kavita; Wilinska, Malgorzata E; Nodale, Marianna; Haidar, Ahmad; Evans, Mark L; Burnstein, Rowan; Hovorka, Roman
2014-02-01
Accurate real-time continuous glucose measurements may improve glucose control in the critical care unit. We evaluated the accuracy of the FreeStyle(®) Navigator(®) (Abbott Diabetes Care, Alameda, CA) subcutaneous continuous glucose monitoring (CGM) device in critically ill adults using two methods of calibration. In a randomized trial, paired CGM and reference glucose (hourly arterial blood glucose [ABG]) were collected over a 48-h period from 24 adults with critical illness (mean±SD age, 60±14 years; mean±SD body mass index, 29.6±9.3 kg/m(2); mean±SD Acute Physiology and Chronic Health Evaluation score, 12±4 [range, 6-19]) and hyperglycemia. In 12 subjects, the CGM device was calibrated at variable intervals of 1-6 h using ABG. In the other 12 subjects, the sensor was calibrated according to the manufacturer's instructions (1, 2, 10, and 24 h) using arterial blood and the built-in point-of-care glucometer. In total, 1,060 CGM-ABG pairs were analyzed over the glucose range from 4.3 to 18.8 mmol/L. Using enhanced calibration median (interquartile range) every 169 (122-213) min, the absolute relative deviation was lower (7.0% [3.5, 13.0] vs. 12.8% [6.3, 21.8], P<0.001), and the percentage of points in the Clarke error grid Zone A was higher (87.8% vs. 70.2%). Accuracy of the Navigator CGM device during critical illness was comparable to that observed in non-critical care settings. Further significant improvements in accuracy may be obtained by frequent calibrations with ABG measurements.
McAuley, Sybil A; Dang, Tri T; Horsburgh, Jodie C; Bansal, Anubhuti; Ward, Glenn M; Aroyan, Sarkis; Jenkins, Alicia J; MacIsaac, Richard J; Shah, Rajiv V; O'Neal, David N
2016-05-01
Orthogonal redundancy for glucose sensing (multiple sensing elements utilizing distinct methodologies) may enhance performance compared to nonredundant sensors, and to sensors with multiple elements utilizing the same technology (simple redundancy). We compared the performance of a prototype orthogonal redundant sensor (ORS) combining optical fluorescence and redundant electrochemical sensing via a single insertion platform to an electrochemical simple redundant sensor (SRS). Twenty-one adults with type 1 diabetes wore an ORS and an SRS concurrently for 7 days. Following sensor insertion, and on Day 4 with a standardized meal, frequent venous samples were collected for reference glucose measurement (laboratory [YSI] and meter) over 3 and 4 hours, respectively. Between study visits reference capillary blood glucose testing was undertaken. Sensor data were processed prospectively. ORS mean absolute relative difference (MARD) was (mean ± SD) 10.5 ± 13.2% versus SRS 11.0 ± 10.4% (P = .34). ORS values in Clarke error grid zones A and A+B were 88.1% and 97.6%, respectively, versus SRS 86.4% and 97.8%, respectively (P = .23 and P = .84). ORS Day 1 MARD (10.7 ± 10.7%) was superior to SRS (16.5 ± 13.4%; P < .0001), and comparable to ORS MARD for the week. ORS sensor survival (time-averaged mean) was 92.1% versus SRS 74.4% (P = .10). ORS display time (96.0 ± 5.8%) was equivalent to SRS (95.6 ± 8.9%; P = .87). Combining simple and orthogonal sensor redundancy via a single insertion is feasible, with accuracy comparing favorably to current generation nonredundant sensors. Addition of an optical component potentially improves sensor reliability compared to electrochemical sensing alone. Further improvement in optical sensing performance is required prior to clinical application. © 2016 Diabetes Technology Society.
FIRST Robotics, Gulfport High, StenniSphere, Bo Clarke, mentor
NASA Technical Reports Server (NTRS)
2006-01-01
Bo Clarke, mentor for Gulfport High School's Team Fusion, offers strategy tips to students and coaches during the FIRST Robotics Competition kickoff held at StenniSphere on Jan. 7. Clarke is the lead building and infrastructure specialist for NASA's Shared Services Center at Stennis Space Center.
Code of Federal Regulations, 2011 CFR
2011-07-01
... classified Better than national standards (Township Range): Clark County: Las Vegas Valley (212)(15-24S, 56... County refers to 27 hydrographic areas either entirely or partially located within Clark County as shown... (September 1971), excluding the two designated areas in Clark County specifically listed in the table. Nevada...
Code of Federal Regulations, 2011 CFR
2011-07-01
... County X Cherokee County X Cheyenne County X Clark County X Clay County X Cloud County X Coffey County X... Chautauqua County X Cherokee County X Cheyenne County X Clark County X Clay County X Cloud County X Coffey.../Attainment Cherokee County Unclassifiable/Attainment Cheyenne County Unclassifiable/Attainment Clark County...
Bringing Organisations and Systems Back Together: Extending Clark's Entrepreneurial University
ERIC Educational Resources Information Center
Rhoades, Gary; Stensaker, Bjørn
2017-01-01
Burton R. Clark's 1998 book, "Creating Entrepreneurial Universities," has had a major impact on the field of higher education, especially internationally. In this paper, key aspects of Clark's conceptualisation of organisational pathways of transformation are identified, speaking to its theoretical and empirical contributions to higher…
Code of Federal Regulations, 2010 CFR
2010-07-01
... County X Cherokee County X Cheyenne County X Clark County X Clay County X Cloud County X Coffey County X... Chautauqua County X Cherokee County X Cheyenne County X Clark County X Clay County X Cloud County X Coffey.../Attainment Cherokee County Unclassifiable/Attainment Cheyenne County Unclassifiable/Attainment Clark County...
Cache-site selection in Clark's Nutcracker (Nucifraga columbiana)
Teresa J. Lorenz; Kimberly A. Sullivan; Amanda V. Bakian; Carol A. Aubry
2011-01-01
Clark's Nutcracker (Nucifraga columbiana) is one of the most specialized scatter-hoarding birds, considered a seed disperser for four species of pines (Pinus spp.) as well as an obligate coevolved mutualist of whitebark pine (P. albicaulis). Cache-site selection has not been formally studied in Clark...
Munyon, Charles N; Koubeissi, Mohamad Z; Syed, Tanvir U; Lüders, Hans O; Miller, Jonathan P
2013-01-01
Frame-based stereotaxy and open craniotomy may seem mutually exclusive, but invasive electrophysiological monitoring can require broad sampling of the cortex and precise targeting of deeper structures. The purpose of this study is to describe simultaneous frame-based insertion of depth electrodes and craniotomy for placement of subdural grids through a single surgical field and to determine the accuracy of depth electrodes placed using this technique. A total of 6 patients with intractable epilepsy underwent placement of a stereotactic frame with the center of the planned cranial flap equidistant from the fixation posts. After volumetric imaging, craniotomy for placement of subdural grids was performed. Depth electrodes were placed using frame-based stereotaxy. Postoperative CT determined the accuracy of electrode placement. A total of 31 depth electrodes were placed. Mean distance of distal electrode contact from the target was 1.0 ± 0.15 mm. Error was correlated to distance to target, with an additional 0.35 mm error for each centimeter (r = 0.635, p < 0.001); when corrected, there was no difference in accuracy based on target structure or method of placement (prior to craniotomy vs. through grid, p = 0.23). The described technique for craniotomy through a stereotactic frame allows placement of subdural grids and depth electrodes without sacrificing the accuracy of a frame or requiring staged procedures.
Bouda, Martin; Caplan, Joshua S.; Saiers, James E.
2016-01-01
Fractal dimension (FD), estimated by box-counting, is a metric used to characterize plant anatomical complexity or space-filling characteristic for a variety of purposes. The vast majority of published studies fail to evaluate the assumption of statistical self-similarity, which underpins the validity of the procedure. The box-counting procedure is also subject to error arising from arbitrary grid placement, known as quantization error (QE), which is strictly positive and varies as a function of scale, making it problematic for the procedure's slope estimation step. Previous studies either ignore QE or employ inefficient brute-force grid translations to reduce it. The goals of this study were to characterize the effect of QE due to translation and rotation on FD estimates, to provide an efficient method of reducing QE, and to evaluate the assumption of statistical self-similarity of coarse root datasets typical of those used in recent trait studies. Coarse root systems of 36 shrubs were digitized in 3D and subjected to box-counts. A pattern search algorithm was used to minimize QE by optimizing grid placement and its efficiency was compared to the brute force method. The degree of statistical self-similarity was evaluated using linear regression residuals and local slope estimates. QE, due to both grid position and orientation, was a significant source of error in FD estimates, but pattern search provided an efficient means of minimizing it. Pattern search had higher initial computational cost but converged on lower error values more efficiently than the commonly employed brute force method. Our representations of coarse root system digitizations did not exhibit details over a sufficient range of scales to be considered statistically self-similar and informatively approximated as fractals, suggesting a lack of sufficient ramification of the coarse root systems for reiteration to be thought of as a dominant force in their development. 
FD estimates did not characterize the scaling of our digitizations well: the scaling exponent was a function of scale. Our findings serve as a caution against applying FD under the assumption of statistical self-similarity without rigorously evaluating it first. PMID:26925073
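The quantization error described above can be demonstrated with a minimal 2-D box count: the same point set yields different counts at the same scale depending on grid offset, and translating the grid to minimize the count (the idea behind both the brute-force and pattern-search approaches) reduces this error. The point set and offsets are hypothetical; this is not the authors' pattern-search implementation.

```python
# Minimal 2-D box-counting sketch illustrating quantization error (QE):
# the box count for the same point set varies with grid offset, so the
# slope of log(count) vs log(1/size) depends on grid placement.
import itertools

def box_count(points, size, offset=(0.0, 0.0)):
    """Number of grid boxes of side `size`, shifted by `offset`, that contain points."""
    boxes = {(int((x - offset[0]) // size), int((y - offset[1]) // size))
             for x, y in points}
    return len(boxes)

points = [(0.05, 0.05), (0.10, 0.10), (0.95, 0.95)]
# The same data give different counts at the same scale for different offsets:
counts = [box_count(points, 0.5, (dx, 0.0)) for dx in (0.0, 0.07)]
# Since QE is strictly positive, the minimum count over candidate grid
# placements is the best estimate at this scale.
best = min(box_count(points, 0.5, (dx, dy))
           for dx, dy in itertools.product((0.0, 0.1, 0.2), repeat=2))
print(counts, best)
```

In the full procedure this minimization is repeated at every scale before the slope of log(count) against log(1/size) is estimated.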
NASA Astrophysics Data System (ADS)
Valle, G.; Dell'Omodarme, M.; Prada Moroni, P. G.; Degl'Innocenti, S.
2018-01-01
Aims: We aim to perform a theoretical evaluation of the impact of the uncertainty in mass loss on asteroseismic grid-based estimates of masses, radii, and ages of stars in the red giant branch (RGB) phase. Methods: We adopted the SCEPtER pipeline on a grid spanning the mass range [0.8; 1.8] M⊙. As observational constraints, we adopted the stellar effective temperatures, the metallicity [Fe/H], the average large frequency spacing Δν, and the frequency of maximum oscillation power νmax. The mass loss was modelled following a Reimers parametrization with the two different efficiencies η = 0.4 and η = 0.8. Results: In the RGB phase, the average random relative error (owing only to observational uncertainty) on mass and age estimates is about 8% and 30%, respectively. The bias in mass and age estimates caused by the adoption of a wrong mass loss parameter in the recovery is minor for the vast majority of the RGB evolution. The biases get larger only after the RGB bump. In the last 2.5% of the RGB lifetime the error on the mass determination reaches 6.5%, becoming larger than the random error component in this evolutionary phase. The error on the age estimate amounts to 9%, that is, equal to the random uncertainty. These results are independent of the stellar metallicity [Fe/H] in the explored range. Conclusions: Asteroseismic-based estimates of stellar mass, radius, and age in the RGB phase can be considered mass loss independent within the range η ∈ [0.0, 0.8] as long as the target is in an evolutionary phase preceding the RGB bump.
NASA Astrophysics Data System (ADS)
Greenough, J. A.; Rider, W. J.
2004-05-01
A numerical study is undertaken comparing a fifth-order version of the weighted essentially non-oscillatory (WENO5) method to a modern piecewise-linear, second-order version of Godunov's method (PLMDE) for the compressible Euler equations. A series of one-dimensional test problems is examined, beginning with classical linear problems and ending with complex shock interactions. The problems considered are: (1) linear advection of a Gaussian pulse in density, (2) Sod's shock tube problem, (3) the "peak" shock tube problem, (4) a version of the Shu and Osher shock-entropy wave interaction and (5) the Woodward and Colella interacting shock wave problem. For each problem and method, run times, density error norms and convergence rates are reported, as produced from a common code test-bed. The linear problem exhibits the advertised convergence rate for both methods as well as the expected large disparity in overall error levels; WENO5 has the smaller errors and an enormous advantage in overall efficiency (in accuracy per unit CPU time). For the nonlinear problems with discontinuities, however, we generally see first-order self-convergence of error, measured against either an exact solution or, when an analytic solution is not available, a converged solution generated on an extremely fine grid. The overall comparison of error levels shows some variation from problem to problem. For Sod's shock tube, PLMDE has nearly half the error, while on the peak problem the errors are nearly the same. For the interacting blast wave problem the two methods again produce a similar level of error, with a slight edge for PLMDE. On the other hand, for the Shu-Osher problem, the errors are similar on the coarser grids but favor WENO5 by a factor of nearly 1.5 on the finer grids used. In all cases, holding mesh resolution constant, PLMDE is less costly in terms of CPU time by approximately a factor of 6. 
If the CPU cost is taken as fixed, that is run times are equal for both numerical methods, then PLMDE uniformly produces lower errors than WENO for the fixed computation cost on the test problems considered here.
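The convergence rates reported in studies like this one are typically computed from error norms on successively refined grids, as rate = log2(E_h / E_{h/2}) for a refinement factor of 2. A sketch with hypothetical L1 density errors for a smooth case (formally 5th order) and a discontinuous case (first order):

```python
# Self-convergence rate from error norms on two grids differing by a
# refinement factor: rate = log(E_coarse / E_fine) / log(refinement).
# The error values are hypothetical, chosen to illustrate 5th- and
# 1st-order behaviour.
import math

def convergence_rate(e_coarse, e_fine, refinement=2.0):
    return math.log(e_coarse / e_fine) / math.log(refinement)

smooth = convergence_rate(3.2e-6, 1.0e-7)   # smooth linear advection
shock = convergence_rate(4.0e-3, 2.0e-3)    # solution with discontinuities
print(round(smooth, 1), round(shock, 1))
```

This is why the nonlinear problems above self-converge at first order regardless of the scheme's formal accuracy: the error at a discontinuity dominates the norm.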
Design of Energy Storage Management System Based on FPGA in Micro-Grid
NASA Astrophysics Data System (ADS)
Liang, Yafeng; Wang, Yanping; Han, Dexiao
2018-01-01
The energy storage system is the core of stable operation of a smart micro-grid. To address the shortcomings of existing energy storage management systems in the micro-grid, such as low fault tolerance and a tendency to cause fluctuations in the micro-grid, a new intelligent battery management system based on a field-programmable gate array (FPGA) is proposed, taking advantage of the FPGA to combine the battery management system with the intelligent micro-grid control strategy. Because neural-network estimation of the battery state of charge suffers from inaccurate initialization of weights and thresholds, which leads to large errors in the prediction results, a genetic algorithm is proposed to optimize the neural network, and experimental simulations are carried out. The experimental results show that the algorithm has high precision and provides a guarantee for the stable operation of the micro-grid.
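The genetic-algorithm idea can be sketched in miniature: a population of candidate weights evolves by selection, crossover, and mutation to minimize prediction error. Here a single linear neuron is fitted to hypothetical state-of-charge (SOC) samples; in the system described above, the evolved weights would instead initialize a full neural network before training.

```python
# Toy genetic algorithm searching for weights (w, b) of a one-neuron model
# y = w*x + b that minimize mean squared error on hypothetical SOC data.
# Illustration of the selection/crossover/mutation loop only; not the
# authors' FPGA implementation.
import random

random.seed(0)

xs = [0.0, 0.25, 0.5, 0.75, 1.0]   # hypothetical input feature
ys = [0.1, 0.3, 0.5, 0.7, 0.9]     # hypothetical SOC targets (y = 0.8x + 0.1)

def mse(w, b):
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

pop = [(random.uniform(-1, 1), random.uniform(-1, 1)) for _ in range(20)]
for _ in range(200):                           # generations
    pop.sort(key=lambda ind: mse(*ind))
    parents = pop[:10]                         # selection: keep the fittest half
    children = []
    for _ in range(10):
        (w1, b1), (w2, b2) = random.sample(parents, 2)
        w, b = (w1 + w2) / 2, (b1 + b2) / 2    # crossover: average parents
        w += random.gauss(0, 0.05)             # mutation
        b += random.gauss(0, 0.05)
        children.append((w, b))
    pop = parents + children                   # elitism: parents survive

best = min(pop, key=lambda ind: mse(*ind))
print(round(mse(*best), 4))
```

Keeping the parents in the population (elitism) guarantees the best solution found so far is never lost between generations.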
Code of Federal Regulations, 2011 CFR
2011-10-01
..., Wallowa, Wasco; the following counties in Washington: Asotin, Benton, Clark, Columbia, Cowlitz, Franklin..., Union, Wallowa, Wasco; the following counties in Washington: Asotin, Benton, Clark, Columbia, Cowlitz... in Washington: Adams, Asotin, Benton, Clark, Columbia, Cowlitz, Franklin, Garfield, Klickitat...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-05-27
... DEPARTMENT OF COMMERCE Foreign-Trade Zones Board [Docket 37-2011] Foreign-Trade Zone 170--Clark County, IN; Application for Reorganization (Expansion of Service Area) Under Alternative Site Framework... includes Jackson, Washington, Harrison, Floyd, Clark and Scott Counties, Indiana. The applicant is now...
2. VIEW, LOOKING EAST, SHOWING J. CLARK SALYER NATIONAL WILDLIFE ...
2. VIEW, LOOKING EAST, SHOWING J. CLARK SALYER NATIONAL WILDLIFE REFUGE, JUST EAST OF WESTHOPE, NORTH DAKOTA (THE NORTH END OF THE REFUGE JUST SOUTH OF DAM 357 AND THE CANADIAN BORDER) - J. Clark Salyer National Wildlife Refuge Dams, Along Lower Souris River, Kramer, Bottineau County, ND
78 FR 54269 - Lake Clark National Park Subsistence Resource Commission; Meetings
Federal Register 2010, 2011, 2012, 2013, 2014
2013-09-03
... DEPARTMENT OF THE INTERIOR National Park Service [NPS-AKR-LACL-DTS-13687; PPAKAKROR4; PPMPRLE1Y.LS0000] Lake Clark National Park Subsistence Resource Commission; Meetings AGENCY: National Park Service...- 463, 86 Stat. 770), the National Park Service (NPS) is hereby giving notice that the Lake Clark...
Improving mobile robot localization: grid-based approach
NASA Astrophysics Data System (ADS)
Yan, Junchi
2012-02-01
Autonomous mobile robots have been widely studied not only as advanced platforms for industrial and daily-life automation, but also as a testbed in robotics competitions for extending the frontier of current artificial intelligence. In many such contests, the robot is supposed to navigate on a ground with a grid layout. Based on this observation, we present a localization error correction method that exploits the geometric features of the tile patterns. On top of classical inertia-based positioning, our approach employs three fiber-optic sensors mounted on the underside of the robot in an equilateral-triangle layout. The sensor apparatus, together with the proposed supporting algorithm, is designed to detect a line's direction (vertical or horizontal) by monitoring grid-crossing events. As a result, the line coordinate information can be fused to correct the cumulative localization deviation from inertial positioning. The proposed method is analyzed theoretically in terms of its error bound and has been implemented and tested on a custom-developed two-wheel autonomous mobile robot.
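The core correction step can be sketched simply: when a floor sensor detects a grid-line crossing, the coordinate perpendicular to that line must lie on a multiple of the tile spacing, so the drifting dead-reckoned coordinate is snapped to the nearest grid line. The tile spacing and positions below are assumed for illustration; the paper's three-sensor line-direction detection is not reproduced.

```python
# When a grid-line crossing is sensed, replace the drifting dead-reckoned
# coordinate with the nearest grid-line coordinate, removing accumulated
# inertial drift in that axis. Spacing and positions are hypothetical.
TILE = 0.30  # grid spacing in metres (assumed)

def snap_to_line(coord, spacing=TILE):
    """Nearest grid-line coordinate to a drifting position estimate."""
    return round(coord / spacing) * spacing

# Dead reckoning says x = 0.93 m at the moment a vertical-line crossing is
# sensed; the nearest line is at 0.90 m, so ~3 cm of drift is removed.
x_est = snap_to_line(0.93)
print(x_est)
```

This only works while the accumulated drift stays below half the tile spacing; beyond that, the snap would lock onto the wrong line, which is why the correction must be applied at every crossing.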
NASA Astrophysics Data System (ADS)
Flinders, Ashton F.; Mayer, Larry A.; Calder, Brian A.; Armstrong, Andrew A.
2014-05-01
We document a new high-resolution multibeam bathymetry compilation for the Canada Basin and Chukchi Borderland in the Arctic Ocean - United States Arctic Multibeam Compilation (USAMBC Version 1.0). The compilation preserves the highest native resolution of the bathymetric data, allowing for more detailed interpretation of seafloor morphology than has been previously possible. The compilation was created from multibeam bathymetry data available through openly accessible government and academic repositories. Much of the new data was collected during dedicated mapping cruises in support of the United States effort to map extended continental shelf regions beyond the 200 nm Exclusive Economic Zone. Data quality was evaluated using nadir-beam crossover-error statistics, making it possible to assess the precision of multibeam depth soundings collected from a wide range of vessels and sonar systems. Data were compiled into a single high-resolution grid through a vertical stacking method, preserving the highest quality data source in any specific grid cell. The crossover-error analysis and method of data compilation can be applied to other multi-source multibeam data sets, and is particularly useful for government agencies targeting extended continental shelf regions but with limited hydrographic capabilities. Both the gridded compilation and an easily distributed geospatial PDF map are freely available through the University of New Hampshire's Center for Coastal and Ocean Mapping (ccom.unh.edu/theme/law-sea). The geospatial PDF is a full-resolution, small-file-size product that supports interpretation of Arctic seafloor morphology without the need for specialized gridding/visualization software.
This document may be of assistance in applying the Title V air operating permit regulations. This document is part of the Title V Policy and Guidance Database available at www2.epa.gov/title-v-operating-permits/title-v-operating-permit-policy-and-guidance-document-index. Some documents in the database are a scanned or retyped version of a paper photocopy of the original. Although we have taken considerable effort to quality assure the documents, some may contain typographical errors. Contact the office that issued the document if you need a copy of the original.
Grid sensitivity for aerodynamic optimization and flow analysis
NASA Technical Reports Server (NTRS)
Sadrehaghighi, I.; Tiwari, S. N.
1993-01-01
After reviewing the relevant literature, it is apparent that one aspect of aerodynamic sensitivity analysis, namely grid sensitivity, has not been investigated extensively. The grid sensitivity algorithms in most of these studies are based on structural design models. Such models, although sufficient for preliminary or conceptual design, are not acceptable for detailed design analysis. Careless grid sensitivity evaluations would introduce gradient errors within the sensitivity module, thereby corrupting the overall optimization process. Development of an efficient and reliable grid sensitivity module, with special emphasis on aerodynamic applications, therefore appears essential. The organization of this study is as follows. The physical and geometric representations of a typical model are derived in chapter 2. The grid generation algorithm and boundary grid distribution are developed in chapter 3. Chapter 4 discusses the theoretical formulation and the aerodynamic sensitivity equation. The method of solution is provided in chapter 5. The results are presented and discussed in chapter 6. Finally, some concluding remarks are provided in chapter 7.
NASA Astrophysics Data System (ADS)
Eiserloh, Arthur J.; Chiao, Sen
2015-02-01
This study investigated a slow-moving long-wave trough that brought four Atmospheric River (AR) "episodes" within a week to the U.S. West Coast from 28 November to 3 December 2012, delivering over 500 mm of rain to some coastal locations. The highest 6- and 12-hourly rainfall rates (131 and 195 mm, respectively) over northern California occurred during Episode 2 along the windward slopes of the coastal Santa Lucia Mountains. Surface observations from NOAA's Hydrometeorological Testbed sites in California and available GPS Radio Occultation (RO) vertical profiles from the Constellation Observing System for Meteorology Ionosphere and Climate (COSMIC) satellite mission were both assimilated into WRF-ARW via eight combinations of observation nudging, grid nudging, and 3DVAR to improve the upstream moisture characteristics and quantitative precipitation forecast (QPF) during this event. Results during the 6-hourly rainfall maximum period in Episode 2 revealed that the models underestimated the observed 6-hourly rainfall rate maximum on the windward slopes of the Santa Lucia mountain range. The grid-nudging experiments smoothed out finer mesoscale details in the inner domain that may affect the final QPFs. Overall, the experiments that did not use grid nudging were more accurate in terms of a lower mean absolute error. In the time evolution of the accumulated rainfall forecast, the observation-nudging experiment that included RAOB and COSMIC GPS RO data demonstrated the least error for the north central Coastal Range, and the 3DVAR cold-start experiment demonstrated the least error for the windward Sierra Nevada. The experiment that combined 3DVAR cold start, observation nudging, and grid nudging showed the most error in the rainfall forecasts. 
Results from this study further suggest that including surface observations at frequencies less than 3 h for observation nudging and having cycling intervals less than 3 h for 3DVAR cycling would be more beneficial for short-to-medium range mesoscale QPFs during high-impact AR events over northern California.
NASA Astrophysics Data System (ADS)
Hiebl, Johann; Frei, Christoph
2018-04-01
Spatial precipitation datasets that are long-term consistent, highly resolved and extend over several decades are an increasingly popular basis for modelling and monitoring environmental processes and planning tasks in hydrology, agriculture, energy resources management, etc. Here, we present a grid dataset of daily precipitation for Austria meant to promote such applications. It has a grid spacing of 1 km, extends back to 1961 and is continuously updated. It is constructed with the classical two-tier analysis, involving separate interpolations for mean monthly precipitation and daily relative anomalies. The former was accomplished by kriging with topographic predictors as external drift utilising 1249 stations. The latter is based on angular distance weighting and uses 523 stations. The input station network was kept largely stationary over time to avoid artefacts on long-term consistency. Example cases suggest that the new analysis is at least as plausible as previously existing datasets. Cross-validation and comparison against experimental high-resolution observations (WegenerNet) suggest that the accuracy of the dataset depends on interpretation. Users interpreting grid point values as point estimates must expect systematic overestimates for light and underestimates for heavy precipitation, as well as substantial random errors. Grid point estimates are typically within a factor of 1.5 from in situ observations. When grid point values are interpreted as area means, conditional biases are reduced and the magnitude of random errors is considerably smaller. Together with a similar dataset of temperature, the new dataset (SPARTACUS) is an interesting basis for modelling environmental processes, studying climate change impacts and monitoring the climate of Austria.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Woodward, D.F.; Brumbaugh, W.G.; DeLonay, A.J.
1994-01-01
The upper Clark Fork River in northwestern Montana has received mining wastes from the Butte and Anaconda areas since 1880. These wastes have contaminated areas of the river bed and floodplain with tailings and heavy metal sludge, resulting in elevated concentration of metals in surface water, sediments, and biota. Rainbow trout Oncorhynchus mykiss were exposed immediately after hatching for 91 d to cadmium, copper, lead, and zinc in water at concentrations simulating those in Clark Fork River. From exogenous feeding (21 d posthatch) through 91 d, fry were also fed benthic invertebrates from the Clark Fork River that contained elevated concentrations of arsenic, cadmium, copper, and lead. Evaluations of different combinations of diet and water exposure indicated diet-borne metals were more important than water-borne metals - at the concentrations we tested - in reducing survival and growth of rainbow trout. Whole-body metal concentrations ([mu]g/g, wet weight) at 91 d in fish fed Clark Fork invertebrates without exposure to Clark Fork water were arsenic, 1.4; cadmium, 0.16; and copper, 6.7. These were similar to concentrations found in Clark Fork River fishes. Livers from fish on the high-metals diets exhibited degenerative changes and generally lacked glycogen vacuolation. Indigenous Clark Fork River invertebrates provide a concentrated source of metals for accumulation into young fishes, and probably were the cause of decreased survival and growth of age-0 rainbow trout in our laboratory exposures. 30 refs., 8 figs., 4 tabs.
New developments in spatial interpolation methods of Sea-Level Anomalies in the Mediterranean Sea
NASA Astrophysics Data System (ADS)
Troupin, Charles; Barth, Alexander; Beckers, Jean-Marie; Pascual, Ananda
2014-05-01
The gridding of along-track Sea-Level Anomalies (SLA) measured by a constellation of satellites has numerous applications in oceanography, such as model validation, data assimilation or eddy tracking. Optimal Interpolation (OI) is often the preferred method for this task, as it leads to the lowest expected error and provides an error field associated with the analysed field. However, the numerical cost of the method may limit its utilization in situations where the number of data points is significant. Furthermore, the separation of non-adjacent regions with OI requires adaptation of the code, leading to a further increase of the numerical cost. To solve these issues, the Data-Interpolating Variational Analysis (DIVA), a technique designed to produce gridded fields from sparse in situ measurements, is applied to SLA data in the Mediterranean Sea. DIVA and OI have been shown to be equivalent (provided some assumptions on the covariances are made). The main difference lies in the covariance function, which is not explicitly formulated in DIVA. The particular spatial and temporal distributions of the measurements required adaptations of the software tool (data format, parameter determination, ...). These adaptations are presented in the poster. The daily analysed and error fields obtained with this technique are compared with available products such as the gridded field from the Archiving, Validation and Interpretation of Satellite Oceanographic data (AVISO) data server. The comparison reveals an overall good agreement between the products. The time evolution of the mean error field evidences the need for a large number of simultaneous altimetry satellites: in periods during which 4 satellites are available, the mean error is on the order of 17.5%, while when only 2 satellites are available, the error exceeds 25%. Finally, we propose the use of sea currents to improve the results of the interpolation, especially in coastal areas. 
These currents can be constructed from the bathymetry or extracted from a HF radar located in the Balearic Sea.
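The OI machinery referred to above can be sketched for one grid point and two observations: the weights solve (C_dd + R) w = c_gd, and the by-product error field comes from c_gg - c_gd · w. Gaussian covariances with an assumed length scale and toy numbers; DIVA itself does not form these covariances explicitly.

```python
# Minimal optimal-interpolation (OI) sketch: one grid point, two
# observations. Weights solve (C_dd + R) w = c_gd; the expected relative
# analysis-error variance is c_gg - c_gd . w. All numbers are hypothetical.
import math

L = 100.0                    # correlation length scale, km (assumed)

def cov(d):                  # background covariance vs. separation d (km)
    return math.exp(-(d / L) ** 2)

r = 0.1                      # observation-error variance (assumed)
d_obs = 80.0                 # separation between the two observations
d1, d2 = 30.0, 60.0          # grid-point-to-observation distances

# Solve the 2x2 system (C_dd + R) w = c_gd by Cramer's rule.
a11 = a22 = cov(0.0) + r
a12 = cov(d_obs)
c1, c2 = cov(d1), cov(d2)
det = a11 * a22 - a12 * a12
w1 = (c1 * a22 - c2 * a12) / det
w2 = (a11 * c2 - a12 * c1) / det

anomalies = (0.12, 0.05)     # observed SLA anomalies, metres (hypothetical)
analysis = w1 * anomalies[0] + w2 * anomalies[1]
expected_error = cov(0.0) - (w1 * c1 + w2 * c2)
print(round(analysis, 4), round(expected_error, 3))
```

The cost issue mentioned in the abstract is visible here: with N observations the system becomes N x N, and solving it for every analysis is what makes OI expensive when N is large.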
Patrick L. Zimmerman; Greg C. Liknes
2010-01-01
Dot grids are often used to estimate the proportion of land cover belonging to some class in an aerial photograph. Interpreter misclassification is an often-ignored source of error in dot-grid sampling that has the potential to significantly bias proportion estimates. For the case when the true class of items is unknown, we present a maximum-likelihood estimator of...
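When the per-dot misclassification rates are known, the bias in a dot-grid proportion can be inverted with the classical misclassification correction, offered here as an illustration rather than the authors' exact maximum-likelihood estimator (which addresses the case where the true class is unknown). The sensitivity/specificity values are hypothetical.

```python
# Correct an observed dot-grid proportion for interpreter error, assuming
# known rates sens = P(called class | truly class) and
# spec = P(called other | truly other). Illustrative values only.
def corrected_proportion(p_observed, sens, spec):
    """Invert E[p_obs] = p*sens + (1-p)*(1-spec), clamped to [0, 1]."""
    p = (p_observed - (1.0 - spec)) / (sens + spec - 1.0)
    return min(1.0, max(0.0, p))

# 100-dot grid: 46 dots called forest; interpreter 90% sensitive, 95% specific.
print(corrected_proportion(0.46, 0.90, 0.95))
```

Without the clamp, low observed proportions can yield negative estimates whenever the false-positive rate exceeds the observed rate, which is why the estimate is truncated to the unit interval.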
33 CFR 117.899 - Youngs Bay and Lewis and Clark River.
Code of Federal Regulations, 2014 CFR
2014-07-01
... SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Oregon § 117.899 Youngs Bay and... blast. (b) The draw of the Oregon State (Old Youngs Bay) highway bridge, mile 2.4, across Youngs Bay... of the Oregon State (Lewis and Clark River) highway bridge, mile 1.0, across the Lewis and Clark...
33 CFR 117.899 - Youngs Bay and Lewis and Clark River.
Code of Federal Regulations, 2012 CFR
2012-07-01
... SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Oregon § 117.899 Youngs Bay and... blast. (b) The draw of the Oregon State (Old Youngs Bay) highway bridge, mile 2.4, across Youngs Bay... of the Oregon State (Lewis and Clark River) highway bridge, mile 1.0, across the Lewis and Clark...
33 CFR 117.899 - Youngs Bay and Lewis and Clark River.
Code of Federal Regulations, 2013 CFR
2013-07-01
... SECURITY BRIDGES DRAWBRIDGE OPERATION REGULATIONS Specific Requirements Oregon § 117.899 Youngs Bay and... blast. (b) The draw of the Oregon State (Old Youngs Bay) highway bridge, mile 2.4, across Youngs Bay... of the Oregon State (Lewis and Clark River) highway bridge, mile 1.0, across the Lewis and Clark...
Clark's Triangle and Fiscal Incentives: Implications for Colleges'
ERIC Educational Resources Information Center
Lang, Dan
2015-01-01
For nearly 35 years, Burton Clark's triangle has been used as a paradigm for describing, assessing, and comparing systems of postsecondary education (Clark, 1998, 2004). Two major developments in the fiscal management of post-secondary education occurred more or less contemporaneously: incentive or performance funding on the part of the state and…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-01-07
... DEPARTMENT OF TRANSPORTATION Surface Transportation Board [STB Docket No. AB-55 (Sub-No. 698X)] CSX Transportation, Inc.--Discontinuance of Service Exemption--in Clark, Floyd, Lawrence, Orange, and... milepost 00Q 251.7, near Bedford, and milepost 00Q 314.0, near New Albany, in Clark, Floyd, Lawrence...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-14
... Environmental Impact Statement for the Proposed Replacement General Aviation Airport, Mesquite, Clark County, NV... Environmental Impact Statement (EIS) for a proposed Replacement General Aviation (GA) Airport in Mesquite, Clark... General Aviation (GA) Airport, for the City of Mesquite in eastern Clark County, Nevada. The City [[Page...
Code of Federal Regulations, 2011 CFR
2011-07-01
... Cassia Clark Franklin Fremont Jefferson Madison Oneida Power Teton Montana Beaverhead Broadwater Cascade Deer Lodge Flathead Gallatin Granite Jefferson Lake Lewis and Clark Madison Meagher Missoula Park... Sanpete Sevier Summit Tooele Utah Wasatch Washington Wayne Weber Washington Chelan Clallam Clark Cowlitz...
Code of Federal Regulations, 2010 CFR
2010-07-01
... Cassia Clark Franklin Fremont Jefferson Madison Oneida Power Teton Montana Beaverhead Broadwater Cascade Deer Lodge Flathead Gallatin Granite Jefferson Lake Lewis and Clark Madison Meagher Missoula Park... Sanpete Sevier Summit Tooele Utah Wasatch Washington Wayne Weber Washington Chelan Clallam Clark Cowlitz...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-06-10
... Impact Statement, Including a Draft Programmatic Agreement, for the Clark, Lincoln, and White Pine...), which is included as an Appendix to the EIS, for the Southern Nevada Water Authority's (SNWA) Clark...--Central Nevada Regional Water Authority, White Pine, Lincoln, and Clark counties (NV); and Juab, Millard...
Burton Clark's "The Higher Education System: Academic Organization in Cross-National Perspective"
ERIC Educational Resources Information Center
Brennan, John
2010-01-01
In "The Higher Education System", Burton Clark provides a model for the organisational analysis of higher education institutions and systems. Central to the model are the concepts of knowledge, beliefs and authority. In particular, Clark examines how different interest groups both inside and outside the university shape and subvert the…
Federal Register 2010, 2011, 2012, 2013, 2014
2010-11-03
... National Emission Standards for Hazardous Air Pollutants for Source Categories; State of Nevada; Clark... pollutants (NESHAP) to Clark County, Nevada. DATES: Any comments on this proposal must arrive by December 3...: This proposal concerns the delegation of unchanged NESHAP to Clark County, Nevada. In the Rules and...
Publications - PIR 2011-1 | Alaska Division of Geological & Geophysical
DGGS PIR 2011-1. Title: Reconnaissance evaluation of the Lake Clark fault. Koehler, R.D., and Reger, R.D., 2011, Reconnaissance evaluation of the Lake Clark fault, Tyonek area. Keywords: Cook Inlet; Glacial Stratigraphy; Lake Clark Fault; Neotectonics; STATEMAP Project
Fred Clarke and the Internationalisation of Studies and Research in Education
ERIC Educational Resources Information Center
McCulloch, Gary
2014-01-01
Fred Clarke (1880-1952) was a key figure in the internationalisation of educational studies and research in the first half of the twentieth century. Clarke aimed to heighten the ideals and develop the practices of educational studies and research through promoting mutual influences in different countries around the world. He envisaged the…
ERIC Educational Resources Information Center
Hendel, Darwin D.; Harrold, Roger
2007-01-01
Unrest in the early 1970s stimulated a need to understand undergraduates' motivations. The Clark-Trow Typology (Clark & Trow, 1966) examined student behavior (i.e., academic, collegiate, vocational, and non-conformist) according to identification with the institution and involvement with ideas. The Student Interest Survey included questions…
ERIC Educational Resources Information Center
Olsen, Ken
2006-01-01
Writer and historian Bernard DeVoto observed more than 50 years ago that a dismaying amount of American history has been written without regards to the Indians. Such disregard is glaring in many mainstream stories of Meriwether Lewis and William Clark. Lewis and Clark began preparing for their historic journey in 1803 and officially launched the…
Kenneth B. Clark in the Patterns of American Culture.
ERIC Educational Resources Information Center
Keppel, Ben
2002-01-01
Discusses how three books written for the general public by African American social scientist, Kenneth B. Clark, document his growing pessimism about the prospects for improving race relations in the United States. Also considers Clark's place in contemporary U.S. debates on Brown v. Board of Education and the persistence of racial inequality. (SM)
2003-10-28
KENNEDY SPACE CENTER, FLA. -- Dr. Jonathan Clark, husband of STS-107 astronaut Laurel Clark, addresses the family members of the STS-107 astronauts, other dignitaries, members of the university community and the public gathered for the dedication ceremony of the Columbia Village at the Florida Institute of Technology in Melbourne, Fla. Each of the seven new residence halls in the complex is named for one of the STS-107 astronauts who perished during the Columbia accident -- Rick Husband, Willie McCool, Laurel Clark, Michael Anderson, David Brown, Kalpana Chawla, and Ilan Ramon.
Clark's nutcracker spatial memory: the importance of large, structural cues.
Bednekoff, Peter A; Balda, Russell P
2014-02-01
Clark's nutcrackers, Nucifraga columbiana, cache and recover stored seeds in high alpine areas, including areas where snowfall, wind, and rockslides may frequently obscure or alter cues near the cache site. Previous work in the laboratory has established that Clark's nutcrackers use spatial memory to relocate cached food. Following from aspects of this work, we performed experiments to test the importance of large, structural cues for Clark's nutcracker spatial memory. Birds were no more accurate in recovering caches when more objects were placed on the floor of a large experimental room, nor when the room was subdivided with a set of panels. However, nutcrackers were consistently less accurate in this large room than in a small experimental room. Clark's nutcrackers probably use structural features of experimental rooms as important landmarks during recovery of cached food. This use of large, extremely stable cues may reflect the imperfect reliability of smaller, closer cues in the natural habitat of Clark's nutcrackers. This article is part of a Special Issue entitled: CO3 2013. Copyright © 2013 Elsevier B.V. All rights reserved.
A new statistic to express the uncertainty of kriging predictions for purposes of survey planning.
NASA Astrophysics Data System (ADS)
Lark, R. M.; Lapworth, D. J.
2014-05-01
It is well known that one advantage of kriging for spatial prediction is that, given the random effects model, the prediction error variance can be computed a priori for alternative sampling designs. This allows one to compare sampling schemes, in particular sampling at different densities, and so to decide on one that meets requirements in terms of the uncertainty of the resulting predictions. However, the planning of sampling schemes must account not only for statistical considerations but also for logistics and cost. This requires effective communication between statisticians, soil scientists, and data users/sponsors such as managers, regulators, or civil servants. In our experience the latter parties are not necessarily able to interpret the prediction error variance as a measure of uncertainty for decision making. In some contexts (particularly the solution of very specific problems at large cartographic scales, e.g. site remediation and precision farming) it is possible to translate uncertainty of predictions into a loss function directly comparable with the cost incurred in increasing precision. Often, however, sampling must be planned for more generic purposes (e.g. baseline or exploratory geochemical surveys). In this latter context the prediction error variance may be of limited value to a non-statistician who has to make a decision on sample intensity and associated cost. We propose an alternative criterion for these circumstances to aid communication between statisticians and data users about the uncertainty of geostatistical surveys based on different sampling intensities. The criterion is the consistency of estimates made from two non-coincident instantiations of a proposed sample design. We consider square sample grids; one instantiation is offset from the second by half the grid spacing along the rows and along the columns.
If a sample grid is coarse relative to the important scales of variation in the target property, then the consistency of predictions from the two instantiations is expected to be small, and it can be increased by reducing the grid spacing. The measure of consistency is the correlation between estimates from the two instantiations of the sample grid, averaged over a grid cell. We call this the offset correlation; it can be calculated from the variogram. We propose that this measure is easier to grasp intuitively than the prediction error variance, and it has the advantage of an upper bound (1.0), which will aid its interpretation. This quality measure is illustrated for some hypothetical examples, considering both ordinary kriging and factorial kriging of the variable of interest. It is also illustrated using data on metal concentrations in the soil of north-east England.
NASA Astrophysics Data System (ADS)
Schwing, Alan Michael
For computational fluid dynamics, the governing equations are solved on a discretized domain of nodes, faces, and cells. The quality of the grid or mesh can be a driving source of error in the results. While refinement studies can help guide the creation of a mesh, grid quality is largely determined by user expertise and understanding of the flow physics. Adaptive mesh refinement is a technique for enriching the mesh during a simulation based on metrics for error, impact on important parameters, or location of important flow features. This can offload from the user some of the difficult and ambiguous decisions necessary when discretizing the domain. This work explores the implementation of adaptive mesh refinement in an implicit, unstructured, finite-volume solver. Consideration is made for applying modern computational techniques in the presence of hanging nodes and refined cells. The approach is developed to be independent of the flow solver in order to provide a path for augmenting existing codes. It is designed to be applicable to unsteady simulations, and refinement and coarsening of the grid do not impact the conservatism of the underlying numerics. The effects on high-order numerical fluxes of fourth and sixth order are explored. Provided the criterion for refinement is appropriately selected, solutions obtained using adapted meshes have no additional error when compared to results obtained on traditional, unadapted meshes. In order to leverage the large-scale computational resources common today, the methods are parallelized using MPI. Parallel performance is considered for several test problems in order to assess the scalability of both adapted and unadapted grids. Dynamic repartitioning of the mesh during refinement is crucial for load balancing an evolving grid. Development of the methods outlined here depends on a dual-memory approach that is described in detail.
Validation of the solver developed here against a number of motivating problems shows favorable comparisons across a range of regimes. Unsteady and steady applications are considered in both subsonic and supersonic flows. Inviscid and viscous simulations achieve similar results at a much reduced cost when employing dynamic mesh adaptation. Several techniques for guiding adaptation are compared. Detailed analysis of statistics from the instrumented solver enables understanding of the costs associated with adaptation. Adaptive mesh refinement shows promise for the test cases presented here. It can be considerably faster than using conventional grids and provides accurate results. The procedures for adapting the grid are lightweight enough not to require significant computational time and yield significant reductions in grid size.
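The basic bookkeeping behind adaptive mesh refinement can be sketched generically. The example below is hypothetical (it is not the dissertation's solver; the names `Cell`, `adapt`, and the error indicator are invented for illustration): leaf cells of a quadtree refine wherever an indicator exceeds a tolerance, up to a maximum level, so resolution concentrates around a flow feature.

```python
from dataclasses import dataclass, field

@dataclass
class Cell:
    x: float
    y: float
    size: float
    level: int
    children: list = field(default_factory=list)

    def refine(self):
        # Split into four equal children (quadtree refinement).
        h = self.size / 2.0
        self.children = [Cell(self.x + i * h, self.y + j * h, h, self.level + 1)
                         for i in (0, 1) for j in (0, 1)]

def adapt(cell, indicator, tol, max_level=4):
    # Recursively refine leaf cells whose error indicator exceeds tol.
    if cell.children:
        for c in cell.children:
            adapt(c, indicator, tol, max_level)
    elif cell.level < max_level and indicator(cell) > tol:
        cell.refine()
        for c in cell.children:
            adapt(c, indicator, tol, max_level)

def leaves(cell):
    if not cell.children:
        return [cell]
    return [leaf for c in cell.children for leaf in leaves(c)]

# Refine toward a feature (a "shock") along the line x = 0.5; the
# indicator is largest for big cells straddling the feature.
root = Cell(0.0, 0.0, 1.0, 0)
indicator = lambda c: c.size if abs(c.x + c.size / 2.0 - 0.5) < c.size else 0.0
adapt(root, indicator, tol=0.05)
print(len(leaves(root)))   # leaf cells cluster near x = 0.5
```

A production solver adds what this sketch omits: hanging-node constraints at coarse/fine interfaces, conservative flux matching, coarsening, and dynamic repartitioning for parallel load balance.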
The USNO 26'' Clark Refractor; From Visual Observations to Speckle Interferometry
NASA Astrophysics Data System (ADS)
Bartlett, Jennifer L.; Mason, B. D.; Hartkopf, W. I.
2011-01-01
Before addressing queries about how and what to preserve among astronomical devices, the question of what constitutes a historic instrument must be considered. Certainly, the lenses are the defining feature of a Clark refractor. Since 1867, when Newcomb inquired about the possibility of obtaining a great glass from Alvan Clark & Sons, the U.S. Naval Observatory 26-in (66-cm) equatorial has evolved in response to improvements in technology and changes in its observing program. After two major overhauls, only the objective remains of the equipment originally installed by the Clarks in 1873 at the old Observatory site in Foggy Bottom. However, the telescope retains its reputation as a historic Clark refractor. The USNO telescope was briefly renowned as the largest refractor in the world, the second of five such achievements by the Clarks. Through it, Hall first detected the moons of Mars in 1877. However, by that time, the Clarks had already refigured the flint glass. Hall and Gardiner had also altered the drive mechanism. When the USNO moved to its present Georgetown Heights location in 1893, the great equatorial was refurbished with its original Clark optics installed on a more robust Warner & Swasey mount. Peters eventually incorporated discarded parts from the original mounting into his photographic telescopes during the first half of the 20th century. The 26'' refractor underwent further modernization in the early 1960s to facilitate the xy-slide of a Hertzsprung-style photographic double star camera. In 1965, the objective was disassembled for cleaning and reassembled with new spacers. The most recent maintenance included re-wiring and replacing several motors and the hand paddles. Originally designed as a visual instrument, the USNO 26'' Clark refractor now hosts a speckle interferometer for its current double star program. Despite continuing modifications, this telescope remains a fine example of the optician's art.
Dodge, Kent A.; Hornberger, Michelle I.; Dyke, Jessica
2009-01-01
Water, bed sediment, and biota were sampled in streams from Butte to near Missoula as part of a long-term monitoring program in the upper Clark Fork basin; additional water samples were collected in the Clark Fork basin from sites near Missoula downstream to near the confluence of the Clark Fork and Flathead River as part of a supplemental sampling program. The sampling programs were conducted in cooperation with the U.S. Environmental Protection Agency to characterize aquatic resources in the Clark Fork basin of western Montana, with emphasis on trace elements associated with historic mining and smelting activities. Sampling sites were located on the Clark Fork and selected tributaries. Water samples were collected periodically at 23 sites from October 2007 through September 2008. Bed-sediment and biota samples were collected once at 13 sites during August 2008. This report presents the analytical results and quality assurance data for water-quality, bed-sediment, and biota samples collected at all long-term and supplemental monitoring sites from October 2007 through September 2008. Water-quality data include concentrations of selected major ions, trace elements, and suspended sediment. Turbidity was analyzed for water samples collected at sites where seasonal daily values of turbidity were being determined and at Clark Fork above Missoula. Nutrients also were analyzed at all the supplemental water-quality sites, except for Clark Fork Bypass, near Bonner. Daily values of suspended-sediment concentration and suspended-sediment discharge were determined for four sites, and seasonal daily values of turbidity were determined for four sites. Bed-sediment data include trace-element concentrations in the fine-grained fraction. Biological data include trace-element concentrations in whole-body tissue of aquatic benthic insects. 
Statistical summaries of long-term water-quality, bed-sediment, and biological data for sites in the upper Clark Fork basin are provided for the period of record since 1985.
Dodge, Kent A.; Hornberger, Michelle I.; Dyke, Jessica
2010-01-01
Water, bed sediment, and biota were sampled in streams from Butte to near Missoula, Montana, as part of a long-term monitoring program in the upper Clark Fork basin; additional water samples were collected in the Clark Fork basin from sites near Missoula downstream to near the confluence of the Clark Fork and Flathead River as part of a supplemental sampling program. The sampling programs were conducted by the U.S. Geological Survey in cooperation with the U.S. Environmental Protection Agency to characterize aquatic resources in the Clark Fork basin of western Montana, with emphasis on trace elements associated with historic mining and smelting activities. Sampling sites were located on the Clark Fork and selected tributaries. Water samples were collected periodically at 24 sites from October 2008 through September 2009. Bed-sediment and biota samples were collected once at 13 sites during August 2009. This report presents the analytical results and quality-assurance data for water-quality, bed-sediment, and biota samples collected at all long-term and supplemental monitoring sites from October 2008 through September 2009. Water-quality data include concentrations of selected major ions, trace elements, and suspended sediment. Turbidity was analyzed for water samples collected at the four sites where seasonal daily values of turbidity were being determined as well as at Clark Fork above Missoula. Nutrients also were analyzed at all the supplemental water-quality sites, except for Clark Fork Bypass, near Bonner. Daily values of suspended-sediment concentration and suspended-sediment discharge were determined for four sites. Bed-sediment data include trace-element concentrations in the fine-grained fraction. Biological data include trace-element concentrations in whole-body tissue of aquatic benthic insects. 
Statistical summaries of long-term water-quality, bed-sediment, and biological data for sites in the upper Clark Fork basin are provided for the period of record since 1985.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Guba, O.; Taylor, M. A.; Ullrich, P. A.
2014-11-27
We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid-scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications in which long-term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open-source alternative which produces lower-valence nodes.
Guba, O.; Taylor, M. A.; Ullrich, P. A.; ...
2014-06-25
We evaluate the performance of the Community Atmosphere Model's (CAM) spectral element method on variable-resolution grids using the shallow-water equations in spherical geometry. We configure the method as it is used in CAM, with dissipation of grid-scale variance implemented using hyperviscosity. Hyperviscosity is highly scale selective and grid independent, but does require a resolution-dependent coefficient. For the spectral element method with variable-resolution grids and highly distorted elements, we obtain the best results if we introduce a tensor-based hyperviscosity with tensor coefficients tied to the eigenvalues of the local element metric tensor. The tensor hyperviscosity is constructed so that, for regions of uniform resolution, it matches the traditional constant-coefficient hyperviscosity. With the tensor hyperviscosity, the large-scale solution is almost completely unaffected by the presence of grid refinement. This latter point is important for climate applications where long-term climatological averages can be imprinted by stationary inhomogeneities in the truncation error. We also evaluate the robustness of the approach with respect to grid quality by considering unstructured conforming quadrilateral grids generated with a well-known grid-generating toolkit and grids generated by SQuadGen, a new open-source alternative which produces lower-valence nodes.
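The scale selectivity of hyperviscosity can be seen in a minimal constant-coefficient sketch (a hypothetical 1-D periodic example, not the CAM implementation): the operator -ν∇⁴, built by applying a discrete Laplacian twice, damps a grid-scale mode far more strongly per time step than a large-scale mode.

```python
import numpy as np

n, dx, nu, dt = 64, 1.0, 0.05, 1.0
x = np.arange(n) * dx

def laplacian(u):
    # Second-order periodic finite-difference Laplacian
    return (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx**2

def step(u):
    # One forward-Euler hyperviscosity update: u <- u - dt * nu * lap(lap(u))
    return u - dt * nu * laplacian(laplacian(u))

def damping(k_cycles):
    # Per-step amplitude factor for a cosine mode with k_cycles over the domain
    u = np.cos(2.0 * np.pi * k_cycles * x / (n * dx))
    return np.abs(step(u)).max() / np.abs(u).max()

print(damping(2), damping(32))   # large-scale mode vs grid-scale mode
```

The grid-scale mode (32 cycles on 64 points) loses 80% of its amplitude in one step, while the large-scale mode is damped by less than 0.01%; on a variable-resolution grid the coefficient ν must vary with local resolution, which is what the tensor formulation in the abstract provides.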
Aeroacoustic Simulations of a Nose Landing Gear with FUN3D: A Grid Refinement Study
NASA Technical Reports Server (NTRS)
Vatsa, Veer N.; Khorrami, Mehdi R.; Lockard, David P.
2017-01-01
A systematic grid refinement study is presented for numerical simulations of a partially-dressed, cavity-closed (PDCC) nose landing gear configuration that was tested in the University of Florida's open-jet acoustic facility known as the UFAFF. The unstructured-grid flow solver FUN3D is used to compute the unsteady flow field for this configuration. Mixed-element grids generated using the Pointwise (Registered Trademark) grid generation software are used for numerical simulations. Particular care is taken to ensure quality cells and proper resolution in critical areas of interest in an effort to minimize errors introduced by numerical artifacts. A set of grids was generated in this manner to create a family of uniformly refined grids. The finest grid was then modified to coarsen the wall-normal spacing to create a grid suitable for the wall-function implementation in FUN3D code. A hybrid Reynolds-averaged Navier-Stokes/large eddy simulation (RANS/LES) turbulence modeling approach is used for these simulations. Time-averaged and instantaneous solutions obtained on these grids are compared with the measured data. These CFD solutions are used as input to a Ffowcs Williams-Hawkings (FW-H) noise propagation code to compute the farfield noise levels. The agreement of the computed results with the experimental data improves as the grid is refined.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2016-01-01
This manual describes the installation and execution of FUN3D version 12.9, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2017-01-01
This manual describes the installation and execution of FUN3D version 13.2, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.;
2015-01-01
This manual describes the installation and execution of FUN3D version 12.6, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2015-01-01
This manual describes the installation and execution of FUN3D version 12.7, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, William L.; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.;
2014-01-01
This manual describes the installation and execution of FUN3D version 12.5, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2015-01-01
This manual describes the installation and execution of FUN3D version 12.8, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.; Rumsey, Christopher L.;
2014-01-01
This manual describes the installation and execution of FUN3D version 12.4, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2017-01-01
This manual describes the installation and execution of FUN3D version 13.1, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bill; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2016-01-01
This manual describes the installation and execution of FUN3D version 13.0, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
NASA Technical Reports Server (NTRS)
Biedron, Robert T.; Carlson, Jan-Renee; Derlaga, Joseph M.; Gnoffo, Peter A.; Hammond, Dana P.; Jones, William T.; Kleb, Bil; Lee-Rausch, Elizabeth M.; Nielsen, Eric J.; Park, Michael A.;
2018-01-01
This manual describes the installation and execution of FUN3D version 13.3, including optional dependent packages. FUN3D is a suite of computational fluid dynamics simulation and design tools that uses mixed-element unstructured grids in a large number of formats, including structured multiblock and overset grid systems. A discretely-exact adjoint solver enables efficient gradient-based design and grid adaptation to reduce estimated discretization error. FUN3D is available with and without a reacting, real-gas capability. This generic gas option is available only for those persons that qualify for its beta release status.
ERIC Educational Resources Information Center
Bond, William Glenn
2012-01-01
In this paper, I propose to demonstrate a means of error estimation preprocessing in the assembly of overlapping aerial image mosaics. The mosaic program automatically assembles several hundred aerial images from a data set by aligning them, via image registration using a pattern search method, onto a GIS grid. The method presented first locates…
NASTRAN maintenance and enhancement experiences
NASA Technical Reports Server (NTRS)
Schmitz, R. P.
1975-01-01
The current capability is described, which includes isoparametric elements, optimization of grid-point sequencing, and an eigenvalue routine. Overlay and coding errors were corrected for the cyclic symmetry, transient response, and differential stiffness rigid formats. Error corrections and program enhancements are discussed, along with developments scheduled for the current year and a brief description of analyses being performed using the program.
Relative and Absolute Error Control in a Finite-Difference Method Solution of Poisson's Equation
ERIC Educational Resources Information Center
Prentice, J. S. C.
2012-01-01
An algorithm for error control (absolute and relative) in the five-point finite-difference method applied to Poisson's equation is described. The algorithm is based on discretization of the domain of the problem by means of three rectilinear grids, each of different resolution. We discuss some hardware limitations associated with the algorithm,…
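The multi-grid error-control idea can be illustrated generically. The sketch below is a hypothetical example, not the paper's algorithm: it solves Poisson's equation with the standard five-point scheme on three nested grids and uses a manufactured solution to expose the O(h²) convergence on which grid-comparison error estimates rely.

```python
import numpy as np

def solve_poisson(n):
    # Five-point scheme for -lap(u) = f on the unit square, zero Dirichlet
    # boundary, n x n interior points with spacing h = 1 / (n + 1).
    h = 1.0 / (n + 1)
    x = np.linspace(h, 1.0 - h, n)
    X, Y = np.meshgrid(x, x, indexing="ij")
    f = 2.0 * np.pi**2 * np.sin(np.pi * X) * np.sin(np.pi * Y)
    T = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
    A = (np.kron(T, np.eye(n)) + np.kron(np.eye(n), T)) / h**2
    u = np.linalg.solve(A, f.ravel()).reshape(n, n)
    return u, X, Y

# Manufactured solution u = sin(pi x) sin(pi y); halving h should cut the
# maximum error by a factor of about 4 for this second-order scheme.
errs = {}
for n in (7, 15, 31):                # three grids of doubling resolution
    u, X, Y = solve_poisson(n)
    errs[n] = np.abs(u - np.sin(np.pi * X) * np.sin(np.pi * Y)).max()
print(errs[7] / errs[15], errs[15] / errs[31])
```

In practice the exact solution is unknown, so an algorithm like the one in the abstract compares solutions across the grids themselves (Richardson-style) to estimate and control absolute and relative error.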
Federal Register 2010, 2011, 2012, 2013, 2014
2011-07-05
... listed and six unlisted species of fish covered by Kent's Clark Springs Water Supply HCP. This notice... applications are for the operation and maintenance of Kent's Clark Springs Water Supply System adjacent to Rock Creek, King County, Washington. The Clark Springs Water Supply System consists of a spring-fed...
12. Photocopy of photograph (original photograph in possession of Atlanta ...
12. Photocopy of photograph (original photograph in possession of Atlanta Housing Authority, Atlanta, GA). Photographer unknown, circa 1945. View across park and playground between Techwood Homes and Clark Howell Homes, facing west with Clark Howell Homes in background. - Clark Howell Homes (Public Housing), Bounded by North Avenue, Lovejoy Street, Mills Street & Luckie Street, Atlanta, Fulton County, GA
Federal Register 2010, 2011, 2012, 2013, 2014
2011-12-09
... Lewis and Clark County AGENCY: Bureau of Land Management, Interior. ACTION: Notice. SUMMARY: The... realign Lewis and Clark County, currently a split county between the two offices, to the Western Montana... Broadwater, Deer Lodge, Gallatin, Jefferson, Lewis and Clark, Park, Silver Bow and the northern portion of...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-15
... Clark County Department of Aviation To Use a Weight-Based Air Service Incentive Program AGENCY: Federal... airport revenue and on airport rates and charges. The petitioner Clark County Department of Aviation is..., 2011, the Federal Aviation Administration (FAA) received a letter from counsel for the Clark County...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-04-26
... Record of Decision for the Proposed Sloan Hills Competitive Mineral Material Sales, Clark County, NV..., Clark County, Nevada, and by this notice is announcing their availability. DATES: The BLM will not act... include: Las Vegas Valley Water District, Nevada Department of Wildlife, Clark County Department of Air...
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-16
... DEPARTMENT OF TRANSPORTATION Surface Transportation Board [Docket No. AB 1076X] Caddo Valley Railroad Company--Abandonment Exemption--in Clark, Pike, and Montgomery Counties, AR Caddo Valley Railroad... milepost 479.2, at the end of the line near Birds Mill, a distance of 32.2 miles, in Clark, Pike, and...
75 FR 5114 - Desert National Wildlife Refuge Complex, Clark, Lincoln, and Nye Counties, NV
Federal Register 2010, 2011, 2012, 2013, 2014
2010-02-01
... DEPARTMENT OF THE INTERIOR Fish and Wildlife Service [FWS-R8-R-2009-N222; 80230-1265-0000-S3] Desert National Wildlife Refuge Complex, Clark, Lincoln, and Nye Counties, NV AGENCY: Fish and Wildlife.... The Wildlife Refuge is located on 116 acres in northeastern Clark County. Due to its small size...
76 FR 71124 - Caddo Valley Railroad Company-Abandonment Exemption-in Pike and Clark Counties, AR
Federal Register 2010, 2011, 2012, 2013, 2014
2011-11-16
... DEPARTMENT OF TRANSPORTATION Surface Transportation Board [Docket No. AB 1076 (Sub-No. 1X)] Caddo Valley Railroad Company--Abandonment Exemption--in Pike and Clark Counties, AR On October 27, 2011, Caddo... 17.55 miles, in Pike and Clark Counties, Ark. (the line).\\1\\ The line traverses United States Postal...
Computational Science as Part of Technology Education: An Interview with Aaron Clark
ERIC Educational Resources Information Center
Technology Teacher, 2008
2008-01-01
As teachers search for the most appropriate form of TIDE education for the future, they must consider as many alternatives as possible. One such alternative is computational science, which is described in detail in this interview with Dr. Aaron Clark of North Carolina State University. Dr. Clark recently agreed to this interview, with the primary…
Giving Children Security: Mamie Phipps Clark and the Racialization of Child Psychology.
ERIC Educational Resources Information Center
Lal, Shafali
2002-01-01
Examines the individual and social contexts of the life of Mamie Clark (wife of African American psychologist Kenneth Clark), whose work at the Harlem Northside Center for Child Development helped define an increasing interest in the psychology of children of color. Urges greater attention to the dynamics of race and gender in history of…
The Clark/AAC&U Conference on Liberal Education and Effective Practice
ERIC Educational Resources Information Center
Freeland, Richard M.
2009-01-01
On March 12 and 13, 2009, thirty-two educators and leaders from the corporate and nonprofit sectors gathered at Clark University for an extended seminar cosponsored by Clark and the Association of American Colleges and Universities. Their focus was a question of fundamental importance for liberal education: how well do the learning experiences…
Clarke Central High School: One Student at a Time
ERIC Educational Resources Information Center
Principal Leadership, 2013
2013-01-01
There is excitement in the air at Clarke Central High School in anticipation of a $28 million renovation planned on its 27-acre, urban campus located just minutes from the University of Georgia in Athens. This extensive construction aims to fulfill a board of education mandate to provide equity among the Clarke County school facilities and will…
Accurate finite difference methods for time-harmonic wave propagation
NASA Technical Reports Server (NTRS)
Harari, Isaac; Turkel, Eli
1994-01-01
Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
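The weighted-average idea in the abstract above can be made concrete with a minimal 1-D sketch (our own construction, not the authors' scheme): for the Helmholtz-type equation u'' + k²u = 0, a Numerov-type stencil averages the k²u term over the stencil with weights (1, 10, 1)/12, which raises the truncation order from two to four on a uniform grid.

```python
import numpy as np

k = 1.0   # wave number (illustrative value)

def pointwise_residual(k, h):
    # Standard 2nd-order 3-point stencil for u'' + k^2 u = 0, applied to the
    # exact solution u = cos(k x) at x = 0 (so u_j = 1, u_{j±1} = cos(k h))
    return (2.0 * np.cos(k * h) - 2.0) / h**2 + k**2

def weighted_residual(k, h):
    # Weighted-average (Numerov-type) stencil: the k^2 u term is averaged
    # over the three stencil points with weights (1, 10, 1)/12
    return (2.0 * np.cos(k * h) - 2.0) / h**2 + k**2 * (2.0 * np.cos(k * h) + 10.0) / 12.0

# Halving h should shrink the truncation error by ~4x (2nd order) vs ~16x (4th order)
ratio2 = abs(pointwise_residual(k, 0.1) / pointwise_residual(k, 0.05))
ratio4 = abs(weighted_residual(k, 0.1) / weighted_residual(k, 0.05))
print(ratio2, ratio4)
```

The observed error-reduction ratios of about 4 and 16 confirm the second- and fourth-order local truncation errors, respectively.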
NASA Technical Reports Server (NTRS)
VanderWijngaart, Rob; Frumkin, Michael; Biegel, Bryan A. (Technical Monitor)
2002-01-01
We provide a paper-and-pencil specification of a benchmark suite for computational grids. It is based on the NAS (NASA Advanced Supercomputing) Parallel Benchmarks (NPB) and is called the NAS Grid Benchmarks (NGB). NGB problems are presented as data flow graphs encapsulating an instance of a slightly modified NPB task in each graph node, which communicates with other nodes by sending/receiving initialization data. Like NPB, NGB specifies several different classes (problem sizes). In this report we describe classes S, W, and A, and provide verification values for each. The implementor has the freedom to choose any language, grid environment, security model, fault tolerance/error correction mechanism, etc., as long as the resulting implementation passes the verification test and reports the turnaround time of the benchmark.
NASA Technical Reports Server (NTRS)
da Silva, Arlindo; Redder, Christopher
2010-01-01
MERRA is a NASA reanalysis for the satellite era using a major new version of the Goddard Earth Observing System Data Assimilation System Version 5 (GEOS-5). The project focuses on historical analyses of the hydrological cycle on a broad range of weather and climate time scales and places the NASA EOS suite of observations in a climate context. The characterization of uncertainty in reanalysis fields is a commonly requested feature by users of such data. While intercomparison with reference data sets is common practice for ascertaining the realism of the datasets, such studies typically are restricted to long term climatological statistics and seldom provide state dependent measures of the uncertainties involved. In principle, variational data assimilation algorithms can produce error estimates for the analysis variables (typically surface pressure, winds, temperature, moisture and ozone) consistent with the assumed background and observation error statistics. However, these "perceived error estimates" are expensive to obtain and are limited by the somewhat simplistic errors assumed in the algorithm. The observation-minus-forecast residuals (innovations), a by-product of any assimilation system, constitute a powerful tool for estimating the systematic and random errors in the analysis fields. Unfortunately, such data are usually not readily available with reanalysis products, often requiring the tedious decoding of large datasets and not-so-user-friendly file formats. With MERRA we have introduced a gridded version of the observations/innovations used in the assimilation process, using the same grid and data formats as the regular datasets. Such a dataset empowers the user to conveniently perform observing-system-related analyses and error estimates. The scope of this dataset will be briefly described.
We will present a systematic analysis of MERRA innovation time series for the conventional observing system, including maximum-likelihood estimates of background and observation errors, as well as global bias estimates. Starting with the joint PDF of innovations and analysis increments at observation locations we propose a technique for diagnosing bias among the observing systems, and document how these contextual biases have evolved during the satellite era covered by MERRA.
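The innovation-based error estimation described above can be illustrated with a toy scalar example. This sketch uses the Desroziers consistency diagnostics, a standard technique for this purpose (not necessarily the exact maximum-likelihood method of the abstract), and assumes the analysis weight is optimal for the known error variances:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
truth = rng.normal(0.0, 2.0, n)
sig_b, sig_o = 1.0, 0.5                          # assumed error std devs
background = truth + rng.normal(0.0, sig_b, n)   # forecast with background error
obs = truth + rng.normal(0.0, sig_o, n)          # observations with obs error

# Optimal scalar analysis weight for the known variances
w = sig_b**2 / (sig_b**2 + sig_o**2)
analysis = background + w * (obs - background)

omf = obs - background        # innovations (O - F)
oma = obs - analysis          # residuals  (O - A)
amf = analysis - background   # increments (A - F)

# Desroziers-style consistency diagnostics recover the assumed variances
est_obs_var = np.mean(oma * omf)   # should be close to sig_o**2
est_bg_var = np.mean(amf * omf)    # should be close to sig_b**2
print(est_obs_var, est_bg_var)
```

The joint statistics of innovations and increments at observation locations, as in the abstract, are exactly what these diagnostics exploit.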
Method and apparatus for nondestructive in vivo measurement of photosynthesis
Greenbaum, E.
1988-02-22
A device for in situ, nondestructive measurement of photosynthesis in live plants and photosynthetic microorganisms is disclosed which comprises a Clark-type oxygen electrode having a substantially transparent cathode comprised of an optical fiber having a metallic grid microetched onto its front face and sides, an anode, a substantially transparent electrolyte film, and a substantially transparent oxygen permeable membrane. The device is designed to be placed in direct contact with a photosynthetic portion of a living plant, and nondestructive, noninvasive measurement of photosynthetic oxygen production from the plant can be taken by passing light through the fiber-optic cathode, transparent electrolyte and transparent membrane, and onto the plant so that photosynthesis occurs. The oxygen thus produced by the plant is measured polarographically by the electrode. The present invention allows for rapid, nondestructive measurements of photosynthesis in living plants in a manner heretofore impossible using prior art methods. 6 figs.
Method and apparatus for nondestructive in vivo measurement of photosynthesis
Greenbaum, Elias
1988-01-01
A device for in situ, nondestructive measurement of photosynthesis in live plants and photosynthetic microorganisms is disclosed which comprises a Clark-type oxygen electrode having a substantially transparent cathode comprised of an optical fiber having a metallic grid microetched onto its front face and sides, an anode, a substantially transparent electrolyte film, and a substantially transparent oxygen permeable membrane. The device is designed to be placed in direct contact with a photosynthetic portion of a living plant, and nondestructive, noninvasive measurement of photosynthetic oxygen production from the plant can be taken by passing light through the fiber-optic cathode, transparent electrolyte and transparent membrane, and onto the plant so that photosynthesis occurs. The oxygen thus produced by the plant is measured polarographically by the electrode. The present invention allows for rapid, nondestructive measurements of photosynthesis in living plants in a manner heretofore impossible using prior art methods.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Zhao, J; Hu, W; Xing, Y
Purpose: Different particle scanning beam delivery systems have different delivery accuracies. This study was performed to determine, for our particle treatment system, an appropriate ratio (n = FWHM/GS) of spot size (FWHM) to grid size (GS) that provides homogeneous delivered dose distributions for both proton and heavy ion scanning beam radiotherapy. Methods: We analyzed the delivery errors of our beam delivery system using log files from the treatment of 28 patients. We used a homemade program to simulate square fields for different n values, with and without the delivery errors, and analyzed the homogeneity. All spots were located on a rectilinear grid with equal spacing in the x and y directions. We then selected 7 energy levels for both protons and carbon ions. For each energy level, we made 6 square field plans with different n values (1, 1.5, 2, 2.5, 3, 3.5), delivered those plans, and used films to measure the homogeneity of each field. Results: For the program simulation without delivery errors, the homogeneity can be kept within ±3% when n ≥ 1.1. For both the proton and carbon simulations with delivery errors, and for the film measurements, the homogeneity can be kept within ±3% when n ≥ 2.5. Conclusion: For our facility with its system errors, n ≥ 2.5 is appropriate for maintaining homogeneity within ±3%.
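The dependence of dose flatness on n = FWHM/GS can be sketched in 1-D (an idealized toy, ignoring the delivery errors the study focuses on): superpose identical Gaussian spots at spacing GS and compute the peak-to-mean ripple in the central region.

```python
import numpy as np

def homogeneity(n_ratio, fwhm=10.0, n_spots=41, pts=2000):
    """Percent ripple (max-min)/(max+min) in the central region of a 1-D
    row of identical Gaussian spots with spacing GS = FWHM / n_ratio."""
    sigma = fwhm / (2.0 * np.sqrt(2.0 * np.log(2.0)))   # FWHM -> std dev
    gs = fwhm / n_ratio
    centers = (np.arange(n_spots) - n_spots // 2) * gs
    x = np.linspace(-5 * gs, 5 * gs, pts)               # away from field edges
    dose = sum(np.exp(-(x - c)**2 / (2 * sigma**2)) for c in centers)
    return 100.0 * (dose.max() - dose.min()) / (dose.max() + dose.min())

for n in (1.0, 1.5, 2.0, 2.5):
    print(n, homogeneity(n))
```

Consistent with the abstract's error-free simulation result, the ripple drops below a few percent once the spot size comfortably exceeds the grid spacing; delivery errors push the required n higher.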
Small Area Variance Estimation for the Siuslaw NF in Oregon and Some Results
S. Lin; D. Boes; H.T. Schreuder
2006-01-01
The results of a small area prediction study for the Siuslaw National Forest in Oregon are presented. Predictions were made for total basal area, number of trees and mortality per ha on a 0.85 mile grid using data on a 1.7 mile grid and additional ancillary information from TM. A reliable method of estimating prediction errors for individual plot predictions called the...
Environmental boundaries as a mechanism for correcting and anchoring spatial maps
2016-01-01
Abstract Ubiquitous throughout the animal kingdom, path integration‐based navigation allows an animal to take a circuitous route out from a home base and using only self‐motion cues, calculate a direct vector back. Despite variation in an animal's running speed and direction, medial entorhinal grid cells fire in repeating place‐specific locations, pointing to the medial entorhinal circuit as a potential neural substrate for path integration‐based spatial navigation. Supporting this idea, grid cells appear to provide an environment‐independent metric representation of the animal's location in space and preserve their periodic firing structure even in complete darkness. However, a series of recent experiments indicate that spatially responsive medial entorhinal neurons depend on environmental cues in a more complex manner than previously proposed. While multiple types of landmarks may influence entorhinal spatial codes, environmental boundaries have emerged as salient landmarks that both correct error in entorhinal grid cells and bind internal spatial representations to the geometry of the external spatial world. The influence of boundaries on error correction and grid symmetry points to medial entorhinal border cells, which fire at a high rate only near environmental boundaries, as a potential neural substrate for landmark‐driven control of spatial codes. The influence of border cells on other entorhinal cell populations, such as grid cells, could depend on plasticity, raising the possibility that experience plays a critical role in determining how external cues influence internal spatial representations. PMID:26563618
Isotopic Effects in Nuclear Fragmentation and GCR Transport Problems
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.
2002-01-01
Improving the accuracy of the galactic cosmic ray (GCR) environment and transport models is an important goal in preparing for studies of the projected risks and the efficiency of potential mitigation methods for space exploration. In this paper we consider the effects of the isotopic composition of the primary cosmic rays and the isotopic dependence of nuclear fragmentation cross sections on GCR transport models. Measurements are used to describe the isotopic composition of the GCR, including their modulation throughout the solar cycle. The quantum multiple-scattering approach to nuclear fragmentation (QMSFRG) is used as the database generator in order to accurately describe the odd-even effect in fragment production. Using the Badhwar and O'Neill GCR model, the QMSFRG model and the HZETRN transport code, the effects of the isotopic dependence of the primary GCR composition and of fragment production on transport problems are described for a complete GCR isotopic grid. The principal finding of this study is that large errors (approximately 100%) will occur in the mass-flux spectra when comparing the complete isotopic grid (141 ions) to a reduced isotopic grid (59 ions); however, less significant errors (approximately 30%) occur in the elemental-flux spectra. Because the full isotopic grid is readily handled on small computer workstations, it is recommended that it be used for future GCR studies.
Integrating bathymetric and topographic data
NASA Astrophysics Data System (ADS)
Teh, Su Yean; Koh, Hock Lye; Lim, Yong Hui; Tan, Wai Kiat
2017-11-01
The quality of bathymetric and topographic resolution significantly affects the accuracy of tsunami run-up and inundation simulation. However, high resolution gridded bathymetric and topographic data sets for Malaysia are not freely available online. It is desirable to have seamless integration of high resolution bathymetric and topographic data. The bathymetric data available from the National Hydrographic Centre (NHC) of the Royal Malaysian Navy are in scattered form, while the topographic data from the Department of Survey and Mapping Malaysia (JUPEM) are given in regularly spaced grid systems. Hence, interpolation is required to integrate the bathymetric and topographic data into regularly spaced grid systems for tsunami simulation. The objective of this research is to analyze the most suitable interpolation methods for integrating bathymetric and topographic data with minimal errors. We analyze four commonly used interpolation methods for generating gridded topographic and bathymetric surfaces, namely (i) Kriging, (ii) Multiquadric (MQ), (iii) Thin Plate Spline (TPS) and (iv) Inverse Distance to Power (IDP). Based upon the bathymetric and topographic data for the southern part of Penang Island, our study concluded, via qualitative visual comparison and Root Mean Square Error (RMSE) assessment, that the Kriging interpolation method produces an interpolated bathymetric and topographic surface that best approximates the admiralty nautical chart of south Penang Island.
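The scattered-to-grid workflow with RMSE assessment can be sketched for one of the four methods, IDP, on synthetic data (a toy surface of our own; the study's data and the other three methods are not reproduced here). A k-nearest-neighbor variant keeps the weighting local:

```python
import numpy as np

rng = np.random.default_rng(1)

def surface(x, y):
    # Synthetic "bathymetry": a smooth analytic depth field (illustrative only)
    return np.sin(x) * np.cos(y) + 0.1 * x

# Scattered soundings (like the NHC data) and a regular target grid (like JUPEM's)
xs, ys = rng.uniform(0, 3, 400), rng.uniform(0, 3, 400)
zs = surface(xs, ys)
gx, gy = np.meshgrid(np.linspace(0.2, 2.8, 30), np.linspace(0.2, 2.8, 30))

def idw(px, py, k=12, power=2.0):
    # Inverse Distance to Power using the k nearest soundings (a common choice)
    d2 = (px - xs)**2 + (py - ys)**2
    idx = np.argsort(d2)[:k]
    w = 1.0 / np.maximum(d2[idx], 1e-12)**(power / 2.0)
    return np.sum(w * zs[idx]) / np.sum(w)

est = np.vectorize(idw)(gx, gy)
rmse = np.sqrt(np.mean((est - surface(gx, gy))**2))
print(rmse)
```

Replacing `idw` with Kriging, MQ or TPS and comparing the resulting RMSE values is the essence of the study's quantitative assessment.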
An integral conservative gridding-algorithm using Hermitian curve interpolation.
Volken, Werner; Frei, Daniel; Manser, Peter; Mini, Roberto; Born, Ernst J; Fix, Michael K
2008-11-07
The problem of re-sampling spatially distributed data organized into regular or irregular grids to finer or coarser resolution is a common task in data processing. This procedure is known as 'gridding' or 're-binning'. Depending on the quantity the data represents, the gridding-algorithm has to meet different requirements. For example, histogrammed physical quantities such as mass or energy have to be re-binned in order to conserve the overall integral. Moreover, if the quantity is positive definite, negative sampling values should be avoided. The gridding process requires a re-distribution of the original data set to a user-requested grid according to a distribution function. The distribution function can be determined on the basis of the given data by interpolation methods. In general, accurate interpolation with respect to multiple boundary conditions of heavily fluctuating data requires polynomial interpolation functions of second or even higher order. However, this may result in unrealistic deviations (overshoots or undershoots) of the interpolation function from the data. Accordingly, the re-sampled data may overestimate or underestimate the given data by a significant amount. The gridding-algorithm presented in this work was developed in order to overcome these problems. Instead of a straightforward interpolation of the given data using high-order polynomials, a parametrized Hermitian interpolation curve was used to approximate the integrated data set. A single parameter is determined by which the user can control the behavior of the interpolation function, i.e. the amount of overshoot and undershoot. Furthermore, it is shown how the algorithm can be extended to multidimensional grids. The algorithm was compared to commonly used gridding-algorithms using linear and cubic interpolation functions. 
It is shown that such interpolation functions may overestimate or underestimate the source data by about 10-20%, while the new algorithm can be tuned to significantly reduce these interpolation errors. The accuracy of the new algorithm was tested on a series of x-ray CT-images (head and neck, lung, pelvis). The new algorithm significantly improves the accuracy of the sampled images in terms of the mean square error and a quality index introduced by Wang and Bovik (2002 IEEE Signal Process. Lett. 9 81-4).
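The core idea, interpolating the integrated data with a monotonicity-preserving Hermitian curve and differencing on the new grid, can be sketched in 1-D. This uses harmonic-mean slopes (a standard PCHIP-style choice) rather than the paper's single tunable parameter, and the bin layout is our own:

```python
import numpy as np

def monotone_hermite(xk, yk, x):
    """Monotone cubic Hermite interpolation (harmonic-mean slopes).
    Monotone data -> monotone curve, so the curve never overshoots."""
    hk = np.diff(xk)
    dk = np.diff(yk) / hk                        # secant slopes
    m = np.zeros_like(yk)
    m[0], m[-1] = dk[0], dk[-1]
    both = dk[:-1] * dk[1:] > 0                  # interior: harmonic mean, else 0
    m[1:-1][both] = 2.0 / (1.0 / dk[:-1][both] + 1.0 / dk[1:][both])
    i = np.clip(np.searchsorted(xk, x) - 1, 0, len(hk) - 1)
    t = (x - xk[i]) / hk[i]
    h00 = (1 + 2 * t) * (1 - t)**2               # Hermite basis functions
    h10 = t * (1 - t)**2
    h01 = t**2 * (3 - 2 * t)
    h11 = t**2 * (t - 1)
    return h00 * yk[i] + h10 * hk[i] * m[i] + h01 * yk[i + 1] + h11 * hk[i] * m[i + 1]

def conservative_rebin(edges_in, values_in, edges_out):
    # Integrate to the cumulative curve, interpolate it monotonically,
    # then difference on the new edges: the total integral is conserved exactly
    cum = np.concatenate([[0.0], np.cumsum(values_in * np.diff(edges_in))])
    new_cum = monotone_hermite(edges_in, cum, edges_out)
    return np.diff(new_cum) / np.diff(edges_out)

edges_in = np.linspace(0.0, 10.0, 11)
values_in = np.array([0.0, 0.0, 1.0, 5.0, 9.0, 9.0, 2.0, 0.5, 0.0, 0.0])
edges_out = np.linspace(0.0, 10.0, 41)           # re-sample to a 4x finer grid
values_out = conservative_rebin(edges_in, values_in, edges_out)
```

Because the cumulative of a positive-definite quantity is non-decreasing and the interpolant preserves monotonicity, the rebinned values stay non-negative, which is exactly the requirement the abstract raises for histogrammed physical quantities.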
Textbook Multigrid Efficiency for Leading Edge Stagnation
NASA Technical Reports Server (NTRS)
Diskin, Boris; Thomas, James L.; Mineck, Raymond E.
2004-01-01
A multigrid solver is defined as having textbook multigrid efficiency (TME) if the solutions to the governing system of equations are attained in a computational work which is a small (less than 10) multiple of the operation count in evaluating the discrete residuals. TME in solving the incompressible inviscid fluid equations is demonstrated for leading-edge stagnation flows. The contributions of this paper include (1) a special formulation of the boundary conditions near stagnation allowing convergence of the Newton iterations on coarse grids, (2) the boundary relaxation technique to facilitate relaxation and residual restriction near the boundaries, (3) a modified relaxation scheme to prevent initial error amplification, and (4) new general analysis techniques for multigrid solvers. Convergence of algebraic errors below the level of discretization errors is attained by a full multigrid (FMG) solver with one full approximation scheme (FAS) cycle per grid. Asymptotic convergence rates of the FAS cycles for the full system of flow equations are very fast, approaching those for scalar elliptic equations.
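As a loose illustration of the multigrid machinery (a geometric V-cycle for 1-D Poisson, not the paper's FAS solver for the flow equations; all parameter choices are ours), the sketch below shows the algebraic error dropping below the discretization error within a few cycles:

```python
import numpy as np

def relax(u, f, h, sweeps=2, w=2.0 / 3.0):
    # Weighted-Jacobi smoothing for -u'' = f with homogeneous Dirichlet BCs
    for _ in range(sweeps):
        u[1:-1] = (1 - w) * u[1:-1] + w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
    return u

def v_cycle(u, f, h):
    u = relax(u, f, h)
    if len(u) <= 3:
        return relax(u, f, h, sweeps=10)         # tiny grid: extra sweeps ~ solve
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2   # residual
    rc = np.zeros((len(u) + 1) // 2)
    rc[1:-1] = 0.25 * r[1:-2:2] + 0.5 * r[2:-1:2] + 0.25 * r[3::2]  # full weighting
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)   # coarse-grid correction
    e = np.zeros_like(u)
    e[::2] = ec
    e[1:-1:2] = 0.5 * (ec[:-1] + ec[1:])         # linear prolongation
    return relax(u + e, f, h)

n = 129
h = 1.0 / (n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2 * np.sin(np.pi * x)                 # exact solution u = sin(pi x)
u = np.zeros(n)
errs = []
for _ in range(10):
    u = v_cycle(u, f, h)
    errs.append(float(np.max(np.abs(u - np.sin(np.pi * x)))))
print(errs)
```

The error stalls at the O(h²) discretization level after a handful of cycles, which is the behavior that textbook multigrid efficiency asks a solver to deliver at a cost of a few residual evaluations.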
A-posteriori error estimation for the finite point method with applications to compressible flow
NASA Astrophysics Data System (ADS)
Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio
2017-08-01
An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.
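The surrogate-truncation-error idea can be sketched in 1-D (a finite-difference toy on a uniform grid, not the Finite Point Method on unstructured clouds): apply the solver's low-order discrete operator and a higher-order reconstruction of the same operator to the solution; their difference estimates the local truncation error.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 101)
h = x[1] - x[0]
u = np.sin(2 * np.pi * x)            # a smooth "solution" sampled on the grid

# The solver's 2nd-order discrete second derivative ...
lap2 = (u[:-2] - 2 * u[1:-1] + u[2:]) / h**2
# ... and a 4th-order reconstruction of the same operator on a wider stencil
lap4 = (-u[:-4] + 16 * u[1:-3] - 30 * u[2:-2] + 16 * u[3:-1] - u[4:]) / (12 * h**2)

# Their difference is a surrogate for the local truncation error
indicator = np.abs(lap4 - lap2[1:-1])
# Leading truncation term of the 2nd-order operator: (h^2 / 12) |u''''|
exact_te = np.abs((2 * np.pi)**4 * np.sin(2 * np.pi * x[2:-2])) * h**2 / 12
```

The indicator tracks the analytic leading truncation term closely, so thresholding it gives a natural driver for the adaptive refinement mentioned in the abstract.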
Tree growth inference and prediction from diameter censuses and ring widths
James S. Clark; Michael Wolosin; Michael Dietze; Ines Ibanez; Shannon LaDeau; Miranda Welsh; Brian Kloeppel
2007-01-01
Knowledge of tree growth is needed to understand population dynamics (Condit et al. 1993, Fastie 1995, Frelich and Reich 1995, Clark and Clark 1999, Wyckoff and Clark 2002, 2005, Webster and Lorimer 2005), species interactions (Swetnam and Lynch 1993), carbon sequestration (DeLucia et al. 1999, Casperson et al. 2000), forest response to climate change (Cook 1987,...
Sand Fly Surveillance and Control on Camp Ramadi, Iraq, as Part of a Leishmaniasis Control Program
2013-12-01
Environmental Science, Research Triangle Park, NC, U.S.A.) and Anvil® 10+10 ULV (Clarke Mosquito Control Products, Roselle , IL, U.S.A). Scourge® was...contract personnel using a Clarke Pro-Mist ULV machine (Clarke Mosquito Control Products, Roselle , IL): April (n=2), May (n=6), June (n=6), July
Federal Register 2010, 2011, 2012, 2013, 2014
2010-05-12
... Subsequent Conveyance for the Recreation and Public Purposes Act of Public Lands in Clark County, NV AGENCY... of public land in Clark County, Nevada. The Grace Lutheran Church proposes to use the land for a.... 315f) and Executive Order No. 6910, the following described public land in Clark County, Nevada, has...
Federal Register 2010, 2011, 2012, 2013, 2014
2013-08-01
...) for Affordable Housing Purposes in Las Vegas, Clark County, NV AGENCY: Bureau of Land Management... 5-acre public land parcel located in the southern portion of the Las Vegas Valley in Clark County... of the Las Vegas Valley in Clark County, Nevada, further described as: Mount Diablo Meridian T. 22 S...
Comment on C. M. Clark, L. Lawlor-Savage, & v. M. Goghari
ERIC Educational Resources Information Center
Hiscock, Merrill
2016-01-01
Merrill Hiscock presents two criticisms of Clark's analysis of the Flynn effect. The first is that the authors worry too much about general ability and pay too little attention to multifactorial concepts of intelligence. The second applies not only to the Clark et al. paper but to the Flynn effect literature in general--namely, neglect of the…
NASA Technical Reports Server (NTRS)
Green, Robert O.; Vane, Gregg
1989-01-01
The Clark Mountains in eastern California form a rugged, highly dissected area nearly 5000 ft above sea level, with Clark Mountain rising to 8000 ft. The rocks of the Clark Mountains and the Mescal Range just to the south are Paleozoic carbonate and clastic rocks, and Mesozoic clastic and volcanic rocks standing in pronounced relief above the fractured Precambrian gneisses to the east. The Permian Kaibab Limestone and the Triassic Moenkopi and Chinle Formations are exposed in the Mescal Range, which is the only place in California where these rocks, which are typical of the Colorado Plateau, are found. To the west, the mountains are bordered by the broad alluvial plains of Shadow Valley. Cima Dome, which is an erosional remnant carved on a batholithic intrusion of quartz monzonite, is found at the south end of the valley. To the east of the Clark and Mescal Mountains is found the Ivanpah Valley, in the center of which is located the Ivanpah Play. Studies of the Clark Mountains with the airborne visible/infrared imaging spectrometer are briefly described.
Physics of the Isotopic Dependence of Galactic Cosmic Ray Fluence Behind Shielding
NASA Technical Reports Server (NTRS)
Cucinotta, Francis A.; Saganti, Premkumar B.; Hu, Xiao-Dong; Kim, Myung-Hee Y.; Cleghorn, Timothy F.; Wilson, John W.; Tripathi, Ram K.; Zeitlin, Cary J.
2003-01-01
For over 25 years, NASA has supported the development of space radiation transport models for shielding applications. The NASA space radiation transport model now predicts dose and dose equivalent in Earth and Mars orbit to an accuracy of plus or minus 20%. However, because larger errors may occur in particle fluence predictions, there is interest in further assessments and improvements in NASA's space radiation transport model. In this paper, we consider the effects of the isotopic composition of the primary galactic cosmic rays (GCR) and the isotopic dependence of nuclear fragmentation cross-sections on the solution to transport models used for shielding studies. Satellite measurements are used to describe the isotopic composition of the GCR. Using NASA's quantum multiple-scattering theory of nuclear fragmentation (QMSFRG) and high-charge and energy (HZETRN) transport code, we study the effect of the isotopic dependence of the primary GCR composition and secondary nuclei on shielding calculations. The QMSFRG is shown to accurately describe the iso-spin dependence of nuclear fragmentation. The principal finding of this study is that large errors (plus or minus 100%) will occur in the mass-fluence spectra when comparing transport models that use a complete isotope grid (approximately 170 ions) to ones that use a reduced isotope grid, for example the 59 ion-grid used in the HZETRN code in the past, however less significant errors (less than 20%) occur in the elemental-fluence spectra. Because a complete isotope grid is readily handled on small computer workstations and is needed for several applications studying GCR propagation and scattering, it is recommended that they be used for future GCR studies.
CO2 Flux Estimation Errors Associated with Moist Atmospheric Processes
NASA Technical Reports Server (NTRS)
Parazoo, N. C.; Denning, A. S.; Kawa, S. R.; Pawson, S.; Lokupitiya, R.
2012-01-01
Vertical transport by moist sub-grid scale processes such as deep convection is a well-known source of uncertainty in CO2 source/sink inversion. However, a dynamical link between vertical transport, satellite based retrievals of column mole fractions of CO2, and source/sink inversion has not yet been established. By using the same offline transport model with meteorological fields from slightly different data assimilation systems, we examine sensitivity of frontal CO2 transport and retrieved fluxes to different parameterizations of sub-grid vertical transport. We find that frontal transport feeds off background vertical CO2 gradients, which are modulated by sub-grid vertical transport. The implication for source/sink estimation is two-fold. First, CO2 variations contained in moist poleward moving air masses are systematically different from variations in dry equatorward moving air. Moist poleward transport is hidden from orbital sensors on satellites, causing a sampling bias, which leads directly to small but systematic flux retrieval errors in northern mid-latitudes. Second, differences in the representation of moist sub-grid vertical transport in GEOS-4 and GEOS-5 meteorological fields cause differences in vertical gradients of CO2, which leads to systematic differences in moist poleward and dry equatorward CO2 transport and therefore the fraction of CO2 variations hidden in moist air from satellites. As a result, sampling biases are amplified and regional scale flux errors enhanced, most notably in Europe (0.43+/-0.35 PgC /yr). These results, cast from the perspective of moist frontal transport processes, support previous arguments that the vertical gradient of CO2 is a major source of uncertainty in source/sink inversion.
Spatial Representativeness of Surface-Measured Variations of Downward Solar Radiation
NASA Astrophysics Data System (ADS)
Schwarz, M.; Folini, D.; Hakuba, M. Z.; Wild, M.
2017-12-01
When using time series of ground-based surface solar radiation (SSR) measurements in combination with gridded data, the spatial and temporal representativeness of the point observations must be considered. We use SSR data from surface observations and high-resolution (0.05°) satellite-derived data to infer the spatiotemporal representativeness of observations for monthly and longer time scales in Europe. The correlation analysis shows that the squared correlation coefficients (R2) between SSR time series decrease linearly with increasing distance between the surface observations. For deseasonalized monthly mean time series, R2 ranges from 0.85 for distances up to 25 km between the stations to 0.25 at distances of 500 km. A decorrelation length (i.e., the e-folding distance of R2) on the order of 400 km (with a spread of 100-600 km) was found. R2 from correlations between point observations and colocated grid box area means determined from satellite data was found to be 0.80 for a 1° grid. To quantify the error which arises when using a point observation as a surrogate for the area mean SSR of larger surroundings, we calculated a spatial sampling error (SSE) for a 1° grid of 8 (3) W/m2 for monthly (annual) time series. The SSE based on a 1° grid, therefore, is of the same magnitude as the measurement uncertainty. The analysis generally reveals that monthly mean (or longer temporally aggregated) point observations of SSR capture the larger-scale variability well. This finding shows that comparing time series of SSR measurements with gridded data is feasible for those time scales.
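Estimating an e-folding decorrelation length from pairwise R² values can be sketched on synthetic data (an assumed exponential-decay model with a true length of 400 km, chosen only to mirror the abstract's order of magnitude):

```python
import numpy as np

rng = np.random.default_rng(2)
L_true = 400.0                                   # km, assumed for this demo
dist = np.linspace(25.0, 600.0, 24)              # station separations, km
# Noisy synthetic R^2 values decaying with distance
r2 = np.exp(-dist / L_true) * np.exp(rng.normal(0.0, 0.05, dist.size))

# e-folding (decorrelation) length from a log-linear least-squares fit
slope, intercept = np.polyfit(dist, np.log(r2), 1)
L_est = -1.0 / slope
print(L_est)
```

The fitted length recovers the assumed 400 km scale; with real station pairs the scatter is larger, which is consistent with the 100-600 km spread reported above.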
Incompressible flow simulations on regularized moving meshfree grids
NASA Astrophysics Data System (ADS)
Vasyliv, Yaroslav; Alexeev, Alexander
2017-11-01
A moving grid meshfree solver for incompressible flows is presented. To solve for the flow field, a semi-implicit approximate projection method is directly discretized on meshfree grids using General Finite Differences (GFD) with sharp interface stencil modifications. To maintain a regular grid, an explicit shift is used to relax compressed pseudosprings connecting a star node to its cloud of neighbors. The following test cases are used for validation: the Taylor-Green vortex decay, the analytic and modified lid-driven cavities, and an oscillating cylinder enclosed in a container for a range of Reynolds number values. We demonstrate that 1) the grid regularization does not impede the second order spatial convergence rate, 2) the Courant condition can be used for time marching but the projection splitting error reduces the convergence rate to first order, and 3) moving boundaries and arbitrary grid distortions can readily be handled. Financial support provided by the National Science Foundation (NSF) Graduate Research Fellowship, Grant No. DGE-1148903.
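The pseudospring relaxation behind the grid regularization can be sketched in two dimensions. This is a minimal toy assuming a linear spring law and an explicit relaxation factor; the function, force law, and parameter values are illustrative, not the authors' implementation:

```python
import numpy as np

def regularize_shift(star, neighbors, r0, alpha=0.1):
    """One explicit relaxation shift of a star node: compressed
    pseudosprings (separation < rest length r0) push the node away
    from its cloud of neighbors; stretched springs exert no force.
    alpha is a hypothetical relaxation factor."""
    star = np.asarray(star, float)
    shift = np.zeros_like(star)
    for p in np.asarray(neighbors, float):
        dvec = star - p
        dist = np.linalg.norm(dvec)
        if dist < r0:  # compressed spring: repel along the connector
            shift += (r0 - dist) * dvec / dist
    return star + alpha * shift

# A node crowded from the left drifts to the right.
node = regularize_shift([0.0, 0.0], [[-0.5, 0.0], [-0.4, 0.1]], r0=1.0)
print(node[0] > 0.0)  # -> True
```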
Interplay Between Energy-Market Dynamics and Physical Stability of a Smart Power Grid
NASA Astrophysics Data System (ADS)
Picozzi, Sergio; Mammoli, Andrea; Sorrentino, Francesco
2013-03-01
A smart power grid is being envisioned for the future which, among other features, should enable users to play the dual role of consumers as well as producers and traders of energy, thanks to emerging renewable energy production and energy storage technologies. As a complex dynamical system, any power grid is subject to physical instabilities. With existing grids, such instabilities tend to be caused by natural disasters, human errors, or weather-related peaks in demand. In this work we analyze the impact, upon the stability of a smart grid, of the energy-market dynamics arising from users' ability to buy from and sell energy to other users. The stability analysis of the resulting dynamical system is performed assuming different proposed models for this market of the future, and the corresponding stability regions in parameter space are identified. We test our theoretical findings by comparing them with data collected from some existing prototype systems.
Research on Spectroscopy, Opacity, and Atmospheres
NASA Technical Reports Server (NTRS)
Kurucz, Robert L.
1996-01-01
I discuss errors in theory and in interpreting observations that are produced by the failure to consider resolution in space, time, and energy. I discuss convection in stellar model atmospheres and in stars. Large errors in abundances are possible, such as the factor-of-ten error in the Li abundance for extreme Population II stars. Finally I discuss the variation of microturbulent velocity with depth, effective temperature, gravity, and abundance. These variations must be dealt with in computing models and grids and in any type of photometric calibration.
NASA Astrophysics Data System (ADS)
Kashefi, Ali; Staples, Anne
2016-11-01
Coarse grid projection (CGP) methodology is a novel multigrid method for systems involving decoupled nonlinear evolution equations and linear elliptic equations. The nonlinear equations are solved on a fine grid and the linear equations are solved on a corresponding coarsened grid. Mapping functions transfer data between the two grids. Here we propose a version of CGP for incompressible flow computations using incremental pressure correction methods, called IFEi-CGP (implicit-time-integration, finite-element, incremental coarse grid projection). Incremental pressure correction schemes solve Poisson's equation for an intermediate variable and not the pressure itself. This fact contributes to IFEi-CGP's efficiency in two ways. First, IFEi-CGP preserves the velocity field accuracy even for a high level of pressure field grid coarsening and thus significant speedup is achieved. Second, because incremental schemes reduce the errors that arise from boundaries with artificial homogeneous Neumann conditions, CGP generates undamped flows for simulations with velocity Dirichlet boundary conditions. Comparisons of the data accuracy and CPU times for the incremental-CGP versus non-incremental-CGP computations are presented.
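The mapping functions that transfer data between the fine and coarse grids are, in the simplest setting, restriction and prolongation operators. A one-dimensional sketch (illustrative only; CGP itself operates on finite-element grids in higher dimensions):

```python
import numpy as np

def restrict(u_fine):
    """Injection from a fine 1D grid to a coarse grid with
    half the resolution (every other node)."""
    return u_fine[::2]

def prolong(u_coarse, n_fine):
    """Linear-interpolation mapping back to the fine grid."""
    x_c = np.linspace(0.0, 1.0, len(u_coarse))
    x_f = np.linspace(0.0, 1.0, n_fine)
    return np.interp(x_f, x_c, u_coarse)

# Linear fields survive the round trip exactly.
u = np.linspace(0.0, 2.0, 9)
back = prolong(restrict(u), 9)
print(np.allclose(back, u))  # -> True
```

In CGP proper, restriction is applied before the elliptic (pressure) solve on the coarse grid, and prolongation maps the result back to the fine velocity grid.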
Global Digital Image Mosaics of Mars: Assessment of Geodetic Accuracy
NASA Technical Reports Server (NTRS)
Kirk, R.; Archinal, B. A.; Lee, E. M.; Davies, M. E.; Colvin, T. R.; Duxbury, T. C.
2001-01-01
A revised global image mosaic of Mars (MDIM 2.0) was recently completed by USGS. Comparison with high-resolution gridded Mars Orbiter Laser Altimeter (MOLA) digital image mosaics will allow us to quantify its geodetic errors; linking the next MDIM to the MOLA data will help eliminate those errors. Additional information is contained in the original extended abstract.
NASA Astrophysics Data System (ADS)
Quinn, Niall; Freer, Jim; Coxon, Gemma; O'Loughlin, Fiachra; Woods, Ross; Liguori, Sara
2015-04-01
In Great Britain and many other regions of the world, flooding resulting from short-duration, high-intensity rainfall events can lead to significant economic losses and fatalities. At present, such extreme events are often poorly evaluated using hydrological models due, in part, to their rarity and relatively short duration and a lack of appropriate data. Such storm characteristics are not well represented by daily rainfall records currently available using volumetric gauges and/or derived gridded products. This research aims to address this important data gap by developing a sub-daily gridded precipitation product for Great Britain. Our focus is to better understand these storm events and some of the challenges and uncertainties in quantifying such data across catchment scales. Our goal is to both improve such rainfall characterisation and derive an input to drive hydrological model simulations. Our methodology involves the collation, error checking, and spatial interpolation of approximately 2000 tipping-bucket rain (TBR) gauges located across Great Britain, provided by the Scottish Environment Protection Agency (SEPA) and the Environment Agency (EA). Error checking was conducted over the entirety of the TBR data available, utilising a two-stage approach. First, rain gauge data at each site were examined independently, with data exceeding reasonable thresholds marked as suspect. Second, potentially erroneous data were marked using a neighbourhood analysis approach whereby measurements at a given gauge were deemed suspect if they did not fall within defined bounds of measurements at neighbouring gauges. A total of eight error checks were conducted. To provide the user with the greatest flexibility possible, the error markers associated with each check have been recorded at every site. This approach aims to enable the user to choose which checks they deem most suitable for a particular application. 
The quality assured TBR dataset was then spatially interpolated to produce a national scale gridded rainfall product. Finally, radar rainfall data provided by the UK Met Office was assimilated, where available, to provide an optimal hourly estimate of rainfall, given the error variance associated with both datasets. This research introduces a sub-daily rainfall product that will be of particular value to hydrological modellers requiring rainfall inputs at higher temporal resolutions than those currently available nationally. Further research will aim to quantify the uncertainties in the rainfall product in order to improve our ability to diagnose and identify structural errors in hydrological modelling of extreme events. Here we present our initial findings.
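The neighbourhood-analysis stage of the error checking can be sketched as follows. The specific bounds here (a multiple of the neighbour maximum, and rain recorded while all neighbours stay dry) are hypothetical, since the actual eight checks and their thresholds are not specified in the abstract:

```python
import numpy as np

def neighbourhood_check(target, neighbours, factor=3.0):
    """Flag a gauge total as suspect when it falls outside
    hypothetical bounds derived from neighbouring gauges: here,
    more than `factor` times the neighbour maximum, or rain
    recorded while all neighbours stayed dry."""
    neighbours = np.asarray(neighbours, float)
    if target > 0.0 and np.all(neighbours == 0.0):
        return True
    return target > factor * neighbours.max()

print(neighbourhood_check(40.0, [2.0, 3.0, 1.5]))  # -> True
print(neighbourhood_check(4.0, [2.0, 3.0, 1.5]))   # -> False
```

Recording the Boolean marker per check and per site, rather than deleting data, is what lets users pick the combination of checks suited to their application.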
The Need for a Revised Joint Personnel Accounting Doctrine
2011-05-22
recoveries to field operations for the 11 David R. Graham, Ashley N. Bybee, Susan L. Clark-Sestak, and...Naval War College, 2009), XI-43. 16 David R. Graham, Ashley N. Bybee, Susan L. Clark-Sestak, and Michal S. Finnin, Assessment of DOD Central...Division, 8 June 2005. Graham, David R., Ashley N. Bybee, Susan L. Clark-Sestak, and Michal S. Finnin. Assessment of DOD Central
Federal Register 2010, 2011, 2012, 2013, 2014
2012-06-20
...), Clark County, NV AGENCY: U.S. Federal Highway Administration (FHWA), DOT. ACTION: Notice of Withdrawal of the Notice of Intent to prepare an EIS for the improvements to I-515 in Clark County, Nevada... improvements to I-515 in the cities of Las Vegas and Henderson, Clark County, NV and in that portion of...
2009-01-01
Background Soybeans grown in the upper Midwestern United States often suffer from iron deficiency chlorosis, which results in yield loss at the end of the season. To better understand the effect of iron availability on soybean yield, we identified genes in two near isogenic lines with changes in expression patterns when plants were grown in iron sufficient and iron deficient conditions. Results Transcriptional profiles of soybean (Glycine max, L. Merr) near isogenic lines Clark (PI548553, iron efficient) and IsoClark (PI547430, iron inefficient) grown under Fe-sufficient and Fe-limited conditions were analyzed and compared using the Affymetrix® GeneChip® Soybean Genome Array. There were 835 candidate genes in the Clark (PI548553) genotype and 200 candidate genes in the IsoClark (PI547430) genotype putatively involved in soybean's iron stress response. Of these candidate genes, fifty-eight genes in the Clark genotype were identified with a genetic location within known iron efficiency QTL and 21 in the IsoClark genotype. The arrays also identified 170 single feature polymorphisms (SFPs) specific to either Clark or IsoClark. A sliding window analysis of the microarray data and the 7X genome assembly coupled with an iterative model of the data showed the candidate genes are clustered in the genome. An analysis of 5' untranslated regions in the promoter of candidate genes identified 11 conserved motifs in 248 differentially expressed genes, all from the Clark genotype, representing 129 clusters identified earlier, confirming the cluster analysis results. Conclusion These analyses have identified the first genes with expression patterns that are affected by iron stress and are located within QTL specific to iron deficiency stress. The genetic location and promoter motif analysis results support the hypothesis that the differentially expressed genes are co-regulated. 
The combined results of all analyses lead us to postulate that iron inefficiency in soybean results from a mutation in a transcription factor (or factors) that controls the expression of genes required to induce an iron stress response. PMID:19678937
Life Prediction Model for Grid-Connected Li-ion Battery Energy Storage System: Preprint
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler A; Saxon, Aron R; Keyser, Matthew A
Lithium-ion (Li-ion) batteries are being deployed on the electrical grid for a variety of purposes, such as to smooth fluctuations in solar renewable power generation. The lifetime of these batteries will vary depending on their thermal environment and how they are charged and discharged. Optimal utilization of a battery over its lifetime requires characterization of its performance degradation under different storage and cycling conditions. Aging tests were conducted on commercial graphite/nickel-manganese-cobalt (NMC) Li-ion cells. A general lifetime prognostic model framework is applied to model changes in capacity and resistance as the battery degrades. Across 9 aging test conditions from 0°C to 55°C, the model predicts capacity fade with 1.4 percent RMS error and resistance growth with 15 percent RMS error. The model, recast in state variable form with 8 states representing separate fade mechanisms, is used to extrapolate lifetime for example applications of the energy storage system integrated with renewable photovoltaic (PV) power generation.
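A common functional form for this kind of lifetime prognostic model combines square-root-in-time calendar fade (with Arrhenius temperature scaling) and fade proportional to cycle count. The sketch below uses that generic form with invented coefficients; it is not the fitted model from the study:

```python
import math

def relative_capacity(t_days, n_cycles, temp_c,
                      a=0.005, b=2.5e-5, ea_over_r=3500.0, t_ref_k=298.15):
    """Toy capacity-fade model: sqrt-of-time calendar fade with
    Arrhenius temperature scaling, plus fade linear in cycle count.
    All coefficients are illustrative, not fitted values."""
    t_k = temp_c + 273.15
    arrhenius = math.exp(-ea_over_r * (1.0 / t_k - 1.0 / t_ref_k))
    calendar = a * arrhenius * math.sqrt(t_days)
    cycling = b * n_cycles
    return 1.0 - calendar - cycling

# Hotter storage degrades faster at equal age and cycling.
print(relative_capacity(365, 200, 55) < relative_capacity(365, 200, 25))  # -> True
```

A state-variable recasting, as in the paper, would track each fade mechanism as its own state and integrate it forward under a time-varying temperature and cycling profile.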
A study of ionospheric grid modification technique for BDS/GPS receiver
NASA Astrophysics Data System (ADS)
Liu, Xuelin; Li, Meina; Zhang, Lei
2017-07-01
For a single-frequency GPS receiver, ionospheric delay is an important factor affecting positioning performance. There are many ionospheric correction methods; common models include the Bent model, the IRI model, the Klobuchar model, and the NeQuick model. The US Global Positioning System (GPS) uses the Klobuchar coefficients transmitted in the satellite signal to correct the ionospheric delay error for a single-frequency receiver, but this model can remove only about 50% of the ionospheric error in mid-latitudes. The ionospheric correction broadcast by the BeiDou Navigation Satellite System (BDS) achieves higher accuracy. This paper therefore proposes a method that uses the BDS grid information to correct the GPS ionospheric delay in a BDS/GPS compatible positioning receiver. The principle of the ionospheric grid algorithm is introduced in detail, and the positioning accuracy of the GPS-only and BDS/GPS compatible solutions is compared and analyzed using real measured data. The results show that the method can effectively improve the positioning accuracy of the receiver in a concise way.
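The core of a grid-based ionospheric correction is bilinear interpolation of broadcast vertical delays at the ionospheric grid points (IGPs) surrounding the pierce point. The sketch below assumes a 5-degree grid spacing and invented delay values for illustration; the function name and data layout are hypothetical:

```python
def grid_ionosphere_delay(lat, lon, grid):
    """Bilinear interpolation of vertical ionospheric delays at the
    four ionospheric grid points (IGPs) surrounding the pierce point.
    `grid` maps (lat, lon) of an IGP to its vertical delay in metres;
    a 5-degree grid spacing is assumed here for illustration."""
    step = 5.0
    lat0 = step * (lat // step)
    lon0 = step * (lon // step)
    x = (lon - lon0) / step
    y = (lat - lat0) / step
    d00 = grid[(lat0, lon0)]
    d01 = grid[(lat0, lon0 + step)]
    d10 = grid[(lat0 + step, lon0)]
    d11 = grid[(lat0 + step, lon0 + step)]
    return ((1 - x) * (1 - y) * d00 + x * (1 - y) * d01
            + (1 - x) * y * d10 + x * y * d11)

igps = {(30.0, 110.0): 2.0, (30.0, 115.0): 4.0,
        (35.0, 110.0): 2.0, (35.0, 115.0): 4.0}
print(grid_ionosphere_delay(32.5, 112.5, igps))  # -> 3.0
```

The interpolated vertical delay would then be scaled by an obliquity factor for the actual satellite elevation angle.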
Simulation of wave propagation in three-dimensional random media
NASA Astrophysics Data System (ADS)
Coles, Wm. A.; Filice, J. P.; Frehlich, R. G.; Yadlowsky, M.
1995-04-01
Quantitative error analyses for the simulation of wave propagation in three-dimensional random media, when narrow angular scattering is assumed, are presented for plane-wave and spherical-wave geometry. This includes the errors that result from finite grid size, finite simulation dimensions, and the separation of the two-dimensional screens along the propagation direction. Simple error scalings are determined for power-law spectra of the random refractive indices of the media. The effects of a finite inner scale are also considered. The spatial spectra of the intensity errors are calculated and compared with the spatial spectra of
SAR image formation with azimuth interpolation after azimuth transform
Doerry, Armin W.; Martin, Grant D.; Holzrichter, Michael W. [Albuquerque, NM]
2008-07-08
Two-dimensional SAR data can be processed into a rectangular grid format by subjecting the SAR data to a Fourier transform operation, and thereafter to a corresponding interpolation operation. Because the interpolation operation follows the Fourier transform operation, the interpolation operation can be simplified, and the effect of interpolation errors can be diminished. This provides for the possibility of both reducing the re-grid processing time, and improving the image quality.
Measurement error in time-series analysis: a simulation study comparing modelled and monitored data.
Butland, Barbara K; Armstrong, Ben; Atkinson, Richard W; Wilkinson, Paul; Heal, Mathew R; Doherty, Ruth M; Vieno, Massimo
2013-11-13
Assessing health effects from background exposure to air pollution is often hampered by the sparseness of pollution monitoring networks. However, regional atmospheric chemistry-transport models (CTMs) can provide pollution data with national coverage at fine geographical and temporal resolution. We used statistical simulation to compare the impact on epidemiological time-series analysis of additive measurement error in sparse monitor data as opposed to geographically and temporally complete model data. Statistical simulations were based on a theoretical area of 4 regions each consisting of twenty-five 5 km × 5 km grid-squares. In the context of a 3-year Poisson regression time-series analysis of the association between mortality and a single pollutant, we compared the error impact of using daily grid-specific model data as opposed to daily regional average monitor data. We investigated how this comparison was affected if we changed the number of grids per region containing a monitor. To inform simulations, estimates (e.g. of pollutant means) were obtained from observed monitor data for 2003-2006 for national network sites across the UK and corresponding model data that were generated by the EMEP-WRF CTM. Average within-site correlations between observed monitor and model data were 0.73 and 0.76 for rural and urban daily maximum 8-hour ozone respectively, and 0.67 and 0.61 for rural and urban loge(daily 1-hour maximum NO2). When regional averages were based on 5 or 10 monitors per region, health effect estimates exhibited little bias. However, with only 1 monitor per region, the regression coefficient in our time-series analysis was attenuated by an estimated 6% for urban background ozone, 13% for rural ozone, 29% for urban background loge(NO2) and 38% for rural loge(NO2). For grid-specific model data the corresponding figures were 19%, 22%, 54% and 44% respectively, i.e. similar for rural loge(NO2) but more marked for urban loge(NO2). 
Even if correlations between model and monitor data appear reasonably strong, additive classical measurement error in model data may lead to appreciable bias in health effect estimates. As process-based air pollution models become more widely used in epidemiological time-series analysis, assessments of error impact that include statistical simulation may be useful.
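The attenuation mechanism the simulations quantify can be reproduced in a few lines with a linear-regression analogue: classical additive error in the exposure biases the estimated coefficient toward zero by the factor var(x)/(var(x)+var(u)). This is an illustration of the bias principle only, not the Poisson time-series setup of the study:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200000
true_beta = 1.0

x = rng.normal(0.0, 1.0, n)                # true exposure
y = true_beta * x + rng.normal(0.0, 1.0, n)
sigma_u = 0.7
x_err = x + rng.normal(0.0, sigma_u, n)    # classical additive error

beta_hat = np.cov(x_err, y)[0, 1] / np.var(x_err)
expected = true_beta / (1.0 + sigma_u**2)  # attenuation factor ~0.67
print(abs(beta_hat - expected) < 0.02)  # -> True
```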
Progress in NEXT Ion Optics Modeling
NASA Technical Reports Server (NTRS)
Emhoff, Jerold W.; Boyd, Iain D.
2004-01-01
Results are presented from an ion optics simulation code applied to the NEXT ion thruster geometry. The error in the potential field solver of the code is characterized, and methods and requirements for reducing this error are given. Results from a study on electron backstreaming using the improved field solver are given and shown to compare much better to experimental results than previous studies. Results are also presented on a study of the beamlet behavior in the outer radial apertures of the NEXT thruster. The low beamlet currents in this region allow over-focusing of the beam, causing direct impingement of ions on the accelerator grid aperture wall. Different possibilities for reducing this direct impingement are analyzed, with the conclusion that, of the methods studied, decreasing the screen grid aperture diameter eliminates direct impingement most effectively.
Automatic Overset Grid Generation with Heuristic Feedback Control
NASA Technical Reports Server (NTRS)
Robinson, Peter I.
2001-01-01
An advancing front grid generation system for structured Overset grids is presented which automatically modifies Overset structured surface grids and control lines until user-specified grid qualities are achieved. The system is demonstrated on two examples: the first refines a space shuttle fuselage control line until global truncation error is achieved; the second advances, from control lines, the space shuttle orbiter fuselage top and fuselage side surface grids until proper overlap is achieved. Surface grids are generated in minutes for complex geometries. The system is implemented as a heuristic feedback control (HFC) expert system which iteratively modifies the input specifications for Overset control line and surface grids. It is developed as an extension of modern control theory, production rules systems and subsumption architectures. The methodology provides benefits over the full knowledge lifecycle of an expert system for knowledge acquisition, knowledge representation, and knowledge execution. The vector/matrix framework of modern control theory systematically acquires and represents expert system knowledge. Missing matrix elements imply missing expert knowledge. The execution of the expert system knowledge is performed through symbolic execution of the matrix algebra equations of modern control theory. The dot product operation of matrix algebra is generalized for heuristic symbolic terms. Constant time execution is guaranteed.
NASA Astrophysics Data System (ADS)
Peng, L.; Sheffield, J.; Verbist, K. M. J.
2016-12-01
Hydrological predictions at regional-to-global scales are often hampered by the lack of meteorological forcing data. The use of large-scale gridded meteorological data can overcome this limitation, but these data are subject to regional biases and unrealistic values at the local scale. This is especially challenging in regions such as Chile, where climate exhibits high spatial heterogeneity as a result of its long latitudinal span and dramatic elevation changes. However, regional station-based observational datasets are not fully exploited and have the potential to constrain biases and spatial patterns. This study aims at adjusting precipitation and temperature estimates from the Princeton University global meteorological forcing (PGF) gridded dataset to improve hydrological simulations over Chile, by assimilating 982 gauges from the Dirección General de Aguas (DGA). To merge station data with the gridded dataset, we use a state-space estimation method to produce optimal gridded estimates, considering both the error of the station measurements and the gridded PGF product. The PGF daily precipitation, maximum and minimum temperature at 0.25° spatial resolution are adjusted for the period of 1979-2010. Precipitation and temperature gauges with long and continuous records (>70% temporal coverage) are selected, while the remaining stations are used for validation. The leave-one-out cross validation verifies the robustness of this data assimilation approach. The merged dataset is then used to force the Variable Infiltration Capacity (VIC) hydrological model over Chile at a daily time step, and the simulated streamflow is compared with observations. Our initial results show that the station-merged PGF precipitation effectively captures drizzle and the spatial pattern of storms. Overall, the merged dataset shows significant improvements over the original PGF, with reduced biases and stronger inter-annual variability. 
The invariant spatial pattern of errors between the station data and the gridded product opens up the possibility of merging real-time satellite and intermittent gauge observations to produce more accurate real-time hydrological predictions.
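In the scalar case, the state-space merging of a gridded background with a station measurement reduces to an inverse-variance weighted update. The sketch below is a simplified illustration of that building block, with invented values; the actual method operates on full gridded fields:

```python
def merge_estimates(grid_value, grid_var, station_value, station_var):
    """Scalar optimal (inverse-variance weighted) update of a gridded
    background value with a station measurement. Variances of both
    sources are assumed known; values here are illustrative."""
    gain = grid_var / (grid_var + station_var)
    merged = grid_value + gain * (station_value - grid_value)
    merged_var = (1.0 - gain) * grid_var
    return merged, merged_var

# A precise gauge (low variance) pulls the estimate strongly toward it,
# and the merged variance is lower than either input's.
value, var = merge_estimates(10.0, 4.0, 6.0, 1.0)
print(round(value, 3), round(var, 3))  # -> 6.8 0.8
```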
Dickinson, J.E.; James, S.C.; Mehl, S.; Hill, M.C.; Leake, S.A.; Zyvoloski, G.A.; Faunt, C.C.; Eddebbarh, A.-A.
2007-01-01
A flexible, robust method for linking parent (regional-scale) and child (local-scale) grids of locally refined models that use different numerical methods is developed based on a new, iterative ghost-node method. Tests are presented for two-dimensional and three-dimensional pumped systems that are homogeneous or that have simple heterogeneity. The parent and child grids are simulated using the block-centered finite-difference MODFLOW and control-volume finite-element FEHM models, respectively. The models are solved iteratively through head-dependent (child model) and specified-flow (parent model) boundary conditions. Boundary conditions for models with nonmatching grids or zones of different hydraulic conductivity are derived and tested against heads and flows from analytical or globally-refined models. Results indicate that for homogeneous two- and three-dimensional models with matched grids (integer number of child cells per parent cell), the new method is nearly as accurate as the coupling of two MODFLOW models using the shared-node method and, surprisingly, errors are slightly lower for nonmatching grids (noninteger number of child cells per parent cell). For heterogeneous three-dimensional systems, this paper compares two methods for each of the two sets of boundary conditions: external heads at head-dependent boundary conditions for the child model are calculated using bilinear interpolation or a Darcy-weighted interpolation; specified-flow boundary conditions for the parent model are calculated using model-grid or hydrogeologic-unit hydraulic conductivities. Results suggest that significantly more accurate heads and flows are produced when both Darcy-weighted interpolation and hydrogeologic-unit hydraulic conductivities are used, while the other methods produce larger errors at the boundary between the regional and local models. The tests suggest that, if posed correctly, the ghost-node method performs well. 
Additional testing is needed for highly heterogeneous systems. © 2007 Elsevier Ltd. All rights reserved.
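The Darcy-weighted interpolation compared in the tests can be illustrated in one dimension: requiring the Darcy flux to be continuous across a cell interface gives a conductivity-weighted head. This is a sketch of the principle, not the 3-D parent/child boundary scheme of the paper:

```python
def darcy_weighted_head(h1, h2, k1, k2, d1, d2):
    """Head at the interface between two cells with hydraulic
    conductivities k1, k2 whose centres lie d1, d2 from the
    interface, weighted so that the steady 1-D Darcy flux
    K1*(h1-hi)/d1 = K2*(hi-h2)/d2 is continuous."""
    w1 = k1 / d1
    w2 = k2 / d2
    return (w1 * h1 + w2 * h2) / (w1 + w2)

# With equal spacing, the interface head is pulled toward the
# high-conductivity side.
h = darcy_weighted_head(5.0, 1.0, k1=9.0, k2=1.0, d1=1.0, d2=1.0)
print(h)  # -> 4.6
```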
The challenges and frustrations of a veteran astronomical optician: Robert Lundin, 1880-1962
NASA Astrophysics Data System (ADS)
Briggs, John W.; Osterbrock, Donald E.
1998-12-01
Robert Lundin, apprenticed in nineteenth-century optical craftsmanship but employed in twentieth-century fabrication and engineering, suffered many frustrations during a nonetheless productive career. Son of Carl A.R. Lundin, a senior optician at the famous American firm of Alvan Clark & Sons, Robert grew up building telescopes. As a teenager, he assisted with projects including the 1-m [40-inch] objective for Yerkes Observatory. After his father's death in 1915, he became manager of the Clark Corporation and was responsible for many smaller, successful refractors and reflectors. Lundin also completed major projects, including a highly praised 50.8-cm achromat for Van Vleck Observatory, as well as a successful 33-cm astrograph used at Lowell to discover Pluto. In 1929, a dispute with the owners of the Clark Corporation led to Lundin's resignation and his creation of a new business, "C.A. Robert Lundin and Associates." This short-lived firm built several observatory refractors, including a 26.7-cm instrument for E.W. Rice, the retired chairman of General Electric. But none was entirely successful, and the Great Depression finished off the company. In 1933, Lundin took a job as head of Warner & Swasey's new optical shop, only to experience his greatest disasters. The 2.08-m [82-inch] reflector for McDonald Observatory was delayed for years until astronomers uncovered an error in Lundin's procedure for testing the primary mirror. A 38.1-cm photographic lens for the Naval Observatory was a complete failure. Under pressure to complete a 61-cm Schmidt camera, Lundin seems to have attempted to deceive visiting astronomers. After retirement in the mid-1940s, Lundin moved to Austin, Texas, the home of his daughter, where he died. His difficulties should not obscure his success with many instruments that continue to serve as important research and education tools.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Mereghetti, Paolo; Martinez, M.; Wade, Rebecca C.
Brownian dynamics (BD) simulations can be used to study very large molecular systems, such as models of the intracellular environment, using atomic-detail structures. Such simulations require strategies to contain the computational costs, especially for the computation of interaction forces and energies. A common approach is to compute interaction forces between macromolecules by precomputing their interaction potentials on three-dimensional discretized grids. For long-range interactions, such as electrostatics, grid-based methods are subject to finite size errors. We describe here the implementation of a Debye-Hückel correction to the grid-based electrostatic potential used in the SDA BD simulation software that was applied to simulate solutions of bovine serum albumin and of hen egg white lysozyme.
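The Debye-Hückel form underlying such a correction is the screened Coulomb potential, U(r) = q1 q2 exp(-kr)/(4 pi eps0 eps_r r), which can stand in for the electrostatic interaction beyond the edge of a precomputed grid. A minimal sketch with illustrative parameter values (not the SDA implementation):

```python
import math

def debye_huckel_potential(q1, q2, r, kappa, eps_r=78.5):
    """Screened Coulomb interaction energy (J) between two point
    charges in an ionic solution:
    U(r) = q1*q2*exp(-kappa*r) / (4*pi*eps0*eps_r*r).
    kappa is the inverse Debye length; values are illustrative."""
    eps0 = 8.8541878128e-12  # vacuum permittivity, F/m
    return q1 * q2 * math.exp(-kappa * r) / (4.0 * math.pi * eps0 * eps_r * r)

# Screening (kappa > 0) always weakens the bare Coulomb interaction.
e = 1.602176634e-19
bare = debye_huckel_potential(e, e, 1e-9, kappa=0.0)
screened = debye_huckel_potential(e, e, 1e-9, kappa=1.0e9)  # ~1/nm screening
print(screened < bare)  # -> True
```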
A Solution Adaptive Technique Using Tetrahedral Unstructured Grids
NASA Technical Reports Server (NTRS)
Pirzadeh, Shahyar Z.
2000-01-01
An adaptive unstructured grid refinement technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.
Clark, V Ralph; Schrire, Brian D; Barker, Nigel P
2015-01-01
Two new species of Indigofera L. (Leguminosae) are described from the Sneeuberg Centre of Floristic Endemism on the southern Great Escarpment, Eastern and Western Cape Provinces, South Africa. Both species are localised high-altitude endemics. Indigofera magnifica Schrire & V.R. Clark is confined to the summit plateau of the Toorberg-Koudeveldberg-Meelberg west of Graaff-Reinet, and complements other western Sneeuberg endemics such as Erica passerinoides (Bolus) E.G.H. Oliv. and Faurea recondita Rourke & V.R. Clark. Indigofera asantasanensis Schrire & V.R. Clark is confined to a small area east of Graaff-Reinet, and complements several other eastern Sneeuberg endemics such as Euryops exsudans B. Nord. & V.R. Clark and Euryops proteoides B. Nord. & V.R. Clark. Based on morphology, both new species belong to the Cape Clade of Indigofera, supporting a biogeographical link between the Cape Floristic Region and the Sneeuberg, as well as with the rest of the eastern Great Escarpment.
NASA Astrophysics Data System (ADS)
Tang, Guoqiang; Behrangi, Ali; Long, Di; Li, Changming; Hong, Yang
2018-04-01
Rain gauge observations are commonly used to evaluate the quality of satellite precipitation products. However, the inherent difference between point-scale gauge measurements and areal satellite precipitation, i.e. accumulation in time at a point in space versus aggregation in space at an instant in time, has an important effect on the accuracy and precision of qualitative and quantitative evaluation results. This study aims to quantify the uncertainty caused by various combinations of spatiotemporal scales (0.1°-0.8° and 1-24 h) of gauge network designs in the densely gauged and relatively flat Ganjiang River basin, South China, in order to evaluate the state-of-the-art satellite precipitation, the Integrated Multi-satellite Retrievals for Global Precipitation Measurement (IMERG). For comparison with the dense gauge network serving as "ground truth", 500 sparse gauge networks are generated through random combinations of gauge numbers at each set of spatiotemporal scales. Results show that all sparse gauge networks persistently underestimate the performance of IMERG according to most metrics. However, the probability of detection is overestimated because hit and miss events are likely to be fewer than the reference numbers derived from dense gauge networks. A nonlinear error function of spatiotemporal scales and the number of gauges in each grid pixel is developed to estimate the errors of using gauges to evaluate satellite precipitation. Coefficients of determination of the fitting are above 0.9 for most metrics. The error function can also be used to estimate the required minimum number of gauges in each grid pixel to meet a predefined error level. This study suggests that the actual quality of satellite precipitation products could be better than conventionally evaluated or expected, and it should help researchers who are not subject-matter experts to better understand the explicit uncertainties when using point-scale gauge observations to evaluate areal products.
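The sparse-network experiment can be mimicked with synthetic data: subsample gauges from a dense network, form areal means, and score a "satellite" series against them. All distributions, error parameters, and names below are invented for illustration; the point is only that sparse networks understate the product's true skill:

```python
import numpy as np

rng = np.random.default_rng(42)
n_gauges, n_times = 50, 500

# Synthetic "truth" at a dense gauge network, and a satellite series
# tracking the areal mean with multiplicative + additive error.
truth = rng.gamma(2.0, 2.0, (n_times, n_gauges))
satellite = 0.9 * truth.mean(axis=1) + rng.normal(0.0, 1.0, n_times)

def correlation_metric(n_sample):
    """Median correlation between the satellite series and areal
    means from random sparse sub-networks of n_sample gauges."""
    scores = []
    for _ in range(200):
        idx = rng.choice(n_gauges, n_sample, replace=False)
        areal = truth[:, idx].mean(axis=1)
        scores.append(np.corrcoef(satellite, areal)[0, 1])
    return float(np.median(scores))

# Sparse networks understate the product's skill.
print(correlation_metric(2) < correlation_metric(40))  # -> True
```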
ERIC Educational Resources Information Center
Witte, Kevin C.
2006-01-01
The odyssey of the Lewis and Clark Expedition continues to capture the hearts of those who love tales of adventure and unknown lands. In light of the bicentennial celebration that began in 2003 and continued through 2006, the popularity and aggrandizement of Meriwether Lewis, William Clark, and their Corps of Discovery has never been greater.…
2003-10-28
KENNEDY SPACE CENTER, FLA. -- Dr. Jonathan Clark (right), husband of STS-107 Mission Specialist Laurel Clark, and their son (left) visit a new residence hall at the Florida Institute of Technology (FIT) in Melbourne, Fla., named for his late wife. Family members of the STS-107 astronauts, other dignitaries, members of the university community and the public gathered for a dedication ceremony for the Columbia Village at FIT. Each of the seven new residence halls in the complex is named for one of the STS-107 astronauts who perished during the Columbia accident -- Rick Husband, Willie McCool, Laurel Clark, Michael Anderson, David Brown, Kalpana Chawla, and Ilan Ramon.
The Space-Wise Global Gravity Model from GOCE Nominal Mission Data
NASA Astrophysics Data System (ADS)
Gatti, A.; Migliaccio, F.; Reguzzoni, M.; Sampietro, D.; Sanso, F.
2011-12-01
In the framework of the GOCE data analysis, the space-wise approach implements a multi-step collocation solution for the estimation of a global geopotential model in terms of spherical harmonic coefficients and their error covariance matrix. The main idea is to use the collocation technique to exploit the spatial correlation of the gravity field in the GOCE data reduction. In particular the method consists of an along-track Wiener filter, a collocation gridding at satellite altitude and a spherical harmonic analysis by integration. All these steps are iterated, also to account for the rotation between the local orbital and gradiometer reference frames. Error covariances are computed by Monte Carlo simulations. The first release of the space-wise approach was presented at the ESA Living Planet Symposium in July 2010. This model was based on only two months of GOCE data and partially contained a priori information coming from other existing gravity models, especially at low degrees and low orders. A second release was distributed after the 4th International GOCE User Workshop in May 2011. In this solution, based on eight months of GOCE data, all the dependencies on external gravity information were removed, thus giving rise to a GOCE-only space-wise model. However this model showed an over-regularization at the highest degrees of the spherical harmonic expansion due to the combination technique of intermediate solutions (based on about two months of data). In this work a new space-wise solution is presented. It is based on all nominal mission data from November 2009 to mid April 2011, and its main novelty is that the intermediate solutions are now computed in such a way as to avoid over-regularization in the final solution. Beyond the spherical harmonic coefficients of the global model and their error covariance matrix, the space-wise approach is able to deliver as by-products a set of spherical grids of potential and of its second derivatives at mean satellite altitude.
These grids have an information content that is very similar to the original along-orbit data, but they are much easier to handle. In addition they are estimated by local least-squares collocation and therefore, although computed by a unique global covariance function, they could yield more information at local level than the spherical harmonic coefficients of the global model. For this reason these grids seem to be useful for local geophysical investigations. The estimated grids with their estimated errors are presented in this work together with proposals on possible future improvements. A test to compare the different information contents of the along-orbit data, the gridded data and the spherical harmonic coefficients is also shown.
Estimation of sampling error uncertainties in observed surface air temperature change in China
NASA Astrophysics Data System (ADS)
Hua, Wei; Shen, Samuel S. P.; Weithmann, Alexander; Wang, Huijun
2017-08-01
This study examines the sampling error uncertainties in the monthly surface air temperature (SAT) change in China over recent decades, focusing on the uncertainties of gridded data, national averages, and linear trends. Results indicate that large sampling error variances appear in the station-sparse areas of northern and western China, with maximum values exceeding 2.0 K2, while small sampling error variances are found in the station-dense areas of southern and eastern China, with most grid values being less than 0.05 K2. In general, negative temperature anomalies existed in each month prior to the 1980s, and warming began thereafter, accelerating in the early and mid-1990s. An increasing trend in the SAT series was observed for each month of the year, with the largest temperature increase and highest uncertainty of 0.51 ± 0.29 K (10 year)-1 occurring in February and the weakest trend and smallest uncertainty of 0.13 ± 0.07 K (10 year)-1 in August. The sampling error uncertainties in the national average annual mean SAT series are not sufficiently large to alter the conclusion of persistent warming in China. In addition, the sampling error uncertainties in the SAT series differ clearly from those obtained with other uncertainty estimation methods, which is a plausible reason for the inconsistent variations between our estimate and those of other studies during this period.
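A decadal trend and its standard error of the kind reported above can be obtained from an ordinary least-squares fit. The sketch below shows only the generic calculation; it omits the paper's sampling-error model, and the input series is invented:

```python
import math

def decadal_trend(years, temps):
    """Least-squares linear trend and its standard error, expressed
    per decade (generic OLS, not the paper's uncertainty method)."""
    n = len(years)
    xm = sum(years) / n
    ym = sum(temps) / n
    sxx = sum((x - xm) ** 2 for x in years)
    sxy = sum((x - xm) * (y - ym) for x, y in zip(years, temps))
    slope = sxy / sxx
    resid = [y - (ym + slope * (x - xm)) for x, y in zip(years, temps)]
    se = math.sqrt(sum(r * r for r in resid) / (n - 2) / sxx)
    return 10 * slope, 10 * se  # K per decade

# Perfectly linear synthetic series warming at 0.02 K/year
yrs = list(range(1960, 2010))
trend, err = decadal_trend(yrs, [0.02 * (y - 1960) for y in yrs])
```

With real station data the residual scatter, and hence the standard error, would be substantial.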
Hartman, C. Alex; Ackerman, Joshua T.; Eagles-Smith, Collin A.; Herzog, Mark
2016-01-01
In birds where males and females are similar in size and plumage, sex determination by alternative means is necessary. Discriminant function analysis based on external morphometrics was used to distinguish males from females in two closely related species: Western Grebe (Aechmophorus occidentalis) and Clark's Grebe (A. clarkii). Additionally, discriminant function analysis was used to evaluate morphometric divergence between Western and Clark's grebe adults and eggs. Aechmophorus grebe adults (n = 576) and eggs (n = 130) were sampled across 29 lakes and reservoirs throughout California, USA, and adult sex was determined using molecular analysis. Both Western and Clark's grebes exhibited considerable sexual size dimorphism. Males averaged 6–26% larger than females among seven morphological measurements, with the greatest sexual size dimorphism occurring for bill morphometrics. Discriminant functions based on bill length, bill depth, and short tarsus length correctly assigned sex to 98% of Western Grebes, and a function based on bill length and bill depth correctly assigned sex to 99% of Clark's Grebes. Further, a simplified discriminant function based only on bill depth correctly assigned sex to 96% of Western Grebes and 98% of Clark's Grebes. In contrast, external morphometrics were not suitable for differentiating between Western and Clark's grebe adults or their eggs, with correct classification rates of discriminant functions of only 60%, 63%, and 61% for adult males, adult females, and eggs, respectively. Our results indicate little divergence in external morphology between species of Aechmophorus grebes, and instead separation is much greater between males and females.
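A one-variable discriminant function, like the simplified bill-depth rule above, reduces under an equal-variance assumption to a midpoint cutoff between the class means. The sketch below is illustrative only; the measurements and resulting threshold are invented, not the paper's fitted coefficients:

```python
def fit_univariate_discriminant(male_vals, female_vals):
    """Fisher-style cutpoint for a single measurement: assuming equal
    within-class variances, the decision boundary is the midpoint of
    the two class means."""
    male_mean = sum(male_vals) / len(male_vals)
    female_mean = sum(female_vals) / len(female_vals)
    return (male_mean + female_mean) / 2

# Hypothetical bill-depth samples (mm); males are larger on average
cut = fit_univariate_discriminant([7.2, 7.5, 7.8], [6.0, 6.2, 6.4])
sex = "male" if 7.4 >= cut else "female"
```

The paper's multivariate functions (bill length, bill depth, tarsus) generalize the same idea to a weighted sum of measurements.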
The Diagnostic Value of the Clarke Sign in Assessing Chondromalacia Patella
Doberstein, Scott T; Romeyn, Richard L; Reineke, David M
2008-01-01
Context: Various techniques have been described for assessing conditions that cause pain at the patellofemoral (PF) joint. The Clarke sign is one such test, but the diagnostic value of this test in assessing chondromalacia patella is unknown. Objective: To (1) investigate the diagnostic value of the Clarke sign in assessing the presence of chondromalacia patella using arthroscopic examination of the PF joint as the “gold standard,” and (2) provide a historical perspective of the Clarke sign as a clinical diagnostic test. Design: Validation study. Setting: All patients of one of the investigators who had knee pain or injuries unrelated to the patellofemoral joint and were scheduled for arthroscopic surgery were recruited for this study. Patients or Other Participants: A total of 106 otherwise healthy individuals with no history of patellofemoral pain or dysfunction volunteered. Main Outcome Measure(s): The Clarke sign was performed on the surgical knee by a single investigator in the clinic before surgery. A positive test was indicated by the presence of pain sufficient to prevent the patient from maintaining a quadriceps muscle contraction against manual resistance for longer than 2 seconds. The preoperative result was compared with visual evidence of chondromalacia patella during arthroscopy. Results: Sensitivity was 0.39, specificity was 0.67, likelihood ratio for a positive test was 1.18, likelihood ratio for a negative test was 0.91, positive predictive value was 0.25, and negative predictive value was 0.80. Conclusions: Diagnostic validity values for the use of the Clarke sign in assessing chondromalacia patella were unsatisfactory, supporting suggestions that it has poor diagnostic value as a clinical examination technique. Additionally, an extensive search of the available literature for the Clarke sign reveals multiple problems with the test, causing significant confusion for clinicians. 
Therefore, the use of the Clarke sign as a routine part of a knee examination is not beneficial, and its use should be discontinued. PMID:18345345
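The diagnostic statistics reported above all follow from a standard 2x2 contingency table. The counts below are reconstructed to be consistent with the published values (n = 106), not taken from the paper:

```python
def diagnostic_values(tp, fp, fn, tn):
    """Standard diagnostic accuracy statistics from a 2x2 table of
    test result (positive/negative) vs. disease (present/absent)."""
    sens = tp / (tp + fn)          # sensitivity
    spec = tn / (tn + fp)          # specificity
    lr_pos = sens / (1 - spec)     # likelihood ratio, positive test
    lr_neg = (1 - sens) / spec     # likelihood ratio, negative test
    ppv = tp / (tp + fp)           # positive predictive value
    npv = tn / (tn + fn)           # negative predictive value
    return sens, spec, lr_pos, lr_neg, ppv, npv

# Counts chosen so the derived statistics match the reported values
sens, spec, lr_pos, lr_neg, ppv, npv = diagnostic_values(tp=9, fp=27, fn=14, tn=56)
```

Note that likelihood ratios computed from exact counts differ slightly from the paper's figures, which appear to be derived from the rounded sensitivity and specificity.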
Daupin, Johanne; Atkinson, Suzanne; Bédard, Pascal; Pelchat, Véronique; Lebel, Denis; Bussières, Jean-François
2016-12-01
The medication-use system in hospitals is very complex. To improve health professionals' awareness of the risks of errors related to the medication-use system, a simulation of medication errors was created. The main objective was to assess the medical, nursing and pharmacy staffs' ability to identify errors related to the medication-use system using a simulation. The secondary objective was to assess their level of satisfaction. This descriptive cross-sectional study was conducted in a 500-bed mother-and-child university hospital. A multidisciplinary group set up 30 situations and replicated a patient room and a care unit pharmacy. All hospital staff, including nurses, physicians, pharmacists and pharmacy technicians, were invited. Participants had to detect whether each situation contained an error and fill out a response grid. They also answered a satisfaction survey. The simulation ran for a total of 100 hours. A total of 230 professionals visited the simulation, 207 handed in a response grid and 136 answered the satisfaction survey. The participants' overall rate of correct answers was 67.5% ± 13.3% (4073/6036). Among the least detected errors were situations involving a Y-site infusion incompatibility, an oral syringe preparation and the patient's identification. Participants mainly considered the simulation effective in identifying incorrect practices (132/136, 97.8%) and relevant to their practice (129/136, 95.6%). Most of them (114/136; 84.4%) intended to change their practices in view of their exposure to the simulation. We implemented a realistic medication-use system error simulation in a mother-and-child hospital, with a wide audience. This simulation was an effective, relevant and innovative tool to raise health care professionals' awareness of critical processes. © 2016 John Wiley & Sons, Ltd.
Metering error quantification under voltage and current waveform distortion
NASA Astrophysics Data System (ADS)
Wang, Tao; Wang, Jia; Xie, Zhi; Zhang, Ran
2017-09-01
With the integration of more and more renewable energy sources and distorting loads into the power grid, voltage and current waveform distortion causes metering errors in smart meters. Because of its negative effects on metering accuracy and fairness, the combined energy metering error is an important subject of study. In this paper, after comparing theoretical metering values with recorded values under different meter modes for linear and nonlinear loads, a method is proposed to quantify the metering mode error under waveform distortion. Based on metering and time-division multiplier principles, a method to quantify the metering accuracy error is also proposed. By analyzing the mode error and the accuracy error together, a comprehensive error analysis method is presented that is suitable for new energy sources and nonlinear loads. The proposed method has been validated by simulation.
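Under waveform distortion, active power is the sum of per-harmonic contributions, so a metering mode that measures only the fundamental misses the harmonic terms. A sketch of that mode error; the harmonic magnitudes and phases below are illustrative, not from the paper:

```python
import math

def active_power(voltage_harmonics, current_harmonics):
    """Total active power P = sum over harmonics of V_h * I_h * cos(phi_h),
    given (rms magnitude, phase in radians) pairs per harmonic order."""
    p = 0.0
    for (v, pv), (i, pi) in zip(voltage_harmonics, current_harmonics):
        p += v * i * math.cos(pv - pi)
    return p

# Fundamental plus a 3rd harmonic drawing power in anti-phase
v = [(230.0, 0.0), (10.0, 0.0)]
i = [(5.0, 0.0), (1.0, math.pi)]
p_true = active_power(v, i)          # metering over all harmonics
p_fund = active_power(v[:1], i[:1])  # fundamental-only metering mode
mode_error_pct = 100 * (p_fund - p_true) / p_true
```

Here the fundamental-only mode over-registers slightly, because the harmonic term carries power flowing back toward the source.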
Impact of Measurement Error on Synchrophasor Applications
DOE Office of Scientific and Technical Information (OSTI.GOV)
Liu, Yilu; Gracia, Jose R.; Ewing, Paul D.
2015-07-01
Phasor measurement units (PMUs), a type of synchrophasor, are powerful diagnostic tools that can help avert catastrophic failures in the power grid. Because of this, PMU measurement errors are particularly worrisome. This report examines the internal and external factors contributing to PMU phase angle and frequency measurement errors and gives a reasonable explanation for them. It also analyzes the impact of those measurement errors on several synchrophasor applications: event location detection, oscillation detection, islanding detection, and dynamic line rating. The primary finding is that dynamic line rating is more likely to be influenced by measurement error. Other findings include the possibility of reporting nonoscillatory activity as an oscillation as the result of error, failing to detect oscillations submerged by error, and the unlikely impact of error on event location and islanding detection.
Chew, Robert F; Amer, Safaa; Jones, Kasey; Unangst, Jennifer; Cajka, James; Allpress, Justine; Bruhn, Mark
2018-05-09
Conducting surveys in low- and middle-income countries is often challenging because many areas lack a complete sampling frame, have outdated census information, or have limited data available for designing and selecting a representative sample. Geosampling is a probability-based, gridded population sampling method that addresses some of these issues by using geographic information system (GIS) tools to create logistically manageable area units for sampling. GIS grid cells are overlaid to partition a country's existing administrative boundaries into area units that vary in size from 50 m × 50 m to 150 m × 150 m. To avoid sending interviewers to unoccupied areas, researchers manually classify grid cells as "residential" or "nonresidential" through visual inspection of aerial images. "Nonresidential" units are then excluded from sampling and data collection. This process of manually classifying sampling units has drawbacks since it is labor intensive, prone to human error, and creates the need for simplifying assumptions during calculation of design-based sampling weights. In this paper, we discuss the development of a deep learning classification model to predict whether aerial images are residential or nonresidential, thus reducing manual labor and eliminating the need for simplifying assumptions. On our test sets, the model performs comparably to a human-level baseline in both Nigeria (94.5% accuracy) and Guatemala (96.4% accuracy), and outperforms baseline machine learning models trained on crowdsourced or remote-sensed geospatial features. Additionally, our findings suggest that this approach can work well in new areas with relatively modest amounts of training data. Gridded population sampling methods like geosampling are becoming increasingly popular in countries with outdated or inaccurate census data because of their timeliness, flexibility, and cost.
Using deep learning models directly on satellite images, we provide a novel method for sample frame construction that identifies residential gridded aerial units. In cases where manual classification of satellite images is used to (1) correct for errors in gridded population data sets or (2) classify grids where population estimates are unavailable, this methodology can help reduce annotation burden with comparable quality to human analysts.
NASA Technical Reports Server (NTRS)
Stowe, Larry; Hucek, Richard; Ardanuy, Philip; Joyce, Robert
1994-01-01
Much of the new record of broadband earth radiation budget satellite measurements to be obtained during the late 1990s and early twenty-first century will come from the dual-radiometer Clouds and Earth's Radiant Energy System Instrument (CERES-I) flown aboard sun-synchronous polar orbiters. Simulation studies conducted in this work for an early afternoon satellite orbit indicate that spatial root-mean-square (rms) sampling errors of instantaneous CERES-I shortwave flux estimates will range from about 8.5 to 14.0 W/sq m on a 2.5 deg latitude and longitude grid resolution. Rms errors in longwave flux estimates are only about 20% as large and range from 1.5 to 3.5 W/sq m. These results are based on an optimal cross-track scanner design that includes 50% footprint overlap to eliminate gaps in the top-of-the-atmosphere coverage, and a 'smallest' footprint size to increase the ratio of the number of observations lying within grid area boundaries to the number of observations lying on them. Total instantaneous measurement error also depends on the variability of anisotropic reflectance and emission patterns and on the retrieval methods used to generate target area fluxes. Three retrieval procedures using both CERES-I scanners (cross-track and rotating azimuth plane) are considered. (1) The baseline Earth Radiation Budget Experiment (ERBE) procedure, which assumes that errors due to the use of mean angular dependence models (ADMs) in the radiance-to-flux inversion process nearly cancel when averaged over grid areas. (2) The collocation approach, in which instantaneous ADMs are estimated from the multiangular, collocated observations of the two scanners; these observed models replace the mean models in the computation of satellite flux estimates. (3) The scene flux approach, which conducts separate target-area retrievals for each ERBE scene category and combines their results using area weighting by scene type.
The ERBE retrieval performs best when the simulated radiance field departs from the ERBE mean models by less than 10%. For larger perturbations, both the scene flux and collocation methods produce less error than the ERBE retrieval. The scene flux technique is preferable, however, because it involves fewer restrictive assumptions.
Calculations of Flowfield About Indented Nosetips,
1982-08-23
agreement is good. (NSWC TR 82-286) FOREWORD: A finite difference computer program has been... Nomenclature: Cp, Cv: specific heat at constant pressure and volume, respectively; e: total energy per unit volume; E, F, H, R, S, T: functions of U; ΔHT, HT: error in total enthalpy and total enthalpy, respectively; i, j: grid index in the ξ and η directions, respectively; I: identity matrix; J, K: maximum grid point in the ξ and η directions, respectively.
Wang, Shiyao; Deng, Zhidong; Yin, Gang
2016-01-01
A high-performance differential global positioning system (GPS) receiver with real-time kinematics provides absolute localization for driverless cars. However, it is not only susceptible to the multipath effect but also unable to effectively fulfill precise error correction over a wide range of driving areas. This paper proposes an accurate GPS–inertial measurement unit (IMU)/dead reckoning (DR) data fusion method based on a set of predictive models and occupancy grid constraints. First, we employ a set of autoregressive and moving average (ARMA) equations that have different structural parameters to build maximum likelihood models of raw navigation. Second, both grid constraints and spatial consensus checks on all predictive results and current measurements are applied to remove outliers. Navigation data that satisfy a stationary stochastic process are further fused to achieve accurate localization results. Third, the standard deviation of multimodal data fusion can be pre-specified by grid size. Finally, we performed extensive field tests on a variety of real urban scenarios. The experimental results demonstrate that the method can significantly smooth small jumps in bias and considerably reduce accumulated position errors due to DR. With low computational complexity, the position accuracy of our method surpasses existing state-of-the-art methods on the same dataset, and the new data fusion method is practically applied in our driverless car. PMID:26927108
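The prediction-plus-consensus idea can be sketched with a fixed-coefficient autoregressive model and a tolerance check. In the paper the ARMA coefficients are fit by maximum likelihood and the tolerance derives from the occupancy grid, so everything below is a simplified illustration:

```python
def ar2_predict(history, a1, a2):
    """One-step AR(2) prediction x_t = a1*x_{t-1} + a2*x_{t-2};
    coefficients are fixed here but would be fitted in practice."""
    return a1 * history[-1] + a2 * history[-2]

def consensus_check(predictions, measurement, tol):
    """Accept a new measurement only if it agrees with every model
    prediction within tol; otherwise flag it as an outlier."""
    return all(abs(measurement - p) <= tol for p in predictions)

# Constant-velocity motion: x_t = 2*x_{t-1} - x_{t-2} extrapolates the line
preds = [ar2_predict([0.0, 1.0, 2.0], 2.0, -1.0)]
ok = consensus_check(preds, 3.2, tol=0.5)   # plausible GPS fix, accepted
bad = consensus_check(preds, 8.0, tol=0.5)  # multipath-like jump, rejected
```

Measurements that pass the check would then be fused (e.g., variance-weighted) with the DR estimate.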
Code of Federal Regulations, 2011 CFR
2011-04-01
... reliance on settled disciplinary proceedings in some circumstances. See In the Matter of Michael J. Clark... significant sanctions for serious rule violations—whether settlements or adjudications), aff'd sub nom., Clark... in Sections 8a(2) and 8a(3) of the Act. The Commission held in In the Matter of Clark that statutory...
A Study on How to Implement an Effective Marketing and Education Program for Coordinated Care
1992-11-01
could play. Philip Kotler and Roberta N. Clarke (1987) recognize that the role marketing plays in health care organizations varies greatly. It is...accepted by professional marketers is provided by Kotler and Clarke (1987): Marketing is the analysis, planning, implementation, and control of carefully...Gaithersburg, MD: Aspen. Kotler, P. & Clarke, R. N. (1987). Marketing for health care organizations. Englewood Cliffs, NJ: Prentice-Hall. Leebov, W. (1988
Comparison of Full-Scale Propellers Having R.A.F.-6 and Clark Y Airfoil Sections
NASA Technical Reports Server (NTRS)
Freeman, Hugh B
1932-01-01
In this report the efficiencies of two series of propellers having two types of blade sections are compared. Six full-scale propellers were used, three having R.A.F.-6 and three Clark Y airfoil sections with thickness/chord ratios of 0.06, 0.08, and 0.10. The propellers were tested at five pitch settings, which covered the range ordinarily used in practice. The propellers having the Clark Y sections gave the highest peak efficiency at the low pitch settings. At the high pitch settings, the propellers with R.A.F.-6 sections gave about the same maximum efficiency as the Clark Y propellers and were more efficient for the conditions of climb and take-off.
STS-107 Crew Interviews: Laurel Clark, Mission Specialist
NASA Technical Reports Server (NTRS)
2002-01-01
STS-107 Mission Specialist 4 Laurel Clark is seen during this preflight interview, where she gives a quick overview of the mission before answering questions about her inspiration to become an astronaut and her career path. Clark outlines her role in the mission in general, and specifically in conducting onboard science experiments. She discusses the following suite of experiments and instruments in detail: ARMS (Advanced Respiratory Monitoring System) and the European Space Agency's Biopack. Clark also mentions on-board activities and responsibilities during launch and reentry, mission training, and microgravity research. In addition, she touches on the use of crew members as research subjects including pre and postflight monitoring activities, the emphasis on crew safety and the value of international cooperation.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Udhay Ravishankar; Milos Manic
2013-08-01
This paper presents a micro-grid simulator tool useful for implementing and testing multi-agent controllers (SGridSim). As a common engineering practice it is important to have a tool that simplifies the modeling of the salient features of a desired system. In electric micro-grids, these salient features are the voltage and power distributions within the micro-grid. Current simplified electric power grid simulator tools such as PowerWorld, PowerSim, Gridlab, etc., model only the power distribution features of a desired micro-grid. Other power grid simulators, such as Simulink, Modelica, etc., use detailed modeling to accommodate the voltage distribution features. This paper presents an SGridSim micro-grid simulator tool that simplifies the modeling of both the voltage and power distribution features in a desired micro-grid. The SGridSim tool accomplishes this simplified modeling by using Effective Node-to-Node Complex Impedance (EN2NCI) models of components that typically make up a micro-grid. The term EN2NCI model means that the impedance-based components of a micro-grid are modeled as single impedances tied between their respective voltage nodes on the micro-grid. Hence the benefits of the presented SGridSim tool are: 1) simulation of a micro-grid is performed strictly in the complex domain; 2) faster simulation of a micro-grid by avoiding the simulation of detailed transients. An example micro-grid model was built using the SGridSim tool and tested to simulate both the voltage and power distribution features with a total absolute relative error of less than 6%.
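The idea of a single effective complex impedance between voltage nodes can be illustrated with a one-branch calculation carried out entirely in the complex domain. The per-unit values below are illustrative, not from the paper:

```python
def branch_flow(v_from, v_to, z):
    """Complex current and sending-end power through a single
    node-to-node impedance: I = (V_from - V_to) / Z, S = V * conj(I)."""
    i = (v_from - v_to) / z
    s = v_from * i.conjugate()  # complex power at the sending node
    return i, s

# Two voltage nodes joined by one effective impedance (per-unit values)
i, s = branch_flow(v_from=1.02 + 0j, v_to=1.00 + 0j, z=0.01 + 0.05j)
```

Real and imaginary parts of `s` give active and reactive power, so both the voltage and power pictures fall out of one complex-domain solve.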
Nutaro, James; Kuruganti, Teja
2017-02-24
Numerical simulations of the wave equation that are intended to provide accurate time domain solutions require a computational mesh with grid points separated by a distance less than the wavelength of the source term and initial data. However, calculations of radio signal pathloss generally do not require accurate time domain solutions. This paper describes an approach for calculating pathloss by using the finite difference time domain and transmission line matrix models of wave propagation on a grid with points separated by distances much greater than the signal wavelength. The calculated pathloss can be kept close to the true value for free-space propagation with an appropriate selection of initial conditions. This method can also simulate diffraction with an error governed by the ratio of the signal wavelength to the grid spacing.
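The free-space reference against which such coarse-grid results can be judged is the Friis pathloss formula; a small sketch:

```python
import math

def free_space_pathloss_db(distance_m, freq_hz):
    """Friis free-space pathloss in dB: 20*log10(4*pi*d/lambda),
    the analytic value a propagation simulation should reproduce."""
    c = 299_792_458.0           # speed of light, m/s
    lam = c / freq_hz           # wavelength, m
    return 20 * math.log10(4 * math.pi * distance_m / lam)

# 900 MHz at 1 km: roughly 91.5 dB
pl = free_space_pathloss_db(1000.0, 900e6)
```

On a coarse grid the simulated loss drifts from this value, which is the error the initial-condition selection is designed to control.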
Convergence issues in domain decomposition parallel computation of hovering rotor
NASA Astrophysics Data System (ADS)
Xiao, Zhongyun; Liu, Gang; Mou, Bin; Jiang, Xiong
2018-05-01
The implicit LU-SGS time integration algorithm has been widely used in parallel computation in spite of its lack of information from adjacent domains. When applied to parallel computation of hovering rotor flows in a rotating frame, it gives rise to convergence issues. To remedy the problem, three LU factorization-based implicit schemes (LU-SGS, DP-LUR and HLU-SGS) are investigated comparatively. A test case of pure grid rotation is designed to verify these algorithms, and it shows that the LU-SGS algorithm introduces errors on boundary cells. When partition boundaries are circumferential, errors arise in proportion to grid speed, accumulating as the rotation proceeds and ultimately leading to computational failure. Meanwhile, the DP-LUR and HLU-SGS methods show good convergence owing to their boundary treatment, which is desirable in domain decomposition parallel computations.
Zanderigo, Francesca; Sparacino, Giovanni; Kovatchev, Boris; Cobelli, Claudio
2007-09-01
The aim of this article was to use continuous glucose error-grid analysis (CG-EGA) to assess the accuracy of two time-series modeling methodologies recently developed to predict glucose levels ahead of time using continuous glucose monitoring (CGM) data. We considered subcutaneous time series of glucose concentration monitored every 3 minutes for 48 hours by the minimally invasive CGM sensor Glucoday® (Menarini Diagnostics, Florence, Italy) in 28 type 1 diabetic volunteers. Two prediction algorithms, based on first-order polynomial and autoregressive (AR) models, respectively, were considered with prediction horizons of 30 and 45 minutes and forgetting factors (ff) of 0.2, 0.5, and 0.8. CG-EGA was used on the predicted profiles to assess their point and dynamic accuracies using original CGM profiles as reference. Continuous glucose error-grid analysis showed that the accuracy of both prediction algorithms is overall very good and that their performance is similar from a clinical point of view. However, the AR model seems preferable for hypoglycemia prevention. CG-EGA also suggests that, irrespective of the time-series model, the use of ff = 0.8 yields the most accurate readings in all glucose ranges. For the first time, CG-EGA is proposed as a tool to assess clinically relevant performance of a prediction method separately at hypoglycemia, euglycemia, and hyperglycemia. In particular, we have shown that CG-EGA can be helpful in comparing different prediction algorithms, as well as in optimizing their parameters.
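A first-order polynomial predictor with a forgetting factor amounts to a weighted linear fit extrapolated over the prediction horizon, with older samples down-weighted geometrically. A sketch under that interpretation; the published algorithm's details may differ, and the CGM samples below are invented:

```python
def predict_ahead(times, glucose, ff, horizon):
    """Weighted least-squares line through past CGM samples, where the
    sample k steps in the past gets weight ff**k, extrapolated
    horizon minutes beyond the latest sample."""
    n = len(times)
    w = [ff ** (n - 1 - k) for k in range(n)]
    sw = sum(w)
    tm = sum(wi * t for wi, t in zip(w, times)) / sw
    gm = sum(wi * g for wi, g in zip(w, glucose)) / sw
    num = sum(wi * (t - tm) * (g - gm) for wi, t, g in zip(w, times, glucose))
    den = sum(wi * (t - tm) ** 2 for wi, t in zip(w, times))
    slope = num / den
    return gm + slope * (times[-1] + horizon - tm)

# 3-minute CGM samples rising 2 mg/dL per sample; predict 30 min ahead
pred = predict_ahead([0, 3, 6, 9], [100, 102, 104, 106], ff=0.8, horizon=30)
```

A small ff reacts quickly but amplifies sensor noise; ff = 0.8 smooths more, consistent with the accuracy finding above.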
Clark county monitoring program
DOE Office of Scientific and Technical Information (OSTI.GOV)
Conway, Sheila; Auger, Jeremy; Navies, Irene
2007-07-01
Available in abstract form only. Full text of publication follows: Since 1988, Clark County has been one of the counties designated by the United States Department of Energy (DOE) as an 'Affected Unit of Local Government' (AULG). The AULG designation is an acknowledgement by the federal government that the county could be negatively impacted to a considerable degree by activities associated with the Yucca Mountain High Level Nuclear Waste Repository. These negative effects would have an impact on residents as individuals and the community as a whole. As an AULG, Clark County is authorized to identify 'any potential economic, social, public health and safety, and environmental impacts' of the potential repository (42 USC Section 10135(C)(1)(B)(1)). Toward this end, Clark County has conducted numerous studies of potential impacts, many of which are summarized in Clark County's Impact Assessment Report, submitted to the DOE and the President of the United States in February 2002. Given the unprecedented magnitude and duration of the DOE's proposal, as well as the many unanswered questions about the number of shipments and the modal mix, the estimates of impacts described in these studies are preliminary. In order to refine these estimates, the Clark County Comprehensive Planning Department's Nuclear Waste Division is continuing to assess potential impacts. In addition, the County has implemented a Monitoring Program designed to capture changes to the social, environmental, and economic well-being of its residents resulting from the Yucca Mountain project and other significant events within the County. The Monitoring Program acts as an 'early warning system' that allows Clark County decision makers to respond proactively to impacts from the Yucca Mountain Project. (authors)
Variability between Clarke's angle and Chippaux-Smirak index for the diagnosis of flat feet
Gonzalez-Martin, Cristina; Seoane-Pillado, Teresa; Lopez-Calviño, Beatriz; Pertega-Diaz, Sonia; Gil-Guillen, Vicente
2017-01-01
Abstract Background: The measurements used in diagnosing biomechanical pathologies vary greatly. The aim of this study was to determine the concordance between Clarke's angle and Chippaux-Smirak index, and to determine the validity of Clarke's angle using the Chippaux-Smirak index as a reference. Methods: Observational study in a random population sample (n= 1,002) in A Coruña (Spain). After informed patient consent and ethical review approval, a study was conducted of anthropometric variables, Charlson comorbidity score, and podiatric examination (Clarke's angle and Chippaux-Smirak index). Descriptive analysis and multivariate logistic regression were performed. Results: The prevalence of flat feet, using a podoscope, was 19.0% for the left foot and 18.9% for the right foot, increasing with age. The prevalence of flat feet according to the Chippaux-Smirak index or Clarke's angle increases significantly, reaching 62.0% and 29.7% respectively. The concordance (kappa I) between the indices according to age groups varied between 0.25-0.33 (left foot) and 0.21-0.30 (right foot). The intraclass correlation coefficient (ICC) between the Chippaux-Smirak index and Clarke's angle was -0.445 (left foot) and -0.424 (right foot). After adjusting for age, body mass index (BMI), comorbidity score and gender, the only variable with an independent effect to predict discordance was the BMI (OR= 0.969; 95% CI: 0.940-0.998). Conclusion: There is little concordance between the indices studied for the purpose of diagnosing foot arch pathologies. In turn, Clarke's angle has a limited sensitivity in diagnosing flat feet, using the Chippaux-Smirak index as a reference. This discordance decreases with higher BMI values. PMID:28559643
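The kappa concordance reported above can be computed for two binary flat-foot classifications with a few lines (a generic Cohen's kappa sketch, not tied to the study's data):

```python
def cohens_kappa(a, b):
    """Cohen's kappa for two binary classifications (lists of 0/1 diagnoses).
    Undefined when chance agreement is exactly 1 (both raters constant)."""
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n   # observed agreement
    pa, pb = sum(a) / n, sum(b) / n              # positive rate of each index
    pe = pa * pb + (1 - pa) * (1 - pb)           # agreement expected by chance
    return (po - pe) / (1 - pe)
```

Values of 0.21-0.33, as found here between Clarke's angle and the Chippaux-Smirak index, indicate only fair agreement beyond chance.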
Variability between Clarke's angle and Chippaux-Smirak index for the diagnosis of flat feet.
Gonzalez-Martin, Cristina; Pita-Fernandez, Salvador; Seoane-Pillado, Teresa; Lopez-Calviño, Beatriz; Pertega-Diaz, Sonia; Gil-Guillen, Vicente
2017-03-30
The measurements used in diagnosing biomechanical pathologies vary greatly. The aim of this study was to determine the concordance between Clarke's angle and Chippaux-Smirak index, and to determine the validity of Clarke's angle using the Chippaux-Smirak index as a reference. Observational study in a random population sample (n= 1,002) in A Coruña (Spain). After informed patient consent and ethical review approval, a study was conducted of anthropometric variables, Charlson comorbidity score, and podiatric examination (Clarke's angle and Chippaux-Smirak index). Descriptive analysis and multivariate logistic regression were performed. The prevalence of flat feet, using a podoscope, was 19.0% for the left foot and 18.9% for the right foot, increasing with age. The prevalence of flat feet according to the Chippaux-Smirak index or Clarke's angle increases significantly, reaching 62.0% and 29.7% respectively. The concordance (kappa I) between the indices according to age groups varied between 0.25-0.33 (left foot) and 0.21-0.30 (right foot). The intraclass correlation coefficient (ICC) between the Chippaux-Smirak index and Clarke's angle was -0.445 (left foot) and -0.424 (right foot). After adjusting for age, body mass index (BMI), comorbidity score and gender, the only variable with an independent effect to predict discordance was the BMI (OR= 0.969; 95% CI: 0.940-0.998). There is little concordance between the indices studied for the purpose of diagnosing foot arch pathologies. In turn, Clarke's angle has a limited sensitivity in diagnosing flat feet, using the Chippaux-Smirak index as a reference. This discordance decreases with higher BMI values.
Dodge, Kent A.; Hornberger, Michelle I.; Dyke, Jessica
2007-01-01
Water, bed sediment, and biota were sampled in streams from Butte to below Milltown Reservoir as part of a long-term monitoring program in the upper Clark Fork basin; additional water-quality samples were collected in the Clark Fork basin from sites near Milltown Reservoir downstream to near the confluence of the Clark Fork and Flathead River as part of a supplemental sampling program. The sampling programs were conducted in cooperation with the U.S. Environmental Protection Agency to characterize aquatic resources in the Clark Fork basin of western Montana, with emphasis on trace elements associated with historic mining and smelting activities. Sampling sites were located on the Clark Fork and selected tributaries. Water-quality samples were collected periodically at 22 sites from October 2005 through September 2006. Bed-sediment and biological samples were collected once at 12 sites during August 2006. This report presents the analytical results and quality-assurance data for water-quality, bed-sediment, and biota samples collected at all long-term and supplemental monitoring sites from October 2005 through September 2006. Water-quality data include concentrations of selected major ions, trace ele-ments, and suspended sediment. Nutrients also were analyzed in the supplemental water-quality samples. Daily values of suspended-sed-iment concentration and suspended-sediment discharge were determined for four sites, and seasonal daily values of turbidity were determined for four sites. Bed-sediment data include trace-ele-ment concentrations in the fine-grained fraction. Bio-logical data include trace-element concentrations in whole-body tissue of aquatic benthic insects. Statistical summaries of long-term water-quality, bed-sediment, and biological data for sites in the upper Clark Fork basin are provided for the period of record since 1985.
Emerging DoD Role in the Interagency Counter Threat Finance Mission
2012-03-14
Policy magazine article entitled “ Follow the Money ,” by Stuart Levey and Christy Clark, suggests that the FATF should branch out beyond establishing...February 2008), 2. 23 Ibid. 24 Stuart Levey and Christy Clark, “ Follow the Money ,” Foreign Policy online, www.foreignpolicy.com/articles/2011/10/3...PA, November 29, 2011. 28 Stuart Levey and Christy Clark, “ Follow the Money ,” Foreign Policy online, www.foreignpolicy.com/articles/2011/10/3
A grid for a precise analysis of daily activities.
Wojtasik, V; Olivier, C; Lekeu, F; Quittre, A; Adam, S; Salmon, E
2010-01-01
Assessment of daily living activities is essential in patients with Alzheimer's disease. Most current tools quantitatively assess overall ability but provide little qualitative information on individual difficulties. Only a few tools allow therapists to evaluate stereotyped activities and record different types of errors. We capitalised on the Kitchen Activity Assessment to design a widely applicable analysis grid that provides both qualitative and quantitative data on activity performance. A cooking activity was videotaped in 15 patients with dementia and assessed according to the different steps in the execution of the task. The evaluations obtained with our grid showed good correlations between raters, between versions of the grid and between sessions. Moreover, the degree of independence obtained with our analysis of the task correlated with the Kitchen Activity Assessment score and with a global score of cognitive functioning. We conclude that assessment of a daily living activity with this analysis grid is reproducible and relatively independent of the therapist, and thus provides quantitative and qualitative information useful for both evaluating and caring for demented patients.
The large discretization step method for time-dependent partial differential equations
NASA Technical Reports Server (NTRS)
Haras, Zigo; Taasan, Shlomo
1995-01-01
A new method for the acceleration of linear and nonlinear time dependent calculations is presented. It is based on the Large Discretization Step (LDS) approximation, defined in this work, which employs an extended system of low accuracy schemes to approximate a high accuracy discrete approximation to a time dependent differential operator. Error bounds on such approximations are derived. These approximations are efficiently implemented in the LDS methods for linear and nonlinear hyperbolic equations, presented here. In these algorithms the high and low accuracy schemes are interpreted as the same discretization of a time dependent operator on fine and coarse grids, respectively. Thus, a system of correction terms and corresponding equations are derived and solved on the coarse grid to yield the fine grid accuracy. These terms are initialized by visiting the fine grid once in many coarse grid time steps. The resulting methods are very general, simple to implement and may be used to accelerate many existing time marching schemes.
Summation-by-Parts operators with minimal dispersion error for coarse grid flow calculations
NASA Astrophysics Data System (ADS)
Linders, Viktor; Kupiainen, Marco; Nordström, Jan
2017-07-01
We present a procedure for constructing Summation-by-Parts operators with minimal dispersion error both near and far from numerical interfaces. Examples of such operators are constructed and compared with a higher order non-optimised Summation-by-Parts operator. Experiments show that the optimised operators are superior for wave propagation and turbulent flows involving large wavenumbers, long solution times and large ranges of resolution scales.
NASA Astrophysics Data System (ADS)
Zhang, Yunju; Chen, Zhongyi; Guo, Ming; Lin, Shunsheng; Yan, Yinyang
2018-01-01
With the increasing capacity of the power system and the trend toward larger units and higher voltages, dispatching operations are becoming more frequent and complicated, and the probability of operation errors increases. To address the lack of anti-error functions, the single scheduling function, and the low working efficiency of technical support systems in regional regulation and integration, an integrated architecture for power-grid dispatching anti-error checking based on cloud computing is proposed. An integrated error-prevention system combining the Energy Management System (EMS) and the Operation Management System (OMS) has also been constructed. The system architecture has good scalability and adaptability; it can improve computational efficiency, reduce the cost of system operation and maintenance, and enhance the capability of regional regulation and anti-error checking, with broad development prospects.
Field comparison of optical and clark cell dissolved-oxygen sensors
Fulford, J.M.; Davies, W.J.; Garcia, L.
2005-01-01
Three multi-parameter water-quality monitors equipped with either Clark cell type or optical type dissolved-oxygen sensors were deployed for 30 days in a brackish (salinity <10 parts per thousand) environment to determine the sensitivity of the sensors to biofouling. The dissolved-oxygen sensors were compared periodically to a hand-held dissolved oxygen sensor, but were not serviced or cleaned during the deployment. One of the Clark cell sensors and the optical sensor performed similarly during the deployment. The remaining Clark cell sensor was not aged correctly prior to deployment and did not perform as well as the other sensors. All sensors experienced substantial biofouling that gradually degraded the accuracy of the dissolved-oxygen measurement during the last half of the deployment period. Copyright ASCE 2005.
76 FR 19117 - Missouri; Major Disaster and Related Determinations
Federal Register 2010, 2011, 2012, 2013, 2014
2011-04-06
..., Cedar, Chariton, Clark, Clinton, Cole, Cooper, Dade, Dallas, DeKalb, Grundy, Henry, Hickory, Howard..., Cass, Cedar, Chariton, Clark, Clinton, Cole, Dade, DeKalb, Grundy, Henry, Hickory, Howard, Johnson...
40 CFR 62.7130 - Identification of plan.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Clark County Department of Air Quality Management submitted on February 27, 2003, a letter certifying that there are no existing commercial/industrial solid waste incineration units in Clark County that...
NASA Astrophysics Data System (ADS)
Moslehi, M.; de Barros, F.; Rajagopal, R.
2014-12-01
Hydrogeological models that represent flow and transport in subsurface domains are usually large-scale with excessive computational complexity and uncertain characteristics. Uncertainty quantification for predicting flow and transport in heterogeneous formations often entails utilizing a numerical Monte Carlo framework, which repeatedly simulates the model according to a random field representing hydrogeological characteristics of the field. The physical resolution (e.g. grid resolution associated with the physical space) for the simulation is customarily chosen based on recommendations in the literature, independent of the number of Monte Carlo realizations. This practice may lead to either excessive computational burden or inaccurate solutions. We propose an optimization-based methodology that considers the trade-off between the following conflicting objectives: time associated with computational costs, statistical convergence of the model predictions, and physical errors corresponding to numerical grid resolution. In this research, we optimally allocate computational resources by developing a modeling framework for the overall error based on a joint statistical and numerical analysis and optimizing the error model subject to a given computational constraint. The derived expression for the overall error explicitly takes into account the joint dependence between the discretization error of the physical space and the statistical error associated with Monte Carlo realizations. The accuracy of the proposed framework is verified in this study by applying it to several computationally extensive examples. This framework helps hydrogeologists achieve the optimum physical and statistical resolutions that minimize the error within a given computational budget. Moreover, the influence of the available computational resources and the geometric properties of the contaminant source zone on the optimum resolutions is investigated. We conclude that the computational cost associated with optimal allocation can be substantially reduced compared with prevalent recommendations in the literature.
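The trade-off the authors optimize can be illustrated with a toy error model, E(h, N) = C_d·h^p + C_s/√N, with cost N/h^dim bounded by a budget; all constants and the grid-search range below are illustrative assumptions, not values from the study:

```python
import math

def optimal_resolution(budget, C_d=1.0, p=2, C_s=1.0, dim=2):
    """Grid-search the grid spacing h; spend the remaining budget on
    Monte Carlo realisations N; keep the (h, N) pair minimizing the
    combined discretization + statistical error."""
    best = None
    for k in range(1, 200):
        h = 1.0 / k                              # candidate grid spacing
        N = max(1, int(budget * h ** dim))       # realisations affordable at this h
        err = C_d * h ** p + C_s / math.sqrt(N)  # discretization + sampling error
        if best is None or err < best[0]:
            best = (err, h, N)
    return best  # (total error, grid spacing, realisations)
```

Coarsening the grid buys more realisations, so the optimum balances the two error terms rather than pushing either resolution to its limit.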
Application Of Multi-grid Method On China Seas' Temperature Forecast
NASA Astrophysics Data System (ADS)
Li, W.; Xie, Y.; He, Z.; Liu, K.; Han, G.; Ma, J.; Li, D.
2006-12-01
Correlation scales have been used in the traditional scheme of 3-dimensional variational (3D-Var) data assimilation to estimate the background error covariance for the numerical forecast and reanalysis of atmosphere and ocean for decades. However, this scheme still has some drawbacks. First, the correlation scales are difficult to determine accurately. Second, the positive definiteness of the first-guess error covariance matrix cannot be guaranteed unless the correlation scales are sufficiently small. Xie et al. (2005) indicated that a traditional 3D-Var only corrects errors of certain wavelengths and that its accuracy depends on the accuracy of the first-guess covariance. In general, short-wavelength errors cannot be well corrected until long-wavelength errors are corrected, so an inaccurate first-guess covariance may mistakenly treat long-wave errors as short-wave ones and result in an erroneous analysis. For the purpose of quickly minimizing the errors of long and short waves successively, a new 3D-Var data assimilation scheme, called the multi-grid data assimilation scheme, is proposed in this paper. By assimilating shipboard SST and temperature-profile data into a numerical model of the China Seas, we applied this scheme in a two-month data assimilation and forecast experiment with favorable results. Compared with the traditional 3D-Var scheme, the new scheme has higher forecast accuracy and a lower forecast root-mean-square (RMS) error. Furthermore, this scheme was applied to assimilate shipboard SST, AVHRR Pathfinder Version 5.0 SST, and temperature profiles simultaneously in a ten-month forecast experiment on the sea temperature of the China Seas, in which a successful forecast result was obtained. In particular, the new scheme demonstrated great numerical efficiency in these analyses.
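The multi-grid idea, correcting long-wavelength errors on a coarse grid before short-wavelength errors on finer ones, can be sketched in one dimension (a toy reconstruction with made-up grid levels, not the operational China Seas system):

```python
import numpy as np

def multigrid_analysis(x_obs, y_obs, grid_levels=(3, 9, 27), domain=(0.0, 1.0)):
    """Toy multi-grid analysis: fit corrections to observations on
    successively finer grids, so long-wavelength errors are removed
    before short-wavelength ones. Returns a callable analysis field."""
    a, b = domain
    corrections = []
    resid = np.asarray(y_obs, dtype=float).copy()
    for n in grid_levels:
        nodes = np.linspace(a, b, n)
        # least-squares fit of a piecewise-linear field (hat-function basis)
        # to the residuals left by the coarser levels
        A = np.array([np.interp(x_obs, nodes, np.eye(n)[j]) for j in range(n)]).T
        coef, *_ = np.linalg.lstsq(A, resid, rcond=None)
        corrections.append((nodes, coef))
        resid = resid - A @ coef          # pass remaining error to the finer grid
    def analysis(x):
        return sum(np.interp(x, nodes, coef) for nodes, coef in corrections)
    return analysis
```

Each level fits only what the previous, coarser level left behind, which is why long-wave errors cannot be mistaken for short-wave ones during the correction.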
Accuracy and speed in computing the Chebyshev collocation derivative
NASA Technical Reports Server (NTRS)
Don, Wai-Sun; Solomonoff, Alex
1991-01-01
We studied several algorithms for computing the Chebyshev spectral derivative and compared their roundoff error. For a large number of collocation points, the elements of the Chebyshev differentiation matrix, if constructed in the usual way, are not computed accurately. A subtle cause is found to account for the poor accuracy when computing the derivative by the matrix-vector multiplication method. Methods for accurately computing the elements of the matrix are presented, and we find that if the entries of the matrix are computed accurately, the roundoff error of the matrix-vector multiplication is as small as that of the transform-recursion algorithm. Results of CPU time usage are shown for several different algorithms for computing the derivative by the Chebyshev collocation method for a wide variety of two-dimensional grid sizes on both an IBM and a Cray 2 computer. We found that which algorithm is fastest on a particular machine depends not only on the grid size, but also on small details of the computer hardware. For most practical grid sizes used in computation, the even-odd decomposition algorithm is found to be faster than the transform-recursion method.
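For reference, the standard construction of the first-derivative Chebyshev collocation matrix (following Trefethen's well-known cheb routine) computes the diagonal by the "negative sum trick", one common remedy for the roundoff problem discussed above:

```python
import numpy as np

def cheb_diff_matrix(n):
    """First-derivative Chebyshev collocation matrix on the Gauss-Lobatto
    points x_j = cos(pi*j/n), j = 0..n. The diagonal is set by the
    'negative sum trick' so that each row sums to zero, which improves
    roundoff behaviour of the matrix-vector derivative."""
    x = np.cos(np.pi * np.arange(n + 1) / n)
    c = np.ones(n + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(n + 1)
    X = x[:, None] - x[None, :]
    D = (c[:, None] / c[None, :]) / (X + np.eye(n + 1))  # off-diagonal entries
    D -= np.diag(D.sum(axis=1))   # D_ii = -sum_{j != i} D_ij (negative sum trick)
    return D, x
```

Because derivatives of constants must vanish, forcing zero row sums cancels much of the cancellation error that plagues the naively computed diagonal.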
Wang, Jingang; Gao, Can; Yang, Jie
2014-07-17
Currently available traditional electromagnetic voltage sensors fail to meet the measurement requirements of the smart grid, because of low accuracy in the static and dynamic ranges and the occurrence of ferromagnetic resonance attributed to overvoltage and output short circuit. This work develops a new non-contact high-bandwidth voltage measurement system for power equipment. This system aims at the miniaturization and non-contact measurement of the smart grid. After traditional D-dot voltage probe analysis, an improved method is proposed. For the sensor to work in a self-integrating pattern, the differential input pattern is adopted for circuit design, and grounding is removed. To prove the structure design, circuit component parameters, and insulation characteristics, Ansoft Maxwell software is used for the simulation. Moreover, the new probe was tested on a 10 kV high-voltage test platform for steady-state error and transient behavior. Experimental results ascertain that the root mean square values of measured voltage are precise and that the phase error is small. The D-dot voltage sensor not only meets the requirement of high accuracy but also exhibits satisfactory transient response. This sensor can meet the intelligence, miniaturization, and convenience requirements of the smart grid.
"Tools For Analysis and Visualization of Large Time- Varying CFD Data Sets"
NASA Technical Reports Server (NTRS)
Wilhelms, Jane; vanGelder, Allen
1999-01-01
During the four years of this grant (including the one-year extension), we explored many aspects of the visualization of large CFD (Computational Fluid Dynamics) datasets. These have included new direct volume rendering approaches, hierarchical methods, volume decimation, error metrics, parallelization, hardware texture mapping, and methods for analyzing and comparing images. First, we implemented an extremely general direct volume rendering approach that can be used to render rectilinear, curvilinear, or tetrahedral grids, including overlapping multiple-zone grids and time-varying grids. Next, we developed techniques for associating the sample data with a k-d tree (a simple hierarchical data model) to approximate samples in the regions covered by each node of the tree, together with an error metric for the accuracy of the model. We also explored a new method for determining the accuracy of approximate models based on the light field method described at ACM SIGGRAPH (Association for Computing Machinery Special Interest Group on Computer Graphics) '96. In our initial implementation, we automatically image the volume from 32 approximately evenly distributed positions on the surface of an enclosing tessellated sphere. We then calculate differences between these images under different conditions of volume approximation or decimation.
NASA Astrophysics Data System (ADS)
Pathiraja, S. D.; van Leeuwen, P. J.
2017-12-01
Model uncertainty quantification remains one of the central challenges of effective Data Assimilation (DA) in complex, partially observed non-linear systems. Stochastic parameterization methods have been proposed in recent years as a means of capturing the uncertainty associated with unresolved sub-grid scale processes. Such approaches generally require some knowledge of the true sub-grid scale process or rely on full observations of the larger-scale resolved process. We present a methodology for estimating the statistics of sub-grid scale processes using only partial observations of the resolved process. It finds model error realisations over a training period by minimizing their conditional variance, constrained by available observations. A distinctive feature is that these realisations are binned conditional on the previous model state during the minimization process, allowing for the recovery of complex error structures. The efficacy of the approach is demonstrated through numerical experiments on the multi-scale Lorenz '96 model. We consider different parameterizations of the model with both small and large time-scale separations between slow and fast variables. Results are compared to two existing methods for accounting for model uncertainty in DA and shown to provide improved analyses and forecasts.
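The binning of error realisations conditioned on the previous model state can be sketched as follows (a minimal 1-D illustration of the conditioning step only; the paper's variance-minimization estimation is not reproduced here):

```python
import numpy as np

def binned_error_stats(prev_state, model_error, n_bins=10):
    """Bin model-error realisations conditional on the previous model state
    and return per-bin mean and variance -- a minimal sketch of recovering
    state-dependent error structure (1-D state assumed for illustration)."""
    prev_state = np.asarray(prev_state, dtype=float)
    model_error = np.asarray(model_error, dtype=float)
    edges = np.linspace(prev_state.min(), prev_state.max(), n_bins + 1)
    idx = np.clip(np.digitize(prev_state, edges) - 1, 0, n_bins - 1)
    means = np.array([model_error[idx == k].mean() if np.any(idx == k) else 0.0
                      for k in range(n_bins)])
    variances = np.array([model_error[idx == k].var() if np.any(idx == k) else 0.0
                          for k in range(n_bins)])
    return edges, means, variances
```

In a DA cycle these per-bin statistics could then drive a state-dependent stochastic forcing instead of a single global error variance.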
Numerical Simulations For the F-16XL Aircraft Configuration
NASA Technical Reports Server (NTRS)
Elmiligui, Alaa A.; Abdol-Hamid, Khaled; Cavallo, Peter A.; Parlette, Edward B.
2014-01-01
Numerical simulations of flow around the F-16XL are presented as a contribution to the Cranked Arrow Wing Aerodynamic Project International II (CAWAPI-II). The NASA Tetrahedral Unstructured Software System (TetrUSS) is used to perform numerical simulations. This CFD suite, developed and maintained by NASA Langley Research Center, includes an unstructured grid generation program called VGRID, a postprocessor named POSTGRID, and the flow solver USM3D. The CRISP CFD package is utilized to provide error estimates and grid adaption for verification of USM3D results. A subsonic, high angle-of-attack case, flight condition (FC) 25, is computed and analyzed. Three turbulence models are used in the calculations: the one-equation Spalart-Allmaras (SA) model, the two-equation shear stress transport (SST) model, and the k-ε turbulence model. Computational results and surface static pressure profiles are presented and compared with flight data. Solution verification is performed using formal grid refinement studies, the solution of Error Transport Equations, and adaptive mesh refinement. The current study shows that the USM3D solver coupled with CRISP CFD can be used in an engineering environment to predict vortex-flow physics on a complex configuration at flight Reynolds numbers.
Optimizing Screening and Risk Assessment for Suicide Risk in the U.S. Military
2015-03-01
J., Scheftner, W. A., Fogg , L., Clark, D. C., Young, M. A., Hedeker, D., et al. (1990). Time-related predictors of suicide in major affective...stress. Psychophysiology, 31, 113-128. Cannon, W.B. (1932). The Wisdom of the Body. New York: Norton. Fawcett, J., Scheftner, W. A., Fogg , L., Clark...Fawcett, J., Scheftner, W. A., Fogg , L., Clark, D. C., Young, M. A., Hedeker, D., & Gibbons, R. (1990). Time-related predictors of suicide in major
1987-10-01
effectiveness of a particular health care provider or treatment procedure (Doering, 1983; Ben-Sira, 1983; Kotler & Clarke, 1986). Instead, much of a...criteria for quality of care ( Kotler & Clarke, 1986). There is great potential for positively influencing a patient’s percep- tion of care through a...important quality of care indicator for patients (Doering, 1983; Kotler & Clarke, 1986). Personal attention of the patient must remain a primary
Derivation of Human Lethal Doses
2006-01-19
1956; Blair, 1961; Mason et al., 1965; Clarke , 1969; Cretney, 1976; Gray et al., 1985). Gordon reported blood level of 5 mg/100ml in a victim. The...LD50 ( Clark et al., 1979). This is a primary source for the value. No LD50 for mouse Clinical Management of Poisoning and Drug Overdose 100 mL...Verschueren (2001) lists an oral LD50 range in various mammalian species as 30-112 mg/kg based on Clark et al., (1966 as cited in Verschueren, 2001
1986-06-12
Sacramento, California (1977). 18 Philip Kotler and Roberta N. Clarke, "Creating the Responsive Organization." Healthcare Forum 2 (3) (May/June 1986), p. 30...19 Philip Kotler and Roberta N. Clarke, "Creating the Responsive Organization." Healthcare Forum 2 (3) (May/June 1986), p. 32. 20 Thomas J. Peters...and Robert H.Waterman, Jr. IarchQo xcelenc. New York: Warner Books, 1984. 21 Philip Kotler and Roberta N. Clarke, "Creating the Responsive Organization
1993-02-01
recent literature is telling us. Hospitals & Health Services Administration, Special II, pp 67-84. Kotler , Philip , Clarke, R.N. (1987). Marketing for...only a small portion of a marketing plan. Kotler and Clarke (1987) state that a marketer would define marketing as follows: "Marketing is the analysis...This uniqueness of the services is part of the problem. Kotler and Clark (1987) state in their text that the first step in marketing any service is to
36 CFR 13.1602 - Subsistence resident zone.
Code of Federal Regulations, 2011 CFR
2011-07-01
... INTERIOR NATIONAL PARK SYSTEM UNITS IN ALASKA Special Regulations-Lake Clark National Park and Preserve... resident zone for Lake Clark National Park: Iliamna, Lime Village, Newhalen, Nondalton, Pedro Bay, and Port...
36 CFR 13.1602 - Subsistence resident zone.
Code of Federal Regulations, 2010 CFR
2010-07-01
... INTERIOR NATIONAL PARK SYSTEM UNITS IN ALASKA Special Regulations-Lake Clark National Park and Preserve... resident zone for Lake Clark National Park: Iliamna, Lime Village, Newhalen, Nondalton, Pedro Bay, and Port...
Advanced grid-stiffened composite shells for applications in heavy-lift helicopter rotor blade spars
NASA Astrophysics Data System (ADS)
Narayanan Nampy, Sreenivas
Modern rotor blades are constructed using composite materials to exploit their superior structural performance compared to metals. Helicopter rotor blade spars are conventionally designed as monocoque structures. Blades of the proposed Heavy Lift Helicopter are envisioned to be as heavy as 800 lbs when designed using the monocoque spar design. A new and innovative design is proposed to replace the conventional spar designs with light weight grid-stiffened composite shell. Composite stiffened shells have been known to provide excellent strength to weight ratio and damage tolerance with an excellent potential to reduce weight. Conventional stringer--rib stiffened construction is not suitable for rotor blade spars since they are limited in generating high torsion stiffness that is required for aeroelastic stability of the rotor. As a result, off-axis (helical) stiffeners must be provided. This is a new design space where innovative modeling techniques are needed. The structural behavior of grid-stiffened structures under axial, bending, and torsion loads, typically experienced by rotor blades need to be accurately predicted. The overall objective of the present research is to develop and integrate the necessary design analysis tools to conduct a feasibility study in employing grid-stiffened shells for heavy-lift rotor blade spars. Upon evaluating the limitations in state-of-the-art analytical models in predicting the axial, bending, and torsion stiffness coefficients of grid and grid-stiffened structures, a new analytical model was developed. The new analytical model based on the smeared stiffness approach was developed employing the stiffness matrices of the constituent members of the grid structure such as an arch, helical, or straight beam representing circumferential, helical, and longitudinal stiffeners. This analysis has the capability to model various stiffening configurations such as angle-grid, ortho-grid, and general-grid. 
Analyses were performed using an existing state-of-the-art model and the newly developed model to predict the torsion, bending, and axial stiffness of grid and grid-stiffened structures with various stiffening configurations. These predictions were compared to results generated using finite element analysis (FEA), showing excellent correlation (within 6%) for a range of parameters for grid and grid-stiffened structures such as grid density, stiffener angle, and aspect ratio of the stiffener cross-section. Experimental results from cylindrical grid specimen testing were compared with analytical predictions using the new analysis. The new analysis predicted stiffness coefficients with nearly 7% error compared to FEA results. From the parametric studies conducted, it was observed that the previous state-of-the-art analysis, on the other hand, exhibited errors of the order of 39% for certain designs. Stability evaluations were also conducted by integrating the new analysis with established stability formulations. A design study was conducted to evaluate the potential weight savings of a simple grid-stiffened rotor blade spar structure compared to a baseline monocoque design. Various design constraints such as stiffness, strength, and stability were imposed. A manual search was conducted for design parameters such as stiffener density, stiffener angle, shell laminate, and stiffener aspect ratio that provide lightweight grid-stiffened designs compared to the baseline. It was found that a weight saving of 9.1% compared to the baseline is possible without violating any of the design constraints.
Code of Federal Regulations, 2011 CFR
2011-04-01
... intersection with Paraiso Road. (12) Then south following Paraiso Road to the intersection with Clark Road. (13) Then east-northeasterly along Clark Road for approximately 1,000 feet to its intersection with an...
Code of Federal Regulations, 2010 CFR
2010-04-01
... intersection with Paraiso Road. (12) Then south following Paraiso Road to the intersection with Clark Road. (13) Then east-northeasterly along Clark Road for approximately 1,000 feet to its intersection with an...
Steve Clark is an environmental engineer in EPA’s National Homeland Security Research Center (NHSRC). His research focuses on water security, exploring ways to protect and decontaminate pipes and other water “infrastructure.”
Find an Endocrinology - Thyroid Specialist
NASA Astrophysics Data System (ADS)
Abe, O. E.; Otero Villamide, X.; Paparini, C.; Radicella, S. M.; Nava, B.
2017-02-01
Investigating the effects of the Equatorial Ionization Anomaly (EIA) ionosphere and space weather on Global Navigation Satellite Systems (GNSS) is crucial, and a key to successful implementation of a GNSS satellite-based augmentation system (SBAS) over the equatorial and low-latitude regions. A possible ionospheric vertical delay (GIVD, Grid Ionospheric Vertical Delay) broadcast at an Ionospheric Grid Point (IGP) and its confidence bound error (GIVE, Grid Ionospheric Vertical Error) are analyzed and compared with the ionospheric vertical delay estimated at a nearby user location over the West African sub-Saharan region. Because the African sub-Saharan ionosphere falls within the EIA region, which is characterized by irregularities after sunset that, unlike at middle latitudes, are even stronger during geomagnetically quiet conditions, a reliable ionospheric threat model that caters for nighttime ionospheric plasma irregularities is essential for future SBAS users. The study was done during the quietest and most disturbed geomagnetic conditions of October 2013. A specific low-latitude EGNOS-like algorithm, based on a single thin-layer model, was employed to simulate the SBAS message in the study. Our preliminary results indicate that the estimated GIVE detects and protects a potential SBAS user against sampled ionospheric plasma irregularities over the region, with a steep increase in GIVE, up to the 'not monitored' flag, from after local sunset to post-midnight. This corresponds to the onset of the usual ionospheric plasma irregularities in the region. The results further confirm that the effects of geomagnetic storms on the ionosphere are not consistent in affecting GNSS applications over the region. Finally, this paper suggests further work to improve the threat integrity model and thereby enhance the availability of a future SBAS over the African sub-Saharan region.
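For context, an SBAS user interpolates the broadcast GIVDs at the surrounding IGPs to its ionospheric pierce point. A minimal sketch of the square-cell bilinear case follows (a 5-degree cell and MOPS-style weights are assumed; triangular cells and band edge cases are omitted):

```python
def interpolate_givd(lat, lon, grid):
    """Bilinear interpolation of Grid Ionospheric Vertical Delay at a user
    ionospheric pierce point from the four surrounding IGPs of a 5x5 degree
    cell. `grid` maps (lat_igp, lon_igp) -> GIVD in metres."""
    lat0, lon0 = 5 * (lat // 5), 5 * (lon // 5)   # south-west IGP of the cell
    x = (lon - lon0) / 5.0                        # fractional position in the cell
    y = (lat - lat0) / 5.0
    # bilinear weights for the SW, SE, NW, NE corners
    return ((1 - x) * (1 - y) * grid[(lat0, lon0)]
            + x * (1 - y) * grid[(lat0, lon0 + 5)]
            + (1 - x) * y * grid[(lat0 + 5, lon0)]
            + x * y * grid[(lat0 + 5, lon0 + 5)])
```

In an SBAS, if any needed IGP is flagged "not monitored" (as happens post-sunset in this study), the interpolation, and hence the user's ionospheric correction, becomes unavailable.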
Clinical study using novel endoscopic system for measuring size of gastrointestinal lesion
Oka, Kiyoshi; Seki, Takeshi; Akatsu, Tomohiro; Wakabayashi, Takao; Inui, Kazuo; Yoshino, Junji
2014-01-01
AIM: To verify the performance of a lesion size measurement system through a clinical study. METHODS: Our proposed system, which consists of a conventional endoscope, an optical device, an optical probe, and a personal computer, generates a grid scale for measuring lesion size on an endoscopic image. The width of the grid scale is continuously adjusted according to the distance between the tip of the endoscope and the lesion, because the apparent lesion size on an endoscopic image changes with that distance. The shape of the grid scale was corrected to match the distortion of the endoscopic image. The distance was calculated from the amount of laser light reflected from the lesion through an optical probe inserted into the instrument channel of the endoscope. The endoscopist can thus measure the lesion size without contact by comparing the lesion with the grid scale on the endoscopic image. (1) A basic test was performed to verify the relationship between the measurement error eM and the tilt angle α of the endoscope; and (2) the sizes of three colon polyps were measured using our system during endoscopy and then measured with a scale immediately after their removal. RESULTS: There was no error at α = 0°. The values of eM (mean ± SD) were 0.24 ± 0.11 mm (α = 10°), 0.90 ± 0.58 mm (α = 20°), and 2.31 ± 1.41 mm (α = 30°). These results confirm that our system measures accurately when the tilt angle is less than 20°. The measurement error was approximately 1 mm in the clinical study, so we conclude that the proposed measurement system is also effective in clinical examinations. CONCLUSION: By combining simple optical equipment with a conventional endoscope, a quick and accurate system for measuring lesion size was established. PMID:24744595
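The distance-dependent grid scale described above can be sketched as follows. The camera constant k and the foreshortening model are illustrative assumptions, not the authors' calibration; they only show why the grid width must track the ranged distance and why tilt introduces error.

```python
import math

# Illustrative sketch (not the authors' implementation): the on-screen width
# of a fixed real-world length shrinks in proportion to 1/distance, so the
# grid scale is re-computed from the laser-ranged tip-to-lesion distance.

def grid_pixels(real_mm, distance_mm, k=500.0):
    """Pixels spanned by `real_mm` at range `distance_mm`; k is a
    hypothetical camera constant fixed by a one-time calibration."""
    return k * real_mm / distance_mm

def tilt_error_mm(size_mm, alpha_deg):
    """Foreshortening when the lesion plane is tilted by alpha: the projected
    size underestimates the true size by size * (1 - cos(alpha))."""
    return size_mm * (1 - math.cos(math.radians(alpha_deg)))
```

Under this simple model a 10 mm lesion viewed at a 20° tilt reads about 0.6 mm small, of the same order as the eM values reported, and the error vanishes at α = 0°, consistent with the basic test.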
A self-adaptive-grid method with application to airfoil flow
NASA Technical Reports Server (NTRS)
Nakahashi, K.; Deiwert, G. S.
1985-01-01
A self-adaptive-grid method is described that is suitable for multidimensional steady and unsteady computations. Based on variational principles, a spring analogy is used to redistribute grid points in an optimal sense to reduce the overall solution error. User-specified parameters, denoting both maximum and minimum permissible grid spacings, are used to define the all-important constants, thereby minimizing the empiricism and making the method self-adaptive. Operator splitting and one-sided controls for orthogonality and smoothness are used to make the method practical, robust, and efficient. Examples are included for both steady and unsteady viscous flow computations about airfoils in two dimensions, as well as for a steady inviscid flow computation and a one-dimensional case. These examples illustrate the precise control the user has with the self-adaptive method and demonstrate a significant improvement in accuracy and quality of the solutions.
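A minimal one-dimensional sketch of the spring-analogy idea: grid points are connected by springs whose stiffness grows with the local solution gradient, so points cluster where the error indicator is large. The stiffness law and relaxation constants here are illustrative, not those of Nakahashi and Deiwert.

```python
# 1-D spring-analogy grid adaptation sketch (illustrative constants).
# Interior points relax toward the force balance
#   k[i-1] * (x[i] - x[i-1]) = k[i] * (x[i+1] - x[i]),
# so equilibrium spacing is inversely proportional to stiffness.

def adapt_grid(x, f, iters=50, relax=0.5):
    n = len(x)
    for _ in range(iters):
        # spring stiffness per interval: 1 + |df/dx| (assumed weight function)
        k = [1.0 + abs((f(x[i + 1]) - f(x[i])) / (x[i + 1] - x[i]))
             for i in range(n - 1)]
        # move each interior point toward the stiffness-weighted mean
        # of its neighbours; endpoints stay fixed
        for i in range(1, n - 1):
            target = (k[i - 1] * x[i - 1] + k[i] * x[i + 1]) / (k[i - 1] + k[i])
            x[i] += relax * (target - x[i])
    return x
```

Run on a profile with a steep ramp, the points concentrate around the ramp while the total domain and the point ordering are preserved, which is the qualitative behaviour the variational method formalizes (with user-specified minimum and maximum spacings acting as constraints this sketch omits).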
Accurate, robust and reliable calculations of Poisson-Boltzmann binding energies
Nguyen, Duc D.; Wang, Bao
2017-01-01
The Poisson-Boltzmann (PB) model is one of the most popular implicit solvent models in biophysical modeling and computation. The ability to provide accurate and reliable PB estimates of the electrostatic solvation free energy, ΔGel, and binding free energy, ΔΔGel, is important to computational biophysics and biochemistry. In this work, we investigate the grid dependence of our PB solver (MIBPB) with solvent-excluded surfaces (SESs) for estimating both electrostatic solvation free energies and electrostatic binding free energies. It is found that the relative absolute error of ΔGel obtained at a grid spacing of 1.0 Å, compared with ΔGel at 0.2 Å and averaged over 153 molecules, is less than 0.2%. Our results indicate that a grid spacing of 0.6 Å ensures accuracy and reliability in ΔΔGel calculations. In fact, a grid spacing of 1.1 Å appears to deliver adequate accuracy for high-throughput screening. PMID:28211071
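The grid-dependence metric quoted above, the relative absolute error of coarse-grid energies against a fine-grid reference averaged over a molecule set, can be sketched as below; the energies in the test usage are hypothetical stand-ins, not MIBPB output.

```python
# Mean relative absolute error of solvation energies computed at a coarse
# grid spacing versus a fine reference spacing, averaged over molecules.

def mean_relative_abs_error(coarse, fine):
    """coarse, fine: per-molecule energies (e.g. ΔG_el in kcal/mol)."""
    errs = [abs(c - f) / abs(f) for c, f in zip(coarse, fine)]
    return sum(errs) / len(errs)
```

For example, two molecules with fine-grid energies of -100.0 and -50.0 and coarse-grid energies of -100.5 and -50.2 give a mean relative absolute error of 0.45%, the kind of sub-0.2%-per-molecule figure the abstract reports at 1.0 Å.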
An Exact Dual Adjoint Solution Method for Turbulent Flows on Unstructured Grids
NASA Technical Reports Server (NTRS)
Nielsen, Eric J.; Lu, James; Park, Michael A.; Darmofal, David L.
2003-01-01
An algorithm for solving the discrete adjoint system based on an unstructured-grid discretization of the Navier-Stokes equations is presented. The method is constructed such that an adjoint solution exactly dual to a direct differentiation approach is recovered at each time step, yielding a convergence rate which is asymptotically equivalent to that of the primal system. The new approach is implemented within a three-dimensional unstructured-grid framework and results are presented for inviscid, laminar, and turbulent flows. Improvements to the baseline solution algorithm, such as line-implicit relaxation and a tight coupling of the turbulence model, are also presented. By storing nearest-neighbor terms in the residual computation, the dual scheme is computationally efficient, while requiring twice the memory of the flow solution. The scheme is expected to have a broad impact on computational problems related to design optimization as well as error estimation and grid adaptation efforts.
A Sensemaking Perspective on Situation Awareness in Power Grid Operations
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greitzer, Frank L.; Schur, Anne; Paget, Mia L.
2008-07-21
With increasing complexity and interconnectivity of the electric power grid, the scope and complexity of grid operations continues to grow. New paradigms are needed to guide research to improve operations by enhancing situation awareness of operators. Research on human factors/situation awareness is described within a taxonomy of tools and approaches that address different levels of cognitive processing. While user interface features and visualization approaches represent the predominant focus of human factors studies of situation awareness, this paper argues that a complementary level, sensemaking, deserves further consideration by designers of decision support systems for power grid operations. A sensemaking perspective on situation awareness may reveal new insights that complement ongoing human factors research, where the focus of the investigation of errors is to understand why the decision makers experienced the situation the way they did, or why what they saw made sense to them at the time.
40 CFR 52.776 - Control strategy: Particulate matter.
Code of Federal Regulations, 2010 CFR
2010-07-01
... Approval—The complete Indiana plan for Clark, Dearborn, Dubois, Marion (except for coke batteries), St..., Vandenburgh County; 6-1-17, Clark County; 6-1-18, St. Joseph County; 6-2, Particulate Emissions Limitations...
40 CFR 52.1879 - Review of new sources and modifications.
Code of Federal Regulations, 2010 CFR
2010-07-01
... approval exempts the Lucas, Wood, Clark, Greene, Miami, and Montgomery Counties from the requirements to.... Upon final approval of this exemption, the Clark, Greene, Miami, and Montgomery Counties shall not be...
40 CFR 52.776 - Control strategy: Particulate matter.
Code of Federal Regulations, 2011 CFR
2011-07-01
... Approval—The complete Indiana plan for Clark, Dearborn, Dubois, Marion (except for coke batteries), St..., Vandenburgh County; 6-1-17, Clark County; 6-1-18, St. Joseph County; 6-2, Particulate Emissions Limitations...
40 CFR 52.1879 - Review of new sources and modifications.
Code of Federal Regulations, 2011 CFR
2011-07-01
... approval exempts the Lucas, Wood, Clark, Greene, Miami, and Montgomery Counties from the requirements to.... Upon final approval of this exemption, the Clark, Greene, Miami, and Montgomery Counties shall not be...
Fort Clatsop : Review of Summer 2005 Operations.
DOT National Transportation Integrated Search
2005-09-30
In anticipation of increased visitation for the Lewis & Clark Bicentennial, Lewis & Clark National Historical Park (LEWI) implemented a remote parking (Netul Landing) and alternative transportation system involving both a shuttle and area transit rou...
21. VIEW OF CLARK OXYGEN BOOSTER COMPRESSOR IN THE HIGH ...
21. VIEW OF CLARK OXYGEN BOOSTER COMPRESSOR IN THE HIGH PURITY OXYGEN BUILDING LOOKING SOUTH. - U.S. Steel Duquesne Works, Fuel & Utilities Plant, Along Monongahela River, Duquesne, Allegheny County, PA
The Impact of the Grid Size on TomoTherapy for Prostate Cancer
Kawashima, Motohiro; Kawamura, Hidemasa; Onishi, Masahiro; Takakusagi, Yosuke; Okonogi, Noriyuki; Okazaki, Atsushi; Sekihara, Tetsuo; Ando, Yoshitaka; Nakano, Takashi
2017-01-01
Discretization errors due to the digitization of computed tomography images and the calculation grid are a significant issue in radiation therapy. Such errors have been quantitatively reported for fixed multifield intensity-modulated radiation therapy using traditional linear accelerators. The aim of this study is to quantify the influence of the calculation grid size on the dose distribution in TomoTherapy. This study used ten treatment plans for prostate cancer. The final dose calculation was performed with "fine" (2.73 mm) and "normal" (5.46 mm) grid sizes. The dose distributions were compared from several points of view: the dose-volume histogram (DVH) parameters for the planning target volume (PTV) and organs at risk (OARs), various indices, and dose differences. The DVH parameters used were Dmax, D2%, D2cc, Dmean, D95%, D98%, and Dmin for the PTV and Dmax, D2%, and D2cc for the OARs. The indices used for plan evaluation were the homogeneity index and the equivalent uniform dose. Almost all DVH parameters for the "fine" calculations tended to be higher than those for the "normal" calculations. The largest difference in DVH parameters was in Dmax for the PTV and in rectal D2cc for the OARs. The mean difference in Dmax was 3.5%, and rectal D2cc increased by up to 6% at maximum and 2.9% on average. The mean difference in D95% for the PTV was the smallest among the DVH parameters. For each index, a paired t-test was used to determine whether there was a significant difference between the two grid sizes; significant differences were found for most indices. The dose difference between the "fine" and "normal" calculations was also evaluated: some points around high-dose regions differed by more than 5% of the prescription dose. The influence of the calculation grid size in TomoTherapy is smaller than in traditional linear accelerators, but the difference is nevertheless significant. We recommend calculating the final dose using the "fine" grid size. PMID:28974860
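The paired t-test used for the per-plan index comparisons can be sketched with the standard-library formula below; the numbers in the test usage are illustrative, not the study's data.

```python
import math

# Paired t-test statistic for matched observations, e.g. an index (such as
# PTV Dmax) computed for the same plans on the "fine" and "normal" grids.
# The resulting t is compared against the t distribution with n-1 degrees
# of freedom to obtain a p-value (not computed here, stdlib only).

def paired_t(a, b):
    d = [x - y for x, y in zip(a, b)]
    n = len(d)
    mean = sum(d) / n
    var = sum((x - mean) ** 2 for x in d) / (n - 1)  # sample variance
    return mean / math.sqrt(var / n)
```

In practice one would use a statistics library (e.g. SciPy's `ttest_rel`) to get the p-value directly; the hand-rolled version just makes the computation behind "paired t-test" explicit.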
The event notification and alarm system for the Open Science Grid operations center
NASA Astrophysics Data System (ADS)
Hayashi, S.; Teige, S.; Quick, R.
2012-12-01
The Open Science Grid (OSG) Operations Team operates a distributed set of services and tools that enable the utilization of the OSG by several HEP projects. Without these services, users of the OSG would not be able to run jobs, locate resources, obtain information about the status of systems, or generally use the OSG. For this reason these services must be highly available. This paper describes the automated monitoring and notification systems used to diagnose and report problems. Described here are the means used by OSG Operations to monitor systems such as physical facilities, network operations, server health, service availability, and software error events. Once detected, an error condition generates a message sent to, for example, email, SMS, Twitter, an instant-message server, etc. The mechanism being developed to integrate these monitoring systems into a prioritized and configurable alarming system is emphasized.
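A hypothetical sketch of a prioritized, configurable dispatcher of the kind described, routing an error event to channels whose configured severity floor it meets. Channel names and severity levels are assumptions for illustration, not the OSG implementation.

```python
# Illustrative prioritized alarm routing: each channel subscribes with a
# minimum severity; an event is fanned out to every channel whose floor it
# reaches. All names here are made up for the sketch.

SEVERITY = {"info": 0, "warning": 1, "critical": 2}

class AlarmDispatcher:
    def __init__(self):
        # channel -> minimum severity that channel should receive
        self.routes = {"email": "info", "sms": "critical", "im": "warning"}

    def dispatch(self, event, severity):
        sent = [ch for ch, floor in self.routes.items()
                if SEVERITY[severity] >= SEVERITY[floor]]
        return {ch: f"[{severity}] {event}" for ch in sent}
```

With this configuration a "warning" reaches email and IM but not SMS, while a "critical" event reaches all three, which is the prioritized, configurable behaviour the abstract emphasizes.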
Optimize of shrink process with X-Y CD bias on hole pattern
NASA Astrophysics Data System (ADS)
Koike, Kyohei; Hara, Arisa; Natori, Sakurako; Yamauchi, Shohei; Yamato, Masatoshi; Oyama, Kenichi; Yaegashi, Hidetami
2017-03-01
Gridded design rules [1] are a mainstream approach to configuring logic circuits with 193-nm immersion lithography. In scaled grid patterning, 10-nm-order line-and-space patterns can be produced using multiple-patterning techniques such as self-aligned multiple patterning (SAMP) and litho-etch-litho-etch (LELE) [2][3][4]. On the other hand, the line-cut process suffers from several error sources as the scale decreases, such as pattern defects, placement error, roughness, and X-Y CD bias. We attempted to cure hole-pattern roughness with an additional process such as line smoothing [5]. Each smoothing process showed a different effect; as a result, without the additional process the CDx shrink amount was smaller than that of CDy. In this paper, we report a comparison of pattern controllability between EUV and 193-nm immersion lithography and discuss an optimal method for controlling CD bias on hole patterns.
Improving Energy Use Forecast for Campus Micro-grids using Indirect Indicators
DOE Office of Scientific and Technical Information (OSTI.GOV)
Aman, Saima; Simmhan, Yogesh; Prasanna, Viktor K.
2011-12-11
The rising global demand for energy is best addressed by adopting and promoting sustainable methods of power consumption. We employ an informatics approach towards forecasting the energy consumption patterns in a university campus micro-grid which can be used for energy use planning and conservation. We use novel indirect indicators of energy that are commonly available to train regression tree models that can predict campus and building energy use for coarse (daily) and fine (15-min) time intervals, utilizing 3 years of sensor data collected at 15-min intervals from 170 smart power meters. We analyze the impact of individual features used in the models to identify the ones best suited for the application. Our models show a high degree of accuracy, with CV-RMSE errors ranging from 7.45% to 19.32%, and a reduction in error from baseline models by up to 53%.
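CV-RMSE as reported above is the root-mean-square error normalized by the mean observed value, expressed in percent; a minimal sketch follows (the data in the usage note are made up, not the campus meter readings).

```python
import math

# Coefficient of variation of the RMSE: RMSE between observed and predicted
# energy use, divided by the mean observed value, in percent.

def cv_rmse(actual, predicted):
    n = len(actual)
    rmse = math.sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / n)
    return 100.0 * rmse / (sum(actual) / n)
```

For instance, predictions of 9 and 11 kWh against actuals of 10 and 10 kWh give a CV-RMSE of 10%; the models above land between 7.45% and 19.32% on real meter data.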
Evaluation of the Lattice-Boltzmann Equation Solver PowerFLOW for Aerodynamic Applications
NASA Technical Reports Server (NTRS)
Lockard, David P.; Luo, Li-Shi; Singer, Bart A.; Bushnell, Dennis M. (Technical Monitor)
2000-01-01
A careful comparison of the performance of a commercially available Lattice-Boltzmann Equation solver (PowerFLOW) was made with a conventional, block-structured computational fluid-dynamics code (CFL3D) for the flow over a two-dimensional NACA-0012 airfoil. The results suggest that the version of PowerFLOW used in the investigation produced solutions with large errors in the computed flow field; these errors are attributed to inadequate resolution of the boundary layer for reasons related to grid resolution and primitive turbulence modeling. The requirement of square grid cells in the PowerFLOW calculations limited the number of points that could be used to span the boundary layer on the wing and still keep the computation size small enough to fit on the available computers. Although not discussed in detail, disappointing results were also obtained with PowerFLOW for a cavity flow and for the flow around a generic helicopter configuration.
Sando, Steven K.; Vecchia, Aldo V.
2016-07-20
During the extended history of mining in the upper Clark Fork Basin in Montana, large amounts of waste materials enriched with metallic contaminants (cadmium, copper, lead, and zinc) and the metalloid trace element arsenic were generated from mining operations near Butte and milling and smelting operations near Anaconda. Extensive deposition of mining wastes in the Silver Bow Creek and Clark Fork channels and flood plains had substantial effects on water quality. Federal Superfund remediation activities in the upper Clark Fork Basin began in 1983 and have included substantial remediation near Butte and removal of the former Milltown Dam near Missoula. To aid in evaluating the effects of remediation activities on water quality, the U.S. Geological Survey began collecting streamflow and water-quality data in the upper Clark Fork Basin in the 1980s.Trend analysis was done on specific conductance, selected trace elements (arsenic, copper, and zinc), and suspended sediment for seven sampling sites in the Milltown Reservoir/Clark Fork River Superfund Site for water years 1996–2015. The most upstream site included in trend analysis is Silver Bow Creek at Warm Springs, Montana (sampling site 8), and the most downstream site is Clark Fork above Missoula, Montana (sampling site 22), which is just downstream from the former Milltown Dam. Water year is the 12-month period from October 1 through September 30 and is designated by the year in which it ends. Trend analysis was done by using a joint time-series model for concentration and streamflow. To provide temporal resolution of changes in water quality, trend analysis was conducted for four sequential 5-year periods: period 1 (water years 1996–2000), period 2 (water years 2001–5), period 3 (water years 2006–10), and period 4 (water years 2011–15). 
Because of the substantial effect of the intentional breach of Milltown Dam on March 28, 2008, period 3 was subdivided into period 3A (October 1, 2005–March 27, 2008) and period 3B (March 28, 2008–September 30, 2010) for the Clark Fork above Missoula (sampling site 22). Trend results were considered statistically significant when the statistical probability level was less than 0.01.In conjunction with the trend analysis, estimated normalized constituent loads (hereinafter referred to as “loads”) were calculated and presented within the framework of a constituent-transport analysis to assess the temporal trends in flow-adjusted concentrations (FACs) in the context of sources and transport. The transport analysis allows assessment of temporal changes in relative contributions from upstream source areas to loads transported past each reach outflow.Trend results indicate that FACs of unfiltered-recoverable copper decreased at the sampling sites from the start of period 1 through the end of period 4; the decreases ranged from large for one sampling site (Silver Bow Creek at Warm Springs [sampling site 8]) to moderate for two sampling sites (Clark Fork near Galen, Montana [sampling site 11] and Clark Fork above Missoula [sampling site 22]) to small for four sampling sites (Clark Fork at Deer Lodge, Montana [sampling site 14], Clark Fork at Goldcreek, Montana [sampling site 16], Clark Fork near Drummond, Montana [sampling site 18], and Clark Fork at Turah Bridge near Bonner, Montana [sampling site 20]). For period 4 (water years 2011–15), the most notable changes indicated for the Milltown Reservoir/Clark Fork River Superfund Site were statistically significant decreases in FACs and loads of unfiltered-recoverable copper for sampling sites 8 and 22. 
The period 4 changes in FACs of unfiltered-recoverable copper for all other sampling sites were not statistically significant.Trend results indicate that FACs of unfiltered-recoverable arsenic decreased at the sampling sites from period 1 through period 4 (water years 1996–2015); the decreases ranged from minor (sampling sites 8–20) to small (sampling site 22). For period 4 (water years 2011–15), the most notable changes indicated for the Milltown Reservoir/Clark Fork River Superfund Site were statistically significant decreases in FACs and loads of unfiltered-recoverable arsenic for sampling site 8 and near statistically significant decreases for sampling site 22. The period 4 changes in FACs of unfiltered-recoverable arsenic for all other sampling sites were not statistically significant.Trend results indicate that FACs of suspended sediment decreased at the sampling sites from period 1 through period 4 (water years 1996–2015); the decreases ranged from moderate (sampling site 8) to small (sampling sites 11–22). For period 4 (water years 2011–15), the changes in FACs of suspended sediment were not statistically significant for any sampling sites.The reach of the Clark Fork from Galen to Deer Lodge is a large source of metallic contaminants and suspended sediment, which strongly affects downstream transport of those constituents. Mobilization of copper and suspended sediment from flood-plain tailings and the streambed of the Clark Fork and its tributaries within the reach results in a contribution of those constituents that is proportionally much larger than the contribution of streamflow from within the reach. Within the reach from Galen to Deer Lodge, unfiltered-recoverable copper loads increased by a factor of about 4 and suspended-sediment loads increased by a factor of about 5, whereas streamflow increased by a factor of slightly less than 2. 
For period 4 (water years 2011–15), unfiltered-recoverable copper and suspended-sediment loads sourced from within the reach accounted for about 41 and 14 percent, respectively, of the loads at Clark Fork above Missoula (sampling site 22), whereas streamflow sourced from within the reach accounted for about 4 percent of the streamflow at sampling site 22. During water years 1996–2015, decreases in FACs and loads of unfiltered-recoverable copper and suspended sediment for the reach generally were proportionally smaller than for most other reaches.Unfiltered-recoverable copper loads sourced within the reaches of the Clark Fork between Deer Lodge and Turah Bridge near Bonner (just upstream from the former Milltown Dam) were proportionally smaller than contributions of streamflow sourced from within the reaches; these reaches contributed proportionally much less to copper loading in the Clark Fork than the reach between Galen and Deer Lodge. Although substantial decreases in FACs and loads of unfiltered-recoverable copper and suspended sediment were indicated for Silver Bow Creek at Warm Springs (sampling site 8), those substantial decreases were not translated to downstream reaches between Deer Lodge and Turah Bridge near Bonner. The effect of the reach of the Clark Fork from Galen to Deer Lodge as a large source of copper and suspended sediment, in combination with little temporal change in those constituents for the reach, contributes to this pattern.With the removal of the former Milltown Dam in 2008, substantial amounts of contaminated sediments that remained in the Clark Fork channel and flood plain in reach 9 (downstream from Turah Bridge near Bonner) became more available for mobilization and transport than before the dam removal. 
After the removal of the former Milltown Dam, the Clark Fork above Missoula (sampling site 22) had statistically significant decreases in FACs of unfiltered-recoverable copper in period 3B (March 28, 2008, through water year 2010) that continued in period 4 (water years 2011–15). Also, decreases in FACs of unfiltered-recoverable arsenic and suspended sediment were indicated for period 4 at this site. The decrease in FACs of unfiltered-recoverable copper for sampling site 22 during period 4 was proportionally much larger than the decrease for the Clark Fork at Turah Bridge near Bonner (sampling site 20). Net mobilization of unfiltered-recoverable copper and arsenic from sources within reach 9 are smaller for period 4 than for period 1 when the former Milltown Dam was in place, providing evidence that contaminant source materials have been substantially reduced in reach 9.
Flux Sampling Errors for Aircraft and Towers
NASA Technical Reports Server (NTRS)
Mahrt, Larry
1998-01-01
Various errors and influences leading to differences between tower- and aircraft-measured fluxes are surveyed. This survey is motivated by reports in the literature that aircraft fluxes are sometimes smaller than tower-measured fluxes. Both tower and aircraft flux errors are larger with surface heterogeneity due to several independent effects. Surface heterogeneity may cause tower flux errors to increase with decreasing wind speed. Techniques to assess flux sampling error are reviewed. Such error estimates suffer various degrees of inapplicability in real geophysical time series due to nonstationarity of tower time series (or inhomogeneity of aircraft data). A new measure for nonstationarity is developed that eliminates assumptions on the form of the nonstationarity inherent in previous methods. When this nonstationarity measure becomes large, the surface energy imbalance increases sharply. Finally, strategies for obtaining adequate flux sampling using repeated aircraft passes and grid patterns are outlined.
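To make the sampling-error discussion concrete, a generic eddy-covariance flux estimate is sketched below: the flux is the covariance of vertical velocity and a scalar over the averaging window. This is the standard covariance definition, not the paper's nonstationarity measure; a trend in the record biases this estimate, which is what motivates such a measure.

```python
# Generic eddy-covariance flux: covariance of vertical velocity w and a
# scalar c (e.g. temperature) over the averaging window. Nonstationarity
# (a trend in w or c) contaminates the perturbations and biases the flux.

def eddy_flux(w, c):
    n = len(w)
    wm = sum(w) / n
    cm = sum(c) / n
    return sum((wi - wm) * (ci - cm) for wi, ci in zip(w, c)) / n
```

Both tower and aircraft fluxes reduce to this computation; the survey's point is that the random and systematic errors in it grow with surface heterogeneity and with nonstationarity of the record.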
Impact of cell size on inventory and mapping errors in a cellular geographic information system
NASA Technical Reports Server (NTRS)
Wehde, M. E. (Principal Investigator)
1979-01-01
The author has identified the following significant results. The effect of grid position was found insignificant for maps but highly significant for isolated mapping units. A modelable relationship between mapping error and cell size was observed for the map segment analyzed. Map data structure was also analyzed with an interboundary distance distribution approach. Map data structure and the impact of cell size on that structure were observed. The existence of a model allowing prediction of mapping error based on map structure was hypothesized and two generations of models were tested under simplifying assumptions.
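A simple illustrative model consistent with the hypothesis above, that rasterization (mapping) error can be predicted from map structure: the misassigned area scales roughly with the total boundary length times the cell size, since only cells straddling a boundary can be misclassified. The shape factor k is an assumption for the sketch, not a fitted constant from the study.

```python
# Toy model: expected misassigned area when rasterizing a map at a given
# cell size, proportional to boundary length x cell size. The factor k is
# a hypothetical shape/geometry constant, not from the paper.

def expected_mapping_error(boundary_length, cell_size, k=0.25):
    """Expected misassigned area for a map with the given total boundary
    length, rasterized at `cell_size` (same length units throughout)."""
    return k * boundary_length * cell_size
```

Under this model, halving the cell size halves the expected mapping error for a fixed map, which is the kind of modelable cell-size relationship the abstract reports observing for the analyzed map segment.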
Lewis and Clark Park Shuttle: Lessons Learned.
DOT National Transportation Integrated Search
2006-08-01
In anticipation of increased visitation expected for the Lewis & Clark bicentennial, the park, Sunset Empire Transportation District, and other partners implemented a seasonal summer bus service that provided an alternative to driving to Fort Clatsop...
40 CFR 62.3630 - Identification of plan.
Code of Federal Regulations, 2011 CFR
2011-07-01
..., Rule 8. Municipal Solid Waste Landfills Located in Clark, Floyd, Lake and Porter Counties and Rule 8.1. Municipal Solid Waste Landfills Not Located in Clark, Floyd, Lake and Porter Counties added at 21 Indiana...
40 CFR 52.730 - Compliance schedules.
Code of Federal Regulations, 2011 CFR
2011-07-01
.... cook county Harco Aluminum Inc Chicago 204(c) Dec. 9, 1973. J. L. Clark Manufacturing Co Downers Grove... Mills Inc Mendota 204(c) May 28, 1973. madison county Clark Oil & Refining Corp Hartford 204(f) Feb. 22...
40 CFR 52.730 - Compliance schedules.
Code of Federal Regulations, 2010 CFR
2010-07-01
.... cook county Harco Aluminum Inc Chicago 204(c) Dec. 9, 1973. J. L. Clark Manufacturing Co Downers Grove... Mills Inc Mendota 204(c) May 28, 1973. madison county Clark Oil & Refining Corp Hartford 204(f) Feb. 22...
27 CFR 9.139 - Santa Lucia Highlands.
Code of Federal Regulations, 2011 CFR
2011-04-01
... Paraiso Road in a southerly direction to the intersection with Clark Road on the Paraiso Springs, California U.S.G.S. map. (9) Then east-northeasterly along Clark Road for approximately 1,000 feet to its...
7 CFR 407.10 - Group risk plan for barley.
Code of Federal Regulations, 2010 CFR
2010-01-01
...; California; and Clark and Nye Counties, Nevada October 31 June 30. All Colorado counties except Kit Carson...; all Nevada counties except Clark and Nye Counties; Taos County, New Mexico; and all other states...
7 CFR 407.10 - Group risk plan for barley.
Code of Federal Regulations, 2011 CFR
2011-01-01
...; California; and Clark and Nye Counties, Nevada October 31 June 30. All Colorado counties except Kit Carson...; all Nevada counties except Clark and Nye Counties; Taos County, New Mexico; and all other states...
27 CFR 9.28 - Santa Maria Valley.
Code of Federal Regulations, 2010 CFR
2010-04-01
... locally as Clark Road) intersects; Thence northerly along U.S. 101 to a point where it intersects with... westerly direction along the unnamed road (known locally as Clark Road) to the point of beginning. [T.D...
27 CFR 9.139 - Santa Lucia Highlands.
Code of Federal Regulations, 2010 CFR
2010-04-01
... Paraiso Road in a southerly direction to the intersection with Clark Road on the Paraiso Springs, California U.S.G.S. map. (9) Then east-northeasterly along Clark Road for approximately 1,000 feet to its...