Sample records for flow measurement error

  1. Measurement of Fracture Aperture Fields Using Transmitted Light: An Evaluation of Measurement Errors and their Influence on Simulations of Flow and Transport through a Single Fracture

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Detwiler, Russell L.; Glass, Robert J.; Pringle, Scott E.

    Understanding of single and multi-phase flow and transport in fractures can be greatly enhanced through experimentation in transparent systems (analogs or replicas) where light transmission techniques yield quantitative measurements of aperture, solute concentration, and phase saturation fields. Here we quantify aperture field measurement error and demonstrate the influence of this error on the results of flow and transport simulations (hypothesized experimental results) through saturated and partially saturated fractures. We find that precision and accuracy can be balanced to greatly improve the technique, and we present a measurement protocol to obtain a minimum-error field. Simulation results show an increased sensitivity to error as we move from flow to transport and from saturated to partially saturated conditions. Significant sensitivity under partially saturated conditions results in differences in channeling and multiple-peaked breakthrough curves. These results emphasize the critical importance of defining and minimizing error for studies of flow and transport in single fractures.

  2. Correcting For Seed-Particle Lag In LV Measurements

    NASA Technical Reports Server (NTRS)

    Jones, Gregory S.; Gartrell, Luther R.; Kamemoto, Derek Y.

    1994-01-01

    Two experiments conducted to evaluate effects of sizes of seed particles on errors in LV measurements of mean flows. Both theoretical and conventional experimental methods used to evaluate errors. First experiment focused on measurement of decelerating stagnation streamline of low-speed flow around circular cylinder with two-dimensional afterbody. Second performed in transonic flow and involved measurement of decelerating stagnation streamline of hemisphere with cylindrical afterbody. Concluded, mean-quantity LV measurements subject to large errors directly attributable to sizes of particles. Predictions of particle-response theory showed good agreement with experimental results, indicating velocity-error-correction technique used in study viable for increasing accuracy of laser velocimetry measurements. Technique simple and useful in any research facility in which flow velocities measured.
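
    The particle-response theory referenced above is commonly written as a first-order Stokes-drag lag equation. The sketch below illustrates that model; the specific formulation and particle properties used in the cited study are not given in the abstract, so the parameter values here are purely illustrative assumptions.

```python
# Sketch of first-order (Stokes-drag) particle response, a common model for
# seed-particle lag in LV measurements. Parameter values are illustrative only.
import numpy as np

def particle_response_time(rho_p, d_p, mu):
    """Stokes response time tau_p = rho_p * d_p^2 / (18 * mu)."""
    return rho_p * d_p**2 / (18.0 * mu)

def lagged_velocity(t, u_fluid, tau_p, v0):
    """Integrate dv_p/dt = (u_f(t) - v_p) / tau_p with simple forward Euler."""
    v = np.empty_like(t)
    v[0] = v0
    for i in range(1, len(t)):
        dt = t[i] - t[i - 1]
        v[i] = v[i - 1] + dt * (u_fluid[i - 1] - v[i - 1]) / tau_p
    return v

# Example: a 1-micron droplet decelerating along an idealized stagnation streamline.
tau = particle_response_time(rho_p=900.0, d_p=1e-6, mu=1.8e-5)   # ~2.8e-6 s
t = np.linspace(0.0, 5e-3, 2000)
u_f = 50.0 * np.exp(-t / 1e-3)              # assumed decelerating flow history
v_p = lagged_velocity(t, u_f, tau, v0=50.0)
print(f"tau_p = {tau:.2e} s, max lag error = {np.max(u_f - v_p):.3f} m/s")
```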

  3. Effects of free convection and friction on heat-pulse flowmeter measurement

    NASA Astrophysics Data System (ADS)

    Lee, Tsai-Ping; Chia, Yeeping; Chen, Jiun-Szu; Chen, Hongey; Liu, Chen-Wuing

    2012-03-01

    The heat-pulse flowmeter can be used to measure low flow velocities in a borehole; however, bias in the results due to measurement error is often encountered. A carefully designed water circulation system was established in the laboratory to evaluate the accuracy and precision of flow velocity measured by heat-pulse flowmeter in various conditions. Test results indicated that the coefficient of variation for repeated measurements, ranging from 0.4% to 5.8%, tends to increase with flow velocity. The measurement error increases from 4.6% to 94.4% as the average flow velocity decreases from 1.37 cm/s to 0.18 cm/s. We found that the error resulted primarily from free convection and frictional loss. Free convection plays an important role in heat transport at low flow velocities. The frictional effect varies with the position of measurement and geometric shape of the inlet and flow-through cell of the flowmeter. Based on the laboratory test data, a calibration equation for the measured flow velocity was derived by least-squares regression analysis. When the flowmeter is used with a diverter, the range of measured flow velocity can be extended, but the measurement error and the coefficient of variation due to friction increase significantly. At higher velocities under turbulent flow conditions, the measurement error is greater than 100%. Our laboratory experimental results suggested that, to avoid a large error, the heat-pulse flowmeter measurement is better conducted in laminar flow and the effect of free convection should be eliminated at all flow velocities. Field measurement of the vertical flow velocity using the heat-pulse flowmeter was tested in a monitoring well. The calibration of measured velocities not only improved the contrast in hydraulic conductivity between permeable and less permeable layers, but also corrected the inconsistency between the pumping rate and the measured flow rate. We identified two highly permeable sections where the horizontal hydraulic conductivity is 3.7-6.4 times the equivalent hydraulic conductivity obtained from the pumping test. The field test results indicated that, with a proper calibration, the flowmeter measurement is capable of characterizing the vertical distribution of preferential flow or hydraulic conductivity.
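
    A calibration equation derived by least-squares regression, as described above, can be illustrated with a minimal sketch. The linear form and the sample velocity pairs below are assumptions for illustration only; the paper's actual calibration equation and data are not reproduced here.

```python
# Minimal sketch of deriving a flowmeter calibration equation by least-squares
# regression of reference velocities against measured velocities. The linear form
# and the data values are illustrative assumptions, not the study's data.
import numpy as np

v_measured = np.array([0.18, 0.35, 0.70, 1.00, 1.37])   # cm/s, flowmeter readings (assumed)
v_reference = np.array([0.35, 0.48, 0.78, 1.03, 1.36])  # cm/s, circulation-system reference (assumed)

# Fit v_reference = a * v_measured + b
a, b = np.polyfit(v_measured, v_reference, deg=1)
residuals = v_reference - (a * v_measured + b)
print(f"calibration: v_true ≈ {a:.3f} * v_meas + {b:.3f}, RMS residual = {residuals.std():.3f} cm/s")
```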

  4. A cautionary note on the use of some mass flow controllers

    NASA Astrophysics Data System (ADS)

    Weinheimer, Andrew J.; Ridley, Brian A.

    1990-06-01

    Commercial mass flow controllers are widely used in atmospheric research where precise and constant gas flows are required. We have determined, however, that some commonly used controllers are far more sensitive to ambient pressure than is acknowledged in the literature of the manufacturers. Since a flow error can lead directly to a measurement error of the same magnitude, this is a matter of great concern. Indeed, in our particular application, were we not aware of this problem, our measurements would be subject to a systematic error that increased with altitude (i.e., a drift), up to a factor of 2 at the highest altitudes (˜37 km). In this note we present laboratory measurements of the errors of two brands of flow controllers when operated at pressures down to a few millibars. The errors are as large as a factor of 2 to 3 and depend not simply on the ambient pressure at a given time, but also on the pressure history. In addition there is a large dependence on flow setting. In light of these flow errors, some past measurements of chemical species in the stratosphere will need to be revised.

  5. Addressing Systematic Errors in Correlation Tracking on HMI Magnetograms

    NASA Astrophysics Data System (ADS)

    Mahajan, Sushant S.; Hathaway, David H.; Munoz-Jaramillo, Andres; Martens, Petrus C.

    2017-08-01

    Correlation tracking in solar magnetograms is an effective method to measure the differential rotation and meridional flow on the solar surface. However, since the tracking accuracy required to successfully measure meridional flow is very high, small systematic errors have a noticeable impact on measured meridional flow profiles. Additionally, the uncertainties of these measurements have been historically underestimated, leading to controversy regarding flow profiles at high latitudes extracted from measurements which are unreliable near the solar limb. Here we present a set of systematic errors we have identified (and potential solutions), including bias caused by physical pixel sizes, center-to-limb systematics, and discrepancies between measurements performed using different time intervals. We have developed numerical techniques to remove these systematic errors and, in the process, improve the accuracy of the measurements by an order of magnitude. We also present a detailed analysis of uncertainties in these measurements using synthetic magnetograms and the quantification of an upper limit below which meridional flow measurements cannot be trusted as a function of latitude.

  6. Extension of sonic anemometry to high subsonic Mach number flows

    NASA Astrophysics Data System (ADS)

    Otero, R.; Lowe, K. T.; Ng, W. F.

    2017-03-01

    In the literature, the application of sonic anemometry has been limited to low subsonic Mach number, near-incompressible flow conditions. To the best of the authors’ knowledge, this paper represents the first time a sonic anemometry approach has been used to characterize flow velocity beyond Mach 0.3. Using a high speed jet, flow velocity was measured using a modified sonic anemometry technique in flow conditions up to Mach 0.83. A numerical study was conducted to identify the effects of microphone placement on the accuracy of the measured velocity. Based on estimated error strictly due to uncertainty in time-of-acoustic flight, a random error of ±4 m s-1 was identified for the configuration used in this experiment. Comparison with measurements from a Pitot probe indicated a velocity RMS error of ±9 m s-1. The discrepancy in error is attributed to a systematic error which may be calibrated out in future work. Overall, the experimental results from this preliminary study support the use of acoustics for high subsonic flow characterization.

  7. Development of multiple-eye PIV using mirror array

    NASA Astrophysics Data System (ADS)

    Maekawa, Akiyoshi; Sakakibara, Jun

    2018-06-01

    In order to reduce particle image velocimetry measurement error, we manufactured an ellipsoidal polyhedral mirror and placed it between a camera and flow target to capture n images of identical particles from n (=80 maximum) different directions. The 3D particle positions were determined from the ensemble average of n C2 intersecting points of a pair of line-of-sight back-projected points from a particle found in any combination of two images in the n images. The method was then applied to a rigid-body rotating flow and a turbulent pipe flow. In the former measurement, bias error and random error fell in a range of  ±0.02 pixels and 0.02–0.05 pixels, respectively; additionally, random error decreased in proportion to . In the latter measurement, in which the measured value was compared to direct numerical simulation, bias error was reduced and random error also decreased in proportion to .
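
    A minimal sketch of the pairwise triangulation step described above: each pair of views yields the midpoint of the common perpendicular between two back-projected lines, and the particle position is the ensemble average over all pairs. The line parameterization and test data are generic assumptions, not the authors' implementation.

```python
# Sketch of averaging pairwise line-of-sight intersection points to locate a
# particle seen from n views. Test geometry and noise level are illustrative.
import numpy as np
from itertools import combinations

def closest_point_between_lines(p1, d1, p2, d2):
    """Midpoint of the shortest segment connecting lines p1 + t*d1 and p2 + s*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    w0 = p1 - p2
    a, b, c = d1 @ d1, d1 @ d2, d2 @ d2
    d, e = d1 @ w0, d2 @ w0
    denom = a * c - b * b                 # nonzero unless the lines are parallel
    t = (b * e - c * d) / denom
    s = (a * e - b * d) / denom
    return 0.5 * ((p1 + t * d1) + (p2 + s * d2))

def particle_position(origins, directions):
    """Ensemble average of the pairwise intersection points from n views."""
    pts = [closest_point_between_lines(origins[i], directions[i], origins[j], directions[j])
           for i, j in combinations(range(len(origins)), 2)]
    return np.mean(pts, axis=0)

# Three slightly noisy lines aimed at the origin recover it closely.
rng = np.random.default_rng(0)
origins = [np.array([1.0, 0, 0]), np.array([0, 1.0, 0]), np.array([0, 0, 1.0])]
directions = [-o + 0.01 * rng.standard_normal(3) for o in origins]
print(particle_position(origins, directions))   # ≈ [0, 0, 0]
```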

  8. A dual-phantom system for validation of velocity measurements in stenosis models under steady flow.

    PubMed

    Blake, James R; Easson, William J; Hoskins, Peter R

    2009-09-01

    A dual-phantom system is developed for validation of velocity measurements in stenosis models. Pairs of phantoms with identical geometry and flow conditions are manufactured, one for ultrasound and one for particle image velocimetry (PIV). The PIV model is made from silicone rubber, and a new PIV fluid is made that matches the refractive index of 1.41 of silicone. Dynamic scaling was performed to correct for the increased viscosity of the PIV fluid compared with that of the ultrasound blood mimic. The degree of stenosis in the model pairs agreed to less than 1%. The velocities in the laminar flow region up to the peak velocity location agreed to within 15%, and the difference could be explained by errors in ultrasound velocity estimation. At low flow rates and in mild stenoses, good agreement was observed in the distal flow fields, excepting the maximum velocities. At high flow rates, there was a considerable difference in velocities in the poststenosis flow field (maximum centreline differences of 30%), which would seem to represent real differences in hydrodynamic behavior between the two models. Sources of error included: variation of viscosity because of temperature (random error, which could account for differences of up to 7%); ultrasound velocity estimation errors (systematic errors); and geometry effects in each model, particularly because of imperfect connectors and corners (systematic errors, potentially affecting the inlet length and flow stability). The current system is best placed to investigate measurement errors in the laminar flow region rather than the poststenosis turbulent flow region.

  9. Role of turbulence fluctuations on uncertainties of acoustic Doppler current profiler discharge measurements

    USGS Publications Warehouse

    Tarrab, Leticia; Garcia, Carlos M.; Cantero, Mariano I.; Oberg, Kevin

    2012-01-01

    This work presents a systematic analysis quantifying the role of the presence of turbulence fluctuations on uncertainties (random errors) of acoustic Doppler current profiler (ADCP) discharge measurements from moving platforms. Data sets of three-dimensional flow velocities with high temporal and spatial resolution were generated from direct numerical simulation (DNS) of turbulent open channel flow. Dimensionless functions relating parameters quantifying the uncertainty in discharge measurements due to flow turbulence (relative variance and relative maximum random error) to sampling configuration were developed from the DNS simulations and then validated with field-scale discharge measurements. The validated functions were used to evaluate the role of the presence of flow turbulence fluctuations on uncertainties in ADCP discharge measurements. The results of this work indicate that random errors due to the flow turbulence are significant when: (a) a low number of transects is used for a discharge measurement, and (b) measurements are made in shallow rivers using high boat velocity (short time for the boat to cross a flow turbulence structure).

  10. Vector velocity volume flow estimation: Sources of error and corrections applied for arteriovenous fistulas.

    PubMed

    Jensen, Jonas; Olesen, Jacob Bjerring; Stuart, Matthias Bo; Hansen, Peter Møller; Nielsen, Michael Bachmann; Jensen, Jørgen Arendt

    2016-08-01

    A method for vector velocity volume flow estimation is presented, along with an investigation of its sources of error and correction of actual volume flow measurements. Volume flow errors are quantified theoretically by numerical modeling, through flow phantom measurements, and studied in vivo. This paper investigates errors from estimating volumetric flow using a commercial ultrasound scanner and the common assumptions made in the literature. The theoretical model shows, for example, that volume flow is underestimated by 15% when the scan plane is off-axis with the vessel center by 28% of the vessel radius. The error sources were also studied in vivo under realistic clinical conditions, and the theoretical results were applied for correcting the volume flow errors. Twenty dialysis patients with arteriovenous fistulas were scanned to obtain vector flow maps of fistulas. When fitting an ellipse to cross-sectional scans of the fistulas, the major axis was on average 10.2 mm, which is 8.6% larger than the minor axis. The ultrasound beam was on average 1.5 mm from the vessel center, corresponding to 28% of the semi-major axis in an average fistula. Estimating volume flow with an elliptical, rather than circular, vessel area and correcting the ultrasound beam for being off-axis, gave a significant (p=0.008) reduction in error from 31.2% to 24.3%. The error is relative to the Ultrasound Dilution Technique, which is considered the gold standard for volume flow estimation for dialysis patients. The study shows the importance of correcting for volume flow errors, which are often made in clinical practice.
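
    The effect of the elliptical-area correction described above can be illustrated with a small sketch. The ellipse dimensions follow the abstract; the mean velocity and the use of a simple area-times-velocity product are assumptions for illustration, not the authors' full estimator.

```python
# Sketch of the elliptical-area step in a volume flow estimate. Ellipse
# dimensions are taken from the abstract; the mean velocity is an assumption.
import numpy as np

major_axis = 10.2e-3             # m, mean major axis reported in the abstract
minor_axis = major_axis / 1.086  # m, minor axis ~8.6% smaller than the major axis

area_ellipse = np.pi * (major_axis / 2.0) * (minor_axis / 2.0)
area_circle = np.pi * (major_axis / 2.0) ** 2   # what a circular assumption would give

v_mean = 0.3   # m/s, assumed spatial-mean velocity from the vector flow map (illustrative)
q_ellipse = v_mean * area_ellipse * 6.0e7   # m^3/s -> mL/min
q_circle = v_mean * area_circle * 6.0e7

print(f"elliptical-area flow estimate: {q_ellipse:.0f} mL/min")
print(f"circular assumption overestimates by {100.0 * (q_circle / q_ellipse - 1.0):.1f}%")
```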

  11. Correcting for particle counting bias error in turbulent flow

    NASA Technical Reports Server (NTRS)

    Edwards, R. V.; Baratuci, W.

    1985-01-01

    An ideal seeding device is proposed generating particles that exactly follow the flow, but these are still a major source of error, i.e., a particle counting bias wherein the probability of measuring velocity is a function of velocity. The error in the measured mean can be as much as 25%. Many schemes have been put forward to correct for this error, but there is not universal agreement as to the acceptability of any one method. In particular, it is sometimes difficult to know if the assumptions required in the analysis are fulfilled by any particular flow measurement system. To check various correction mechanisms in an ideal way and to gain some insight into how to correct with the fewest initial assumptions, a computer simulation is constructed to simulate laser anemometer measurements in a turbulent flow. That simulator and the results of its use are discussed.

  12. Groundwater flow in the transition zone between freshwater and saltwater: a field-based study and analysis of measurement errors

    NASA Astrophysics Data System (ADS)

    Post, Vincent E. A.; Banks, Eddie; Brunke, Miriam

    2018-02-01

    The quantification of groundwater flow near the freshwater-saltwater transition zone at the coast is difficult because of variable-density effects and tidal dynamics. Head measurements were collected along a transect perpendicular to the shoreline at a site south of the city of Adelaide, South Australia, to determine the transient flow pattern. This paper presents a detailed overview of the measurement procedure, data post-processing methods and uncertainty analysis in order to assess how measurement errors affect the accuracy of the inferred flow patterns. A particular difficulty encountered was that some of the piezometers were leaky, which necessitated regular measurements of the electrical conductivity and temperature of the water inside the wells to correct for density effects. Other difficulties included failure of pressure transducers, data logger clock drift and operator error. The data obtained were sufficiently accurate to show that there is net seaward horizontal flow of freshwater in the top part of the aquifer, and a net landward flow of saltwater in the lower part. The vertical flow direction alternated with the tide, but due to the large uncertainty of the head gradients and density terms, no net flow could be established with any degree of confidence. While the measurement problems were amplified under the prevailing conditions at the site, similar errors can lead to large uncertainties everywhere. The methodology outlined acknowledges the inherent uncertainty involved in measuring groundwater flow. It can also help establish the accuracy requirements of the experimental setup.
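
    The density correction implied by the in-well electrical conductivity and temperature measurements is typically an equivalent-freshwater-head conversion. The sketch below shows that standard conversion with assumed density values and elevations; it is not the study's actual processing chain.

```python
# Sketch of the standard point-water to equivalent-freshwater-head conversion,
#     h_f = z + (rho / rho_f) * (h_p - z),
# where z is the screen elevation. Densities and elevations are illustrative.
def freshwater_head(point_water_head, screen_elevation, rho_water, rho_fresh=1000.0):
    """Equivalent freshwater head for a head measured in water of density rho_water."""
    return screen_elevation + (rho_water / rho_fresh) * (point_water_head - screen_elevation)

# Example: saline water (1025 kg/m^3) standing 4 m above a screen at -10 m datum.
print(freshwater_head(point_water_head=-6.0, screen_elevation=-10.0, rho_water=1025.0))
# -> -5.9 m: a 0.1 m correction, comparable to the head differences of interest.
```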

  13. Two-dimensional confocal laser scanning microscopy image correlation for nanoparticle flow velocimetry

    NASA Astrophysics Data System (ADS)

    Jun, Brian; Giarra, Matthew; Golz, Brian; Main, Russell; Vlachos, Pavlos

    2016-11-01

    We present a methodology to mitigate the major sources of error associated with two-dimensional confocal laser scanning microscopy (CLSM) images of nanoparticles flowing through a microfluidic channel. The correlation-based velocity measurements from CLSM images are subject to random error due to the Brownian motion of nanometer-sized tracer particles, and a bias error due to the formation of images by raster scanning. Here, we develop a novel ensemble phase correlation with dynamic optimal filter that maximizes the correlation strength, which diminishes the random error. In addition, we introduce an analytical model of CLSM measurement bias error correction due to two-dimensional image scanning of tracer particles. We tested our technique using both synthetic and experimental images of nanoparticles flowing through a microfluidic channel. We observed that our technique reduced the error by up to a factor of ten compared to ensemble standard cross correlation (SCC) for the images tested in the present work. Subsequently, we will assess our framework further, by interrogating nanoscale flow in the cell culture environment (transport within the lacunar-canalicular system) to demonstrate our ability to accurately resolve flow measurements in a biological system.

  14. Structural power flow measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falter, K.J.; Keltie, R.F.

    Previous investigations of structural power flow through beam-like structures resulted in some unexplained anomalies in the calculated data. In order to develop structural power flow measurement as a viable technique for machine tool design, the causes of these anomalies needed to be found. Once found, techniques for eliminating the errors could be developed. Error sources were found in the experimental apparatus itself as well as in the instrumentation. Although flexural waves are the carriers of power in the experimental apparatus, at some frequencies longitudinal waves were excited which were picked up by the accelerometers and altered power measurements. Errors were found in the phase and gain response of the sensors and amplifiers used for measurement. A transfer function correction technique was employed to compensate for these instrumentation errors.

  15. Optimal plane search method in blood flow measurements by magnetic resonance imaging

    NASA Astrophysics Data System (ADS)

    Bargiel, Pawel; Orkisz, Maciej; Przelaskowski, Artur; Piatkowska-Janko, Ewa; Bogorodzki, Piotr; Wolak, Tomasz

    2004-07-01

    This paper offers an algorithm for determining the blood flow parameters in the neck vessel segments using a single (optimal) measurement plane instead of the usual approach involving four planes orthogonal to the artery axis. This new approach aims at significantly shortening the time required to complete measurements using Nuclear Magnetic Resonance techniques. Based on a defined error function, the algorithm scans the solution space to find the minimum of the error function, and thus to determine a single plane characterized by a minimum measurement error, which allows for an accurate measurement of blood flow in the four carotid arteries. The paper also comprises a practical implementation of this method (as a module of a larger imaging-measuring system), including preliminary research results.

  16. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    DOE PAGES

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; ...

    2015-02-23

    Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s-1) and errors in the vertical velocity measurement exceed the actual vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake within 1.0 m s-1 (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8% while vertical velocity estimates are compromised. Furthermore, measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.
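
    For reference, a minimal sketch of the DBS retrieval that underlies the error analysis above, assuming a generic four-beam geometry and the horizontal-homogeneity assumption whose violation the paper quantifies. The beam half-angle and test values are illustrative, not those of any particular instrument.

```python
# Sketch of Doppler-beam-swinging (DBS) wind retrieval with an assumed 4-beam
# (N/E/S/W) geometry tilted from vertical. Homogeneous flow across the beams
# is assumed, which is exactly the assumption that breaks down in a wake.
import numpy as np

def dbs_retrieve(v_north, v_east, v_south, v_west, tilt_deg=28.0):
    """Retrieve (u, v, w) from four line-of-sight velocities."""
    phi = np.radians(tilt_deg)
    u = (v_east - v_west) / (2.0 * np.sin(phi))
    v = (v_north - v_south) / (2.0 * np.sin(phi))
    w = (v_north + v_east + v_south + v_west) / (4.0 * np.cos(phi))
    return u, v, w

# Homogeneous test flow (u = 6.5 m/s, v = w = 0) is recovered exactly.
phi = np.radians(28.0)
u_true, v_true, w_true = 6.5, 0.0, 0.0
los = {
    "N": v_true * np.sin(phi) + w_true * np.cos(phi),
    "E": u_true * np.sin(phi) + w_true * np.cos(phi),
    "S": -v_true * np.sin(phi) + w_true * np.cos(phi),
    "W": -u_true * np.sin(phi) + w_true * np.cos(phi),
}
print(dbs_retrieve(los["N"], los["E"], los["S"], los["W"]))   # ≈ (6.5, 0.0, 0.0)
```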

  17. Quantifying error of lidar and sodar Doppler beam swinging measurements of wind turbine wakes using computational fluid dynamics

    NASA Astrophysics Data System (ADS)

    Lundquist, J. K.; Churchfield, M. J.; Lee, S.; Clifton, A.

    2015-02-01

    Wind-profiling lidars are now regularly used in boundary-layer meteorology and in applications such as wind energy and air quality. Lidar wind profilers exploit the Doppler shift of laser light backscattered from particulates carried by the wind to measure a line-of-sight (LOS) velocity. The Doppler beam swinging (DBS) technique, used by many commercial systems, considers measurements of this LOS velocity in multiple radial directions in order to estimate horizontal and vertical winds. The method relies on the assumption of homogeneous flow across the region sampled by the beams. Using such a system in inhomogeneous flow, such as wind turbine wakes or complex terrain, will result in errors. To quantify the errors expected from such violation of the assumption of horizontal homogeneity, we simulate inhomogeneous flow in the atmospheric boundary layer, notably stably stratified flow past a wind turbine, with a mean wind speed of 6.5 m s-1 at the turbine hub-height of 80 m. This slightly stable case results in 15° of wind direction change across the turbine rotor disk. The resulting flow field is sampled in the same fashion that a lidar samples the atmosphere with the DBS approach, including the lidar range weighting function, enabling quantification of the error in the DBS observations. The observations from the instruments located upwind have small errors, which are ameliorated with time averaging. However, the downwind observations, particularly within the first two rotor diameters downwind from the wind turbine, suffer from errors due to the heterogeneity of the wind turbine wake. Errors in the stream-wise component of the flow approach 30% of the hub-height inflow wind speed close to the rotor disk. Errors in the cross-stream and vertical velocity components are also significant: cross-stream component errors are on the order of 15% of the hub-height inflow wind speed (1.0 m s-1) and errors in the vertical velocity measurement exceed the actual vertical velocity. By three rotor diameters downwind, DBS-based assessments of wake wind speed deficits based on the stream-wise velocity can be relied on even within the near wake within 1.0 m s-1 (or 15% of the hub-height inflow wind speed), and the cross-stream velocity error is reduced to 8% while vertical velocity estimates are compromised. Measurements of inhomogeneous flow such as wind turbine wakes are susceptible to these errors, and interpretations of field observations should account for this uncertainty.

  18. Flow tilt angle measurements using lidar anemometry

    NASA Astrophysics Data System (ADS)

    Dellwik, Ebba; Mann, Jakob

    2010-05-01

    A new way of estimating near-surface mean flow tilt angles from ground based Doppler lidar measurements is presented. The results are compared with traditional mast based in-situ sonic anemometry. The tilt angle assessed with the lidar is based on 10 or 30 minute mean values of the velocity field from a conically scanning lidar. In this mode of measurement, the lidar beam is rotated in a circle by a prism with a fixed angle to the vertical at varying focus distances. By fitting a trigonometric function to the scans, the mean vertical velocity can be estimated. Lidar measurements from (1) a fetch-limited beech forest site taken at 48-175m above ground level, (2) a reference site in flat agricultural terrain and (3) a second reference site in very complex terrain are presented. The method to derive flow tilt angles and mean vertical velocities from lidar has several advantages compared to sonic anemometry; there is no flow distortion caused by the instrument itself, there are no temperature effects and the instrument misalignment can be corrected for by comparing tilt estimates at various heights. Contrary to mast-based instruments, the lidar measures the wind field with the exact same alignment error at a multitude of heights. Disadvantages with estimating vertical velocities from a lidar compared to mast-based measurements are slightly increased levels of statistical errors due to limited sampling time, because the sampling is disjunct and a requirement for homogeneous flow. The estimated mean vertical velocity is biased if the flow over the scanned circle is not homogeneous. However, the error on the mean vertical velocity due to flow inhomogeneity can be approximated by a function of the angle of the lidar beam to the vertical, the measurement height and the vertical gradient of the mean vertical velocity, whereas the error due to flow inhomogeneity on the horizontal mean wind speed is independent of the lidar beam angle. For the presented measurements over forest, it is evaluated that the systematic error due to the inhomogeneity of the flow is less than 0.2 degrees. Other possibilities for utilizing lidars for flow tilt angle and mean vertical velocities are discussed.
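
    The trigonometric fit to the conical scan described above can be sketched as a first-harmonic least-squares fit to the line-of-sight velocities, from which the mean vertical velocity and tilt angle follow. The half-cone angle and the synthetic scan below are illustrative assumptions, not the study's processing.

```python
# Sketch of estimating mean vertical velocity and flow tilt angle from a
# conically scanning lidar by fitting v_los = a + b*cos(az) + c*sin(az).
# Cone angle and synthetic data are assumptions for illustration.
import numpy as np

def fit_conical_scan(azimuth_rad, v_los, cone_angle_deg=30.0):
    """Fit the first harmonic and convert coefficients to mean wind components,
    assuming horizontally homogeneous flow over the scanned circle."""
    phi = np.radians(cone_angle_deg)          # angle of the beam from vertical
    A = np.column_stack([np.ones_like(azimuth_rad),
                         np.cos(azimuth_rad), np.sin(azimuth_rad)])
    a, b, c = np.linalg.lstsq(A, v_los, rcond=None)[0]
    w = a / np.cos(phi)                        # mean vertical velocity
    u = b / np.sin(phi)                        # horizontal components along scan axes
    v = c / np.sin(phi)
    tilt_deg = np.degrees(np.arctan2(w, np.hypot(u, v)))
    return u, v, w, tilt_deg

# Synthetic scan: 5 m/s horizontal wind with a 0.1 m/s updraft (~1.1 deg tilt).
az = np.linspace(0.0, 2.0 * np.pi, 72, endpoint=False)
phi = np.radians(30.0)
v_los = 5.0 * np.sin(phi) * np.cos(az) + 0.1 * np.cos(phi)
print(fit_conical_scan(az, v_los))   # tilt ≈ 1.1 degrees
```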

  19. Error Propagation Dynamics of PIV-based Pressure Field Calculations: How well does the pressure Poisson solver perform inherently?

    PubMed

    Pan, Zhao; Whitehead, Jared; Thomson, Scott; Truscott, Tadd

    2016-08-01

    Obtaining pressure field data from particle image velocimetry (PIV) is an attractive technique in fluid dynamics due to its noninvasive nature. The application of this technique generally involves integrating the pressure gradient or solving the pressure Poisson equation using a velocity field measured with PIV. However, very little research has been done to investigate the dynamics of error propagation from PIV-based velocity measurements to the pressure field calculation. Rather than measure the error through experiment, we investigate the dynamics of the error propagation by examining the Poisson equation directly. We analytically quantify the error bound in the pressure field, and are able to illustrate the mathematical roots of why and how the Poisson equation based pressure calculation propagates error from the PIV data. The results show that the error depends on the shape and type of boundary conditions, the dimensions of the flow domain, and the flow type.
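
    For the standard 2D incompressible case, the pressure Poisson equation referred to above reads ∇²p = -ρ(u_x² + 2 u_y v_x + v_y²), so any error in the measured velocity field enters the source term directly. The sketch below forms this source term from a gridded velocity field; the grid, density, and test field are illustrative assumptions, not part of the paper's analysis.

```python
# Sketch of forming the right-hand side of the 2D incompressible pressure
# Poisson equation from a PIV-like velocity field on a uniform grid.
import numpy as np

def poisson_rhs(u, v, dx, dy, rho=1000.0):
    """Source term -rho*(u_x^2 + 2*u_y*v_x + v_y^2) for 2D incompressible flow."""
    u_y, u_x = np.gradient(u, dy, dx)
    v_y, v_x = np.gradient(v, dy, dx)
    return -rho * (u_x**2 + 2.0 * u_y * v_x + v_y**2)

# Test on a solid-body-rotation field u = -omega*y, v = omega*x.
n, L, omega = 64, 0.1, 10.0
y, x = np.meshgrid(np.linspace(-L, L, n), np.linspace(-L, L, n), indexing="ij")
rhs = poisson_rhs(-omega * y, omega * x, dx=2 * L / (n - 1), dy=2 * L / (n - 1))
print(rhs.mean())   # ≈ 2*rho*omega^2, the analytical Laplacian of p for solid-body rotation
```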

  20. Effect of random errors in planar PIV data on pressure estimation in vortex dominated flows

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2015-11-01

    The sensitivity of pressure estimation techniques from Particle Image Velocimetry (PIV) measurements to random errors in measured velocity data is investigated using the flow over a circular cylinder as a test case. Direct numerical simulations are performed for ReD = 100, 300 and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A range of random errors typical for PIV measurements is applied to synthetic PIV data extracted from numerical results. A parametric study is then performed using a number of common pressure estimation techniques. Optimal temporal and spatial resolutions are derived based on the sensitivity of the estimated pressure fields to the simulated random error in velocity measurements, and the results are compared to an optimization model derived from error propagation theory. It is shown that the reductions in spatial and temporal scales at higher Reynolds numbers leads to notable changes in the optimal pressure evaluation parameters. The effect of smaller scale wake structures is also quantified. The errors in the estimated pressure fields are shown to depend significantly on the pressure estimation technique employed. The results are used to provide recommendations for the use of pressure and force estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.

  1. Use of O-15 water and C-11 butanol to measure cerebral blood flow (CBF) and water permeability with positron emission tomography (PET)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herscovitch, P.; Raichle, M.E.; Kilbourn, M.R.

    1985-05-01

    Tracers used to measure CBF with PET and the Kety autoradiographic approach should freely cross the blood-brain barrier. O-15 water, which is not freely permeable, may underestimate CBF, especially at higher flows. The authors determined this underestimation relative to flow measured with a freely diffusible tracer, C-11 butanol, and used these data to calculate the extraction (E) and permeability surface area product (PS) for O-15 water. Paired flow measurements were made with O-15 water (CBF-wat) and C-11 butanol (CBF-but) in eight normal human subjects. Average CBF-but, 55.6 ml/(min . 100g), was significantly greater than CBF-wat, 47.6 ml/(min . 100g). The ratio of regional gray matter (GM) flow to white matter (WM) flow was significantly greater with C-11 butanol, indicating a greater underestimation of CBF with O-15 water in the higher flow GM. Average E for water was 0.92 in WM and 0.82 in GM. The mean PS in GM, 148 ml/(min . 100g), was significantly greater than in WM, 94 ml/(min . 100g). Simulation studies demonstrated that a measurement error in CBF-wat or CBF-but causes an approximately equivalent error in E but a considerably larger error in PS due to the sensitivity of the equation, PS = -CBF · ln(1-E), to variations in E. Modest errors in E and PS result from tissue heterogeneity that occurs due to the limited spatial resolution of PET. The authors' measurements of E and PS for water are similar to data obtained by more invasive methods and demonstrate the ability of PET to measure brain water permeability.
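
    The sensitivity of PS to E noted above follows directly from the quoted relation PS = -CBF · ln(1-E). The short sketch below evaluates it with the gray-matter extraction from the abstract and an illustrative flow value chosen only to be consistent with the quoted PS; it is not the authors' computation.

```python
# Worked use of the relation PS = -CBF * ln(1 - E) quoted in the abstract,
# plus a sensitivity check showing why errors in E are amplified in PS.
import numpy as np

def ps_from_extraction(cbf, extraction):
    """Permeability-surface-area product from flow and extraction fraction."""
    return -cbf * np.log(1.0 - extraction)

cbf_gm = 86.0   # ml/(min . 100g); illustrative gray-matter flow, chosen only so that
                # E = 0.82 gives a PS close to the quoted 148 ml/(min . 100g)
e_gm = 0.82     # gray-matter extraction of O-15 water, from the abstract

print(ps_from_extraction(cbf_gm, e_gm))   # ~147 ml/(min . 100g)

# Shifting E by 0.02 around 0.82 moves PS by roughly 7%.
for e in (0.80, 0.82, 0.84):
    print(e, round(float(ps_from_extraction(cbf_gm, e)), 1))
```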

  2. A-posteriori error estimation for the finite point method with applications to compressible flow

    NASA Astrophysics Data System (ADS)

    Ortega, Enrique; Flores, Roberto; Oñate, Eugenio; Idelsohn, Sergio

    2017-08-01

    An a-posteriori error estimate with application to inviscid compressible flow problems is presented. The estimate is a surrogate measure of the discretization error, obtained from an approximation to the truncation terms of the governing equations. This approximation is calculated from the discrete nodal differential residuals using a reconstructed solution field on a modified stencil of points. Both the error estimation methodology and the flow solution scheme are implemented using the Finite Point Method, a meshless technique enabling higher-order approximations and reconstruction procedures on general unstructured discretizations. The performance of the proposed error indicator is studied and applications to adaptive grid refinement are presented.

  3. Evaluation of probe-induced flow distortion of Campbell CSAT3 sonic anemometers by numerical simulation

    NASA Astrophysics Data System (ADS)

    Mauder, M.; Huq, S.; De Roo, F.; Foken, T.; Manhart, M.; Schmid, H. P. E.

    2017-12-01

    The Campbell CSAT3 sonic anemometer is one of the most widely used instruments for eddy-covariance measurement. However, conflicting estimates for the probe-induced flow distortion error of this instrument have been reported recently, and those error estimates range between 3% and 14% for the measurement of vertical velocity fluctuations. This large discrepancy between the different studies can probably be attributed to the different experimental approaches applied. In order to overcome the limitations of both field intercomparison experiments and wind tunnel experiments, we propose a new approach that relies on virtual measurements in a large-eddy simulation (LES) environment. In our experimental set-up, we generate horizontal and vertical velocity fluctuations at frequencies that typically dominate the turbulence spectra of the surface layer. The probe-induced flow distortion error of a CSAT3 is then quantified by this numerical wind tunnel approach while the statistics of the prescribed inflow signal are taken as reference or etalon. The resulting relative error is found to range from 3% to 7% and from 1% to 3% for the standard deviation of the vertical and the horizontal velocity component, respectively, depending on the orientation of the CSAT3 in the flow field. We further demonstrate that these errors are independent of the frequency of fluctuations at the inflow of the simulation. The analytical corrections proposed by Kaimal et al. (Proc Dyn Flow Conf, 551-565, 1978) and Horst et al. (Boundary-Layer Meteorol, 155, 371-395, 2015) are compared against our simulated results, and we find that they indeed reduce the error by up to three percentage points. However, these corrections fail to reproduce the azimuth-dependence of the error that we observe. Moreover, we investigate the general Reynolds number dependence of the flow distortion error by more detailed idealized simulations.

  4. Use of micro-lightguide spectrophotometry for evaluation of microcirculation in the small and large intestines of horses without gastrointestinal disease.

    PubMed

    Reichert, Christof; Kästner, Sabine B R; Hopster, Klaus; Rohn, Karl; Rötting, Anna K

    2014-11-01

    To evaluate the use of a micro-lightguide tissue spectrophotometer for measurement of tissue oxygenation and blood flow in the small and large intestines of horses under anesthesia. 13 adult horses without gastrointestinal disease. Horses were anesthetized and placed in dorsal recumbency. Ventral midline laparotomy was performed. Intestinal segments were exteriorized to obtain measurements. Spectrophotometric measurements of tissue oxygenation and regional blood flow of the jejunum and pelvic flexure were obtained under various conditions that were considered to have a potential effect on measurement accuracy. In addition, arterial oxygen saturation at the measuring sites was determined by use of pulse oximetry. 12,791 single measurements of oxygen saturation, relative amount of hemoglobin, and blood flow were obtained. Errors occurred in 381 of 12,791 (2.98%) measurements. Most measurement errors occurred when surgical lights were directed at the measuring site; covering the probe with the surgeon's hand did not eliminate this error source. No measurement errors were observed when the probe was positioned on the intestinal wall with room light, at the mesenteric side, or between the mesenteric and antimesenteric side. Values for blood flow had higher variability, and this was most likely caused by motion artifacts of the intestines. The micro-lightguide spectrophotometry system was easy to use on the small and large intestines of horses and provided rapid evaluation of the microcirculation. Results indicated that measurements should be performed with room light only and intestinal motion should be minimized.

  5. Quantifying radar-rainfall uncertainties in urban drainage flow modelling

    NASA Astrophysics Data System (ADS)

    Rico-Ramirez, M. A.; Liguori, S.; Schellart, A. N. A.

    2015-09-01

    This work presents the results of the implementation of a probabilistic system to model the uncertainty associated with radar rainfall (RR) estimates and the way this uncertainty propagates through the sewer system of an urban area located in the North of England. The spatial and temporal correlations of the RR errors as well as the error covariance matrix were computed to build an RR error model able to generate RR ensembles that reproduce the uncertainty associated with the measured rainfall. The results showed that the RR ensembles provide important information about the uncertainty in the rainfall measurement that can be propagated in the urban sewer system. The results showed that the measured flow peaks and flow volumes are often bounded within the uncertainty area produced by the RR ensembles. In 55% of the simulated events, the uncertainties in RR measurements can explain the uncertainties observed in the simulated flow volumes. However, there are also some events where the RR uncertainty cannot explain the whole uncertainty observed in the simulated flow volumes, indicating that there are additional sources of uncertainty that must be considered, such as the uncertainty in the urban drainage model structure, the uncertainty in the urban drainage model calibrated parameters, and the uncertainty in the measured sewer flows.

  6. In-Bed Accountability Development for a Passively Cooled, Electrically Heated Hydride (PACE) Bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klein, J.E.

    A nominal 1500 STP-L PAssively Cooled, Electrically heated hydride (PACE) Bed has been developed for implementation into a new Savannah River Site tritium project. The 1.2 meter (four-foot) long process vessel contains an internal 'U-tube' for tritium In-Bed Accountability (IBA) measurements. IBA will be performed on six, 12.6 kg production metal hydride storage beds. IBA tests were done on a prototype bed using electric heaters to simulate the radiolytic decay of tritium. Tests had gas flows from 10 to 100 SLPM through the U-tube or 100 SLPM through the bed's vacuum jacket. IBA inventory measurement errors at the 95% confidence level were calculated using the correlation of IBA gas temperature rise, or (hydride) bed temperature rise above ambient temperature, versus simulated tritium inventory. Prototype bed IBA inventory errors at 100 SLPM were the largest for gas flows through the vacuum jacket: 15.2 grams for the bed temperature rise and 11.5 grams for the gas temperature rise. For a 100 SLPM U-tube flow, the inventory error was 2.5 grams using bed temperature rise and 1.6 grams using gas temperature rise. For 50 to 100 SLPM U-tube flows, the IBA gas temperature rise inventory errors were nominally one to two grams, increasing above four grams for flows less than 50 SLPM. For 50 to 100 SLPM U-tube flows, the IBA bed temperature rise inventory errors were greater than the gas temperature rise errors, but similar errors were found for both methods at gas flows of 20, 30, and 40 SLPM. Electric heater IBA tests were done for six production hydride beds using a 45 SLPM U-tube gas flow. Of the duplicate runs performed on these beds, five of the six beds produced IBA inventory errors of approximately three grams, consistent with results obtained in the laboratory prototype tests.

  7. In-Bed Accountability Development for a Passively Cooled, Electrically Heated Hydride (PACE) Bed

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    KLEIN, JAMES

    A nominal 1500 STP-L PAssively Cooled, Electrically heated hydride (PACE) Bed has been developed for implementation into a new Savannah River Site tritium project. The 1.2 meter (four-foot) long process vessel contains an internal ''U-tube'' for tritium In-Bed Accountability (IBA) measurements. IBA will be performed on six, 12.6 kg production metal hydride storage beds. IBA tests were done on a prototype bed using electric heaters to simulate the radiolytic decay of tritium. Tests had gas flows from 10 to 100 SLPM through the U-tube or 100 SLPM through the bed's vacuum jacket. IBA inventory measurement errors at the 95 percent confidence level were calculated using the correlation of IBA gas temperature rise, or (hydride) bed temperature rise above ambient temperature, versus simulated tritium inventory. Prototype bed IBA inventory errors at 100 SLPM were the largest for gas flows through the vacuum jacket: 15.2 grams for the bed temperature rise and 11.5 grams for the gas temperature rise. For a 100 SLPM U-tube flow, the inventory error was 2.5 grams using bed temperature rise and 1.6 grams using gas temperature rise. For 50 to 100 SLPM U-tube flows, the IBA gas temperature rise inventory errors were nominally one to two grams, increasing above four grams for flows less than 50 SLPM. For 50 to 100 SLPM U-tube flows, the IBA bed temperature rise inventory errors were greater than the gas temperature rise errors, but similar errors were found for both methods at gas flows of 20, 30, and 40 SLPM. Electric heater IBA tests were done for six production hydride beds using a 45 SLPM U-tube gas flow. Of the duplicate runs performed on these beds, five of the six beds produced IBA inventory errors of approximately three grams, consistent with results obtained in the laboratory prototype tests.

  8. Determination of the precision error of the pulmonary artery thermodilution catheter using an in vitro continuous flow test rig.

    PubMed

    Yang, Xiao-Xing; Critchley, Lester A; Joynt, Gavin M

    2011-01-01

    Thermodilution cardiac output using a pulmonary artery catheter is the reference method against which all new methods of cardiac output measurement are judged. However, thermodilution lacks precision and has a quoted precision error of ± 20%. There is uncertainty about its true precision and this causes difficulty when validating new cardiac output technology. Our aim in this investigation was to determine the current precision error of thermodilution measurements. A test rig through which water circulated at different constant rates with ports to insert catheters into a flow chamber was assembled. Flow rate was measured by an externally placed transonic flowprobe and meter. The meter was calibrated by timed filling of a cylinder. Arrow and Edwards 7Fr thermodilution catheters, connected to a Siemens SC9000 cardiac output monitor, were tested. Thermodilution readings were made by injecting 5 mL of ice-cold water. Precision error was divided into random and systematic components, which were determined separately. Between-readings (random) variability was determined for each catheter by taking sets of 10 readings at different flow rates. Coefficient of variation (CV) was calculated for each set and averaged. Between-catheter systems (systematic) variability was derived by plotting calibration lines for sets of catheters. Slopes were used to estimate the systematic component. Performances of 3 cardiac output monitors were compared: Siemens SC9000, Siemens Sirecust 1261, and Philips MP50. Five Arrow and 5 Edwards catheters were tested using the Siemens SC9000 monitor. Flow rates between 0.7 and 7.0 L/min were studied. The CV (random error) for Arrow was 5.4% and for Edwards was 4.8%. The random precision error was ± 10.0% (95% confidence limits). CV (systematic error) was 5.8% and 6.0%, respectively. The systematic precision error was ± 11.6%. The total precision error of a single thermodilution reading was ± 15.3% and ± 13.0% for triplicate readings. Precision error increased by 45% when using the Sirecust monitor and 100% when using the Philips monitor. In vitro testing of pulmonary artery catheters enabled us to measure both the random and systematic error components of thermodilution cardiac output measurement, and thus calculate the precision error. Using the Siemens monitor, we established a precision error of ± 15.3% for single and ± 13.0% for triplicate reading, which was similar to the previous estimate of ± 20%. However, this precision error was significantly worsened by using the Sirecust and Philips monitors. Clinicians should recognize that the precision error of thermodilution cardiac output is dependent on the selection of catheter and monitor model.
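
    The quoted figures can be reconstructed by combining the random and systematic components in quadrature and reducing the random part by √3 for triplicate readings; this combination rule is an assumption that happens to reproduce the abstract's numbers and may differ in detail from the authors' calculation.

```python
# Worked reconstruction of the quoted precision errors. The root-sum-square
# combination and the 1/sqrt(3) reduction for triplicate readings are assumed.
import math

cv_random = 0.051       # average of the 5.4% (Arrow) and 4.8% (Edwards) CVs
cv_systematic = 0.059   # average of the 5.8% and 6.0% between-catheter CVs

random_error = 1.96 * cv_random            # 95% limits -> ~10.0%
systematic_error = 1.96 * cv_systematic    # -> ~11.6%

single = math.hypot(random_error, systematic_error)
triplicate = math.hypot(random_error / math.sqrt(3), systematic_error)
print(f"single reading:  ±{100 * single:.1f}%")      # ~±15.3%
print(f"triplicate mean: ±{100 * triplicate:.1f}%")  # ~±13.0%
```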

  9. Flow tilt angles near forest edges - Part 2: Lidar anemometry

    NASA Astrophysics Data System (ADS)

    Dellwik, E.; Mann, J.; Bingöl, F.

    2010-05-01

    A novel way of estimating near-surface mean flow tilt angles from ground based Doppler lidar measurements is presented. The results are compared with traditional mast based in-situ sonic anemometry. The tilt angle assessed with the lidar is based on 10 or 30 min mean values of the velocity field from a conically scanning lidar. In this mode of measurement, the lidar beam is rotated in a circle by a prism with a fixed angle to the vertical at varying focus distances. By fitting a trigonometric function to the scans, the mean vertical velocity can be estimated. Lidar measurements from (1) a fetch-limited beech forest site taken at 48-175 m a.g.l. (above ground level), (2) a reference site in flat agricultural terrain and (3) a second reference site in complex terrain are presented. The method to derive flow tilt angles and mean vertical velocities from lidar has several advantages compared to sonic anemometry; there is no flow distortion caused by the instrument itself, there are no temperature effects and the instrument misalignment can be corrected for by assuming zero tilt angle at high altitudes. Contrary to mast-based instruments, the lidar measures the wind field with the exact same alignment error at a multitude of heights. Disadvantages with estimating vertical velocities from a lidar compared to mast-based measurements are potentially slightly increased levels of statistical errors due to limited sampling time, because the sampling is disjunct, and a requirement for homogeneous flow. The estimated mean vertical velocity is biased if the flow over the scanned circle is not homogeneous. It is demonstrated that the error on the mean vertical velocity due to flow inhomogeneity can be approximated by a function of the angle of the lidar beam to the vertical and the vertical gradient of the mean vertical velocity, whereas the error due to flow inhomogeneity on the horizontal mean wind speed is independent of the lidar beam angle. For the presented measurements over forest, it is evaluated that the systematic error due to the inhomogeneity of the flow is less than 0.2°. The results of the vertical conical scans were promising, and yielded positive flow angles for a sector where the forest is fetch-limited. However, more data and analysis are needed for a complete evaluation of the lidar technique.

  10. Quantification of error associated with stormwater and wastewater flow measurement devices

    EPA Science Inventory

    A novel flow testbed has been designed to evaluate the performance of flumes as flow measurement devices. The newly constructed testbed produces both steady and unsteady flows ranging from 10 to 1500 gpm. Two types of flumes (Parshall and trapezoidal) are evaluated under differen...

  11. A methodology to reduce uncertainties in the high-flow portion of a rating curve

    USDA-ARS?s Scientific Manuscript database

    Flow monitoring at watershed scale relies on the establishment of a rating curve that describes the relationship between stage and flow and is developed from actual flow measurements at various stages. Measurement errors increase with out-of-bank flow conditions because of safety concerns and diffic...

  12. Wind tunnel seeding particles for laser velocimeter

    NASA Technical Reports Server (NTRS)

    Ghorieshi, Anthony

    1992-01-01

    The design of an optimal airfoil has been a major challenge for aerospace industries. The main objective is to reduce the drag force while increasing the lift force in various environmental air conditions. Experimental verification of theoretical and computational results is a crucial part of the analysis because of errors buried in the solutions, due to the assumptions made in theoretical work. Experimental studies are an integral part of a good design procedure; however, empirical data are not always error free due to environmental obstacles or poor execution, etc. The reduction of errors in empirical data is a major challenge in wind tunnel testing. One of the recent advances of particular interest is the use of a non-intrusive measurement technique known as laser velocimetry (LV), which allows for obtaining quantitative flow data without introducing flow disturbing probes. The laser velocimeter technique is based on measurement of light scattered by the particles present in the flow, not on the velocity of the flow itself. Therefore, for an accurate flow velocity measurement with laser velocimeters, two criteria are investigated: (1) how well the particles track the local flow field, and (2) the requirement of light scattering efficiency to obtain signals with the LV. In order to demonstrate the concept of predicting the flow velocity by velocity measurement of particle seeding, the theoretical velocity of the gas flow is computed and compared with the experimentally obtained velocity of the seed particles.

  13. Identification of Carbon loss in the production of pilot-scale Carbon nanotube using gauze reactor

    NASA Astrophysics Data System (ADS)

    Wulan, P. P. D. K.; Purwanto, W. W.; Yeni, N.; Lestari, Y. D.

    2018-03-01

    Carbon loss of more than 65% was a major obstacle in Carbon Nanotube (CNT) production using the gauze pilot-scale reactor. The results showed that the initial carbon loss calculation is 27.64%. The carbon loss calculation was then refined with corrections for: product flow rate measurement error, feed flow rate changes, gas product composition by Gas Chromatography Flame Ionization Detector (GC FID), and carbon particulate collected by glass fiber filters. Error in the product flow rate due to measurement with bubble soap contributes an uncertainty of about ±4.14% to the calculated carbon loss. Changes in the feed flow rate due to CNT growth in the reactor reduce the carbon loss by 4.97%. The detection of secondary hydrocarbons with GC FID during the CNT production process reduces the carbon loss by 5.14%. Particulates carried by the product stream are very few and correct the carbon loss by only about 0.05%. Taking all the factors into account, the amount of carbon loss within this study is (17.21 ± 4.14)%. Assuming that 4.14% of the carbon loss is due to the product flow rate measurement error, the amount of carbon loss is 13.07%. This means that more than 57% of the carbon loss within this study has been identified.

  14. 40 CFR 92.107 - Fuel flow measurement.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    .... (iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... 40 Protection of Environment 20 2014-07-01 2013-07-01 true Fuel flow measurement. 92.107 Section...) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.107 Fuel flow...

  15. 40 CFR 92.107 - Fuel flow measurement.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    .... (iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... 40 Protection of Environment 21 2013-07-01 2013-07-01 false Fuel flow measurement. 92.107 Section...) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.107 Fuel flow...

  16. 40 CFR 92.107 - Fuel flow measurement.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    .... (iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... 40 Protection of Environment 20 2010-07-01 2010-07-01 false Fuel flow measurement. 92.107 Section...) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.107 Fuel flow...

  17. 40 CFR 92.107 - Fuel flow measurement.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    .... (iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... 40 Protection of Environment 20 2011-07-01 2011-07-01 false Fuel flow measurement. 92.107 Section...) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.107 Fuel flow...

  18. 40 CFR 92.107 - Fuel flow measurement.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    .... (iii) If the mass of fuel consumed is measured electronically (load cell, load beam, etc.), the error... 40 Protection of Environment 21 2012-07-01 2012-07-01 false Fuel flow measurement. 92.107 Section...) CONTROL OF AIR POLLUTION FROM LOCOMOTIVES AND LOCOMOTIVE ENGINES Test Procedures § 92.107 Fuel flow...

  19. A feasibility study of color flow Doppler vectorization for automated blood flow monitoring.

    PubMed

    Schorer, R; Badoual, A; Bastide, B; Vandebrouck, A; Licker, M; Sage, D

    2017-12-01

    An ongoing issue in vascular medicine is the measurement of blood flow. Catheterization remains the gold standard measurement method, although non-invasive techniques are an area of intense research. We hereby present a computational method for real-time measurement of the blood flow from color flow Doppler data, with a focus on simplicity and monitoring instead of diagnostics. We then analyze the performance of a proof-of-principle software implementation. We devised a geometrical model geared towards blood flow computation from a color flow Doppler signal, and we developed a software implementation requiring only a standard diagnostic ultrasound device. Detection performance was evaluated by computing flow and its determinants (flow speed, vessel area, and ultrasound beam angle of incidence) on purposely designed synthetic and phantom-based arterial flow simulations. Flow was appropriately detected in all cases. Errors on synthetic images ranged from nonexistent to substantial depending on experimental conditions. Mean errors on measurements from our phantom flow simulation ranged from 1.2 to 40.2% for angle estimation, and from 3.2 to 25.3% for real-time flow estimation. This study is a proof of concept showing that accurate measurement can be done from automated color flow Doppler signal extraction, providing the industry the opportunity for further optimization using raw ultrasound data.
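
    As a rough illustration of how the three determinants named above (flow speed, vessel area, beam angle) combine, the sketch below uses a generic angle-corrected velocity-times-area estimate. The cosine correction and all numbers are assumptions for illustration, not the authors' vectorization algorithm.

```python
# Generic volume flow estimate from Doppler-measured speed, vessel size, and
# beam angle. All values are assumed; this is not the paper's method.
import math

def volume_flow_ml_min(v_doppler_m_s, vessel_diameter_m, beam_angle_deg):
    """Angle-corrected mean speed times a circular cross-section, in mL/min."""
    v_true = v_doppler_m_s / math.cos(math.radians(beam_angle_deg))
    area = math.pi * (vessel_diameter_m / 2.0) ** 2
    return v_true * area * 6.0e7   # m^3/s -> mL/min

# Assumed example: 0.20 m/s measured along the beam, 6 mm vessel, 60 degree angle.
print(volume_flow_ml_min(0.20, 6e-3, 60.0))   # ~680 mL/min
# Using 65 degrees instead of the true 60 would inflate the estimate by ~18%,
# which is why the angle-estimation errors quoted above matter so much.
```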

  20. An Evaluation of the Measurement Requirements for an In-Situ Wake Vortex Detection System

    NASA Technical Reports Server (NTRS)

    Fuhrmann, Henri D.; Stewart, Eric C.

    1996-01-01

    Results of a numerical simulation are presented to determine the feasibility of estimating the location and strength of a wake vortex from imperfect in-situ measurements. These estimates could be used to provide information to a pilot on how to avoid a hazardous wake vortex encounter. An iterative algorithm based on the method of secants was used to solve the four simultaneous equations describing the two-dimensional flow field around a pair of parallel counter-rotating vortices of equal and constant strength. The flow field information used by the algorithm could be derived from measurements from flow angle sensors mounted on the wing-tip of the detecting aircraft and an inertial navigation system. The study determined the propagated errors in the estimated location and strength of the vortex which resulted from random errors added to theoretically perfect measurements. The results are summarized in a series of charts and a table which make it possible to estimate these propagated errors for many practical situations. The situations include several generator-detector airplane combinations, different distances between the vortex and the detector airplane, as well as different levels of total measurement error.

  1. Measurement of static pressure on aircraft

    NASA Technical Reports Server (NTRS)

    Gracey, William

    1958-01-01

    Existing data on the errors involved in the measurement of static pressure by means of static-pressure tubes and fuselage vents are presented. The errors associated with the various design features of static-pressure tubes are discussed for the condition of zero angle of attack and for the case where the tube is inclined to flow. Errors which result from variations in the configuration of static-pressure vents are also presented. Errors due to the position of a static-pressure tube in the flow field of the airplane are given for locations ahead of the fuselage nose, ahead of the wing tip, and ahead of the vertical tail fin. The errors of static-pressure vents on the fuselage of an airplane are also presented. Various methods of calibrating static-pressure installations in flight are briefly discussed.

  2. Inferences of the deep solar meridional flow

    NASA Astrophysics Data System (ADS)

    Böning, Vincent G. A.

    2017-10-01

    Understanding the solar meridional flow is important for uncovering the origin of the solar activity cycle. Yet, recent helioseismic estimates of this flow have come to conflicting conclusions in deeper layers of the solar interior, i.e., at depths below about 0.9 solar radii. The aim of this thesis is to contribute to a better understanding of the deep solar meridional flow. Time-distance helioseismology is the major method for investigating this flow. In this method, travel times of waves propagating between pairs of locations on the solar surface are measured. Until now, the travel-time measurements have been modeled using the ray approximation, which assumes that waves travel along infinitely thin ray paths between these locations. In contrast, the scattering of the full wave field in the solar interior due to the flow is modeled in first order by the Born approximation. It is in general a more accurate model of the physics in the solar interior. In a first step, an existing model for calculating the sensitivity of travel-time measurements to solar interior flows using the Born approximation is extended from Cartesian to spherical geometry. The results are successfully compared to the Cartesian ones and are tested for self-consistency. In a second step, the newly developed model is validated using an existing numerical simulation of linear wave propagation in the Sun. An inversion of artificial travel times for meridional flow shows excellent agreement for noiseless data and reproduces many features in the input flow profile in the case of noisy data. Finally, the new method is used to infer the deep meridional flow. I used Global Oscillation Network Group (GONG) data that were earlier analyzed using the ray approximation and I employed the same Subtractive Optimized Local Averaging (SOLA) inversion technique as in the earlier study. Using an existing formula for the covariance of travel-time measurements, it is shown that the assumption of uncorrelated errors from earlier studies leads to errors in the inverted flows being underestimated by a factor of about two to four. The inverted meridional flow above about 0.85 solar radii confirms the earlier results from ray theory regarding the general pattern of the flow, especially regarding a shallow return flow at about 0.9 solar radii, with some differences in the magnitude of the flow. Below about 0.85 solar radii, the inversion result depends on the thresholds used in the singular value decomposition. One result is again similar to the original regarding its general single-cell shape. Other results show a multi-cell structure in the southern hemisphere with two or three cells stacked radially. However, both the single-cell and the multi-cell flow profiles are consistent with the measured travel times within the measurement errors. To reach an unambiguous conclusion on the meridional flow below about 0.85 solar radii, the errors in the measured travel times have to be decreased considerably in future studies. For now, I conclude that the existing controversy of recent measurements of the deep meridional flow is eased when the associated errors are properly taken into account.

  3. A methodology to reduce uncertainties in the high-flow portion of the rating curve for Goodwater Creek Watershed

    USDA-ARS?s Scientific Manuscript database

    Flow monitoring at watershed scale relies on the establishment of a rating curve that describes the relationship between stage and flow and is developed from actual flow measurements at various stages. Measurement errors increase with out-of-bank flow conditions because of safety concerns and diffic...

  4. Systematic error of diode thermometer.

    PubMed

    Iskrenovic, Predrag S

    2009-08-01

    Semiconductor diodes are often used for measuring temperature. The forward voltage across a diode decreases approximately linearly with increasing temperature. The method usually applied is also the simplest one: a constant direct current flows through the diode, and the voltage is measured at the diode terminals. The direct current that puts the diode into its operating mode also heats it up. The increase in temperature of the diode sensor, i.e., the systematic error due to self-heating, depends predominantly on the current intensity and also on other factors. This paper presents measurements of the systematic error caused by heating from the forward-bias current. The measurements were made on several diodes over a wide range of bias current intensities.
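
    The dominant self-heating term described above can be put into a one-line worked example (the thermal resistance and bias values below are hypothetical):

    ```python
    def diode_self_heating_error(bias_current_a, forward_voltage_v, r_thermal_k_per_w):
        """Systematic temperature error of a diode thermometer due to self-heating.

        The dissipated power P = I * V_f raises the junction temperature by
        dT = P * R_th, where R_th is the junction-to-ambient thermal resistance.
        """
        return bias_current_a * forward_voltage_v * r_thermal_k_per_w

    # Example: 1 mA bias, 0.6 V forward drop, 200 K/W thermal resistance
    print(diode_self_heating_error(1e-3, 0.6, 200.0))  # 0.12 K systematic error
    ```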

  5. Peeling Away Timing Error in NetFlow Data

    NASA Astrophysics Data System (ADS)

    Trammell, Brian; Tellenbach, Bernhard; Schatzmann, Dominik; Burkhart, Martin

    In this paper, we characterize, quantify, and correct timing errors introduced into network flow data by collection and export via Cisco NetFlow version 9. We find that while some of these sources of error (clock skew, export delay) are generally implementation-dependent and known in the literature, there is an additional cyclic error of up to one second that is inherent to the design of the export protocol. We present a method for correcting this cyclic error in the presence of clock skew and export delay. In an evaluation using traffic with known timing collected from a national-scale network, we show that this method can successfully correct the cyclic error. However, there can also be other implementation-specific errors for which insufficient information remains for correction. On the routers we have deployed in our network, this limits the accuracy to about 70 ms, reinforcing the point that implementation matters when conducting research on network measurement data.

  6. Fringe localization requirements for three-dimensional flow visualization of shock waves in diffuse-illumination double-pulse holographic interferometry

    NASA Technical Reports Server (NTRS)

    Decker, A. J.

    1982-01-01

    A theory of fringe localization in rapid-double-exposure, diffuse-illumination holographic interferometry was developed. The theory was then applied to compare holographic measurements with laser anemometer measurements of shock locations in a transonic axial-flow compressor rotor. The computed fringe localization error was found to agree well with the measured localization error. It is shown how the view orientation and the curvature and positional variation of the strength of a shock wave are used to determine the localization error and to minimize it. In particular, it is suggested that the view direction not deviate from tangency at the shock surface by more than 30 degrees.

  7. Lens or Prism? Patent Citations as a Measure of Knowledge Flows from Public Research

    PubMed Central

    Roach, Michael; Cohen, Wesley M.

    2013-01-01

    This paper assesses the validity and accuracy of firms’ backward patent citations as a measure of knowledge flows from public research by employing a newly constructed dataset that matches patents to survey data at the level of the R&D lab. Using survey-based measures of the dimensions of knowledge flows, we identify sources of systematic measurement error associated with backward citations to both patent and nonpatent references. We find that patent citations reflect the codified knowledge flows from public research, but they appear to miss knowledge flows that are more private and contract-based in nature, as well as those used in firm basic research. We also find that firms’ patenting and citing strategies affect patent citations, making citations less indicative of knowledge flows. In addition, an illustrative analysis examining the magnitude and direction of measurement error bias suggests that measuring knowledge flows with patent citations can lead to substantial underestimation of the effect of public research on firms’ innovative performance. Throughout our analyses we find that nonpatent references (e.g., journals, conferences, etc.), not the more commonly used patent references, are a better measure of knowledge originating from public research. PMID:24470690

  8. Analysis of low flows and selected methods for estimating low-flow characteristics at partial-record and ungaged stream sites in western Washington

    USGS Publications Warehouse

    Curran, Christopher A.; Eng, Ken; Konrad, Christopher P.

    2012-01-01

    Regional low-flow regression models for estimating Q7,10 at ungaged stream sites are developed from the records of daily discharge at 65 continuous gaging stations (including 22 discontinued gaging stations) for the purpose of evaluating explanatory variables. By incorporating the base-flow recession time constant τ as an explanatory variable in the regression model, the root-mean square error for estimating Q7,10 at ungaged sites can be lowered to 72 percent (for known values of τ), which is 42 percent less than if only basin area and mean annual precipitation are used as explanatory variables. If partial-record sites are included in the regression data set, τ must be estimated from pairs of discharge measurements made during continuous periods of declining low flows. Eight measurement pairs are optimal for estimating τ at partial-record sites, and result in a lowering of the root-mean square error by 25 percent. A low-flow survey strategy that includes paired measurements at partial-record sites requires additional effort and planning beyond a standard strategy, but could be used to enhance regional estimates of τ and potentially reduce the error of regional regression models for estimating low-flow characteristics at ungaged sites.
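
    A sketch of the kind of log-linear regional regression described above, with τ included as an explanatory variable (all numbers are made up for illustration; the report's actual regression data and coefficients differ):

    ```python
    import numpy as np

    # Hypothetical gaged basins: drainage area (mi^2), mean annual precipitation (in),
    # base-flow recession time constant tau (days), and the observed Q7,10 (ft^3/s).
    area = np.array([12.0, 55.0, 130.0, 210.0, 480.0])
    precip = np.array([38.0, 42.0, 45.0, 40.0, 50.0])
    tau = np.array([8.0, 15.0, 22.0, 18.0, 35.0])
    q710 = np.array([0.4, 3.1, 12.0, 9.5, 60.0])

    # log Q7,10 = b0 + b1*log(A) + b2*log(P) + b3*log(tau), fit by least squares
    X = np.column_stack([np.ones_like(area), np.log(area), np.log(precip), np.log(tau)])
    b, *_ = np.linalg.lstsq(X, np.log(q710), rcond=None)

    # Estimate Q7,10 at an ungaged site where A, P, and tau have been estimated
    x_new = np.array([1.0, np.log(75.0), np.log(43.0), np.log(20.0)])
    print(np.exp(x_new @ b))
    ```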

  9. Arterial Blood Flow Measurement Using Digital Subtraction Angiography (DSA)

    NASA Astrophysics Data System (ADS)

    Swanson, David K.; Myerowitz, P. David; Van Lysel, Michael S.; Peppler, Walter W.; Fields, Barry L.; Watson, Kim M.; O'Connor, Julia

    1984-08-01

    Standard angiography demonstrates the anatomy of arterial occlusive disease but not its physiological significance. Using intravenous digital subtraction angiography (DSA), we investigated transit-time videodensitometric techniques in measuring femoral arterial flows in dogs. These methods have been successfully applied to intraarterial DSA but not to intravenous DSA. Eight 20 kg dogs were instrumented with an electromagnetic flow probe and a balloon occluder above an imaged segment of femoral artery. 20 cc of Renografin 76 was power injected at 15 cc/sec into the right atrium. Flow in the femoral artery was varied by partial balloon occlusion or peripheral dilatation following induced ischemia, resulting in 51 flow measurements ranging from 15 to 270 cc/min. Three different transit-time techniques were studied: cross-correlation, mean square error, and two leading-edge methods. Correlation between videodensitometry and flowmeter measurements using these different techniques ranged from 0.78 to 0.88 with a mean square error of 29 to 37 cc/min. Blood flow information using several different transit-time techniques can be obtained with intravenous DSA.
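
    A sketch of the transit-time principle underlying such videodensitometric flow estimates (the cross-correlation variant only; names and numbers are illustrative, not taken from the study):

    ```python
    import numpy as np

    def transit_time_flow(density_upstream, density_downstream, dt_s, segment_volume_ml):
        """Flow (mL/min) from the lag between two contrast density-time curves.

        Cross-correlates the curves from two regions of interest, converts the lag
        of the correlation peak to a mean transit time, and divides the blood volume
        of the intervening arterial segment by that transit time.
        """
        a = density_upstream - density_upstream.mean()
        b = density_downstream - density_downstream.mean()
        corr = np.correlate(b, a, mode="full")            # lag of downstream vs upstream
        lag_samples = np.argmax(corr) - (len(a) - 1)
        transit_time_s = lag_samples * dt_s
        return segment_volume_ml / transit_time_s * 60.0

    # Synthetic curves at 30 frames/s with the downstream bolus delayed by 0.5 s
    t = np.arange(0, 5, 1 / 30)
    upstream = np.exp(-((t - 1.5) ** 2) / 0.2)
    downstream = np.exp(-((t - 2.0) ** 2) / 0.2)
    print(transit_time_flow(upstream, downstream, 1 / 30, 0.8))   # ~96 mL/min
    ```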

  10. Porous plug for reducing orifice induced pressure error in airfoils

    NASA Technical Reports Server (NTRS)

    Plentovich, Elizabeth B. (Inventor); Gloss, Blair B. (Inventor); Eves, John W. (Inventor); Stack, John P. (Inventor)

    1988-01-01

    A porous plug is provided for the reduction or elimination of positive error caused by the orifice during static pressure measurements of airfoils. The porous plug is press fitted into the orifice, thereby preventing the error caused either by fluid flow turning into the exposed orifice or by the fluid flow stagnating at the downstream edge of the orifice. In addition, the porous plug is made flush with the outer surface of the airfoil, by filing and polishing, to provide a smooth surface which alleviates the error caused by imperfections in the orifice. The porous plug is preferably made of sintered metal, which allows air to pass through the pores, so that the static pressure measurements can be made by remote transducers.

  11. A field technique for estimating aquifer parameters using flow log data

    USGS Publications Warehouse

    Paillet, Frederick L.

    2000-01-01

    A numerical model is used to predict flow along intervals between producing zones in open boreholes for comparison with measurements of borehole flow. The model gives flow under quasi-steady conditions as a function of the transmissivity and hydraulic head in an arbitrary number of zones communicating with each other along open boreholes. The theory shows that the amount of inflow to or outflow from the borehole under any one flow condition may not indicate relative zone transmissivity. A unique inversion for both hydraulic-head and transmissivity values is possible if flow is measured under two different conditions such as ambient and quasi-steady pumping, and if the difference in open-borehole water level between the two flow conditions is measured. The technique is shown to give useful estimates of water levels and transmissivities of two or more water-producing zones intersecting a single interval of open borehole under typical field conditions. Although the modeling technique involves some approximation, the principal limit on the accuracy of the method under field conditions is the measurement error in the flow log data. Flow measurements and pumping conditions are usually adjusted so that transmissivity estimates are most accurate for the most transmissive zones, and relative measurement error is proportionately larger for less transmissive zones. The most effective general application of the borehole-flow model results when the data are fit to models that systematically include more production zones of progressively smaller transmissivity values until model results show that all accuracy in the data set is exhausted.
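
    For a single zone, the two-condition inversion described above reduces to two linear equations in two unknowns. A minimal sketch under an assumed quasi-steady linear inflow model (all values are illustrative):

    ```python
    def invert_zone(q_ambient, q_pumping, wl_ambient, wl_pumping):
        """Invert one producing zone from flow-log data taken under two conditions.

        Assumes q = a * (h_zone - h_borehole), where a is proportional to the zone
        transmissivity; q is the zone's inflow to the borehole, and h_borehole is
        the open-borehole water level. Returns (a, h_zone).
        """
        a = (q_pumping - q_ambient) / (wl_ambient - wl_pumping)
        h_zone = wl_ambient + q_ambient / a
        return a, h_zone

    # Example: 2 L/min inflow under ambient conditions, 10 L/min while pumping,
    # with the open-borehole water level drawn down from 100 m to 98 m.
    a, h = invert_zone(2.0, 10.0, wl_ambient=100.0, wl_pumping=98.0)
    print(a, h)   # 4.0 L/min per metre of head difference, zone head 100.5 m
    ```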

  12. Seepage investigation on selected reaches of Fish Creek, Teton County, Wyoming, 2004

    USGS Publications Warehouse

    Wheeler, Jerrod D.; Eddy-Miller, Cheryl A.

    2005-01-01

    A seepage investigation was conducted on Fish Creek, a tributary to the Snake River in Teton County in western Wyoming, near Wilson. Mainstem, return flow, tributary, spring, and diversion sites were selected and measured on six reaches along Fish Creek. Flow was measured under two flow regimes, high flow in August 2004 and base flow in November 2004. During August 17-19, 2004, 20 sites had quantifiable discharge with median values ranging from 0.93 to 384 ft3/s for the 14 mainstem sites on Fish Creek, and from 0.35 to 12.2 ft3/s for the 5 return, spring, and tributary sites (inflows). The discharge was 2.23 ft3/s for the single diversion site (outflow). Estimated gains or losses from ground water were calculated for all reaches using the median discharge values and the estimated measurement errors. Reach 1 had a calculated gain in discharge from ground water (23.8 ± 3.3 ft3/s). Reaches 2-6 had no calculated gains in flow, greater than the estimated error, that could be attributed to ground water. A second set of measurements was made under base-flow conditions during November 3-4, 2004. Twelve of the 20 sites visited in August 2004 were flowing and were measured. All of the Reach 1 sites near Teton Village were dry. Median discharge values ranged from 10.3 to 70.0 ft3/s on the nine Fish Creek mainstem sites, and from 2.32 to 3.71 ft3/s on the three return, spring, and tributary sites (inflows). Reaches 2, 3 and 6 had a gain from ground water. Reaches 4 and 5 had no calculated gains in flow, greater than the estimated error, that could be attributed to ground water.
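
    The gain-loss bookkeeping and the error criterion used in such seepage runs can be sketched as follows (the numbers below are illustrative only, not the published values):

    ```python
    import numpy as np

    def reach_gain(q_downstream, q_upstream, inflows, outflows, errors):
        """Ground-water gain (+) or loss (-) along a reach and its combined error.

        The gain is the downstream discharge minus the upstream discharge and the
        measured inflows, plus the measured diversions; the combined measurement
        error is taken as the root sum of squares of the individual errors.
        """
        gain = q_downstream - q_upstream - sum(inflows) + sum(outflows)
        combined_error = float(np.sqrt(np.sum(np.square(errors))))
        return gain, combined_error

    gain, err = reach_gain(384.0, 355.0, inflows=[3.0], outflows=[2.2],
                           errors=[19.2, 17.8, 0.3, 0.2])
    # A gain is attributed to ground water only if it exceeds the combined error.
    print(gain, err)   # 28.2 ft3/s gain vs ~26.2 ft3/s combined error
    ```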

  13. Estimation of selected streamflow statistics for a network of low-flow partial-record stations in areas affected by Base Realignment and Closure (BRAC) in Maryland

    USGS Publications Warehouse

    Ries, Kernell G.; Eng, Ken

    2010-01-01

    The U.S. Geological Survey, in cooperation with the Maryland Department of the Environment, operated a network of 20 low-flow partial-record stations during 2008 in a region that extends from southwest of Baltimore to the northeastern corner of Maryland to obtain estimates of selected streamflow statistics at the station locations. The study area is expected to face a substantial influx of new residents and businesses as a result of military and civilian personnel transfers associated with the Federal Base Realignment and Closure Act of 2005. The estimated streamflow statistics, which include monthly 85-percent duration flows, the 10-year recurrence-interval minimum base flow, and the 7-day, 10-year low flow, are needed to provide a better understanding of the availability of water resources in the area to be affected by base-realignment activities. Streamflow measurements collected for this study at the low-flow partial-record stations and measurements collected previously for 8 of the 20 stations were related to concurrent daily flows at nearby index streamgages to estimate the streamflow statistics. Three methods were used to estimate the streamflow statistics and two methods were used to select the index streamgages. Of the three methods used to estimate the streamflow statistics, two of them--the Moments and MOVE1 methods--rely on correlating the streamflow measurements at the low-flow partial-record stations with concurrent streamflows at nearby, hydrologically similar index streamgages to determine the estimates. These methods, recommended for use by the U.S. Geological Survey, generally require about 10 streamflow measurements at the low-flow partial-record station. The third method transfers the streamflow statistics from the index streamgage to the partial-record station based on the average of the ratios of the measured streamflows at the partial-record station to the concurrent streamflows at the index streamgage. This method can be used with as few as one pair of streamflow measurements made on a single streamflow recession at the low-flow partial-record station, although additional pairs of measurements will increase the accuracy of the estimates. Errors associated with the two correlation methods generally were lower than the errors associated with the flow-ratio method, but the advantages of the flow-ratio method are that it can produce reasonably accurate estimates from streamflow measurements much faster and at lower cost than estimates obtained using the correlation methods. The two index-streamgage selection methods were (1) selection based on the highest correlation coefficient between the low-flow partial-record station and the index streamgages, and (2) selection based on Euclidean distance, where the Euclidean distance was computed as a function of geographic proximity and the basin characteristics: drainage area, percentage of forested area, percentage of impervious area, and the base-flow recession time constant, t. Method 1 generally selected index streamgages that were significantly closer to the low-flow partial-record stations than method 2. The errors associated with the estimated streamflow statistics generally were lower for method 1 than for method 2, but the differences were not statistically significant. The flow-ratio method for estimating streamflow statistics at low-flow partial-record stations was shown to be independent from the two correlation-based estimation methods. 
As a result, final estimates were determined for eight low-flow partial-record stations by weighting estimates from the flow-ratio method with estimates from one of the two correlation methods according to the respective variances of the estimates. Average standard errors of estimate for the final estimates ranged from 7.0 to 90.0 percent, with an average value of 26.5 percent. Average standard errors of estimate for the weighted estimates were, on average, 4.3 percent less than the best average standard errors of estimate.
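
    The variance-based weighting of the two estimates mentioned above amounts to standard inverse-variance weighting. A brief sketch with hypothetical numbers:

    ```python
    def weight_by_variance(est_ratio, var_ratio, est_corr, var_corr):
        """Combine the flow-ratio and correlation-method estimates of a streamflow
        statistic by weighting each with the inverse of its variance."""
        w_ratio, w_corr = 1.0 / var_ratio, 1.0 / var_corr
        weighted = (w_ratio * est_ratio + w_corr * est_corr) / (w_ratio + w_corr)
        weighted_var = 1.0 / (w_ratio + w_corr)
        return weighted, weighted_var

    # Hypothetical 7Q10 estimates (ft3/s) and their variances from the two methods
    print(weight_by_variance(1.8, 0.25, 2.2, 0.09))   # -> (about 2.09, about 0.066)
    ```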

  14. Estimation of perspective errors in 2D2C-PIV measurements for 3D concentrated vortices

    NASA Astrophysics Data System (ADS)

    Ma, Bao-Feng; Jiang, Hong-Gang

    2018-06-01

    Two-dimensional planar PIV (2D2C) is still extensively employed in flow measurement owing to its availability and reliability, although more advanced PIVs have been developed. It has long been recognized that there exist perspective errors in velocity fields when employing the 2D2C PIV to measure three-dimensional (3D) flows, the magnitude of which depends on out-of-plane velocity and geometric layouts of the PIV. For a variety of vortex flows, however, the results are commonly represented by vorticity fields, instead of velocity fields. The present study indicates that the perspective error in vorticity fields relies on gradients of the out-of-plane velocity along a measurement plane, instead of the out-of-plane velocity itself. More importantly, an estimation approach to the perspective error in 3D vortex measurements was proposed based on a theoretical vortex model and an analysis on physical characteristics of the vortices, in which the gradient of out-of-plane velocity is uniquely determined by the ratio of the maximum out-of-plane velocity to maximum swirling velocity of the vortex; meanwhile, the ratio has upper limits for naturally formed vortices. Therefore, if the ratio is imposed with the upper limits, the perspective error will only rely on the geometric layouts of PIV that are known in practical measurements. Using this approach, the upper limits of perspective errors of a concentrated vortex can be estimated for vorticity and other characteristic quantities of the vortex. In addition, the study indicates that the perspective errors in vortex location, vortex strength, and vortex radius can be all zero for axisymmetric vortices if they are calculated by proper methods. The dynamic mode decomposition on an oscillatory vortex indicates that the perspective errors of each DMD mode are also only dependent on the gradient of out-of-plane velocity if the modes are represented by vorticity.
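
    The basic geometric origin of the perspective error can be sketched with the usual pinhole-camera approximation (this is a generic textbook relation, not the estimation approach of the paper, and the numbers are illustrative):

    ```python
    def perspective_error_velocity(w_out_of_plane, x_obj, y_obj, z_camera):
        """Apparent in-plane velocity induced by out-of-plane motion in 2D2C PIV.

        For a camera viewing along z at distance z_camera from the laser sheet, an
        out-of-plane velocity w at in-plane position (x_obj, y_obj) measured from
        the optical axis appears as roughly (w*x_obj/z_camera, w*y_obj/z_camera).
        """
        return w_out_of_plane * x_obj / z_camera, w_out_of_plane * y_obj / z_camera

    # Example: 1 m/s out-of-plane velocity, 0.10 m and 0.05 m off-axis, camera 1 m away
    print(perspective_error_velocity(1.0, 0.10, 0.05, 1.0))   # (0.1, 0.05) m/s apparent
    ```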

  15. Two-Dimensional Automatic Measurement for Nozzle Flow Distribution Using Improved Ultrasonic Sensor

    PubMed Central

    Zhai, Changyuan; Zhao, Chunjiang; Wang, Xiu; Wang, Ning; Zou, Wei; Li, Wei

    2015-01-01

    Spray deposition and distribution are affected by many factors, one of which is nozzle flow distribution. A two-dimensional automatic measurement system, which consisted of a conveying unit, a system control unit, an ultrasonic sensor, and a deposition collecting dish, was designed and developed. The system could precisely move an ultrasonic sensor above a pesticide deposition collecting dish to measure the nozzle flow distribution. A sensor sleeve with a PVC tube was designed for the ultrasonic sensor to limit its beam angle in order to measure the liquid level in the small troughs. System performance tests were conducted to verify the designed functions and measurement accuracy. A commercial spray nozzle was also used to measure its flow distribution. The test results showed that the relative error on volume measurement was less than 7.27% when the liquid volume was 2 mL in trough, while the error was less than 4.52% when the liquid volume was 4 mL or more. The developed system was also used to evaluate the flow distribution of a commercial nozzle. It was able to provide the shape and the spraying width of the flow distribution accurately. PMID:26501288

  16. Comparison of gamma densitometry and electrical capacitance measurements applied to hold-up prediction of oil–water flow patterns in horizontal and slightly inclined pipes

    NASA Astrophysics Data System (ADS)

    Perera, Kshanthi; Kumara, W. A. S.; Hansen, Fredrik; Mylvaganam, Saba; Time, Rune W.

    2018-06-01

    Measurement techniques are vital for the control and operation of multiphase oil–water flow in pipes. The development of such techniques depends on laboratory experiments involving flow visualization, liquid fraction (‘hold-up’), phase slip and pressure drop measurements. They provide valuable information by revealing the physics, spatial and temporal structures of complex multiphase flow phenomena. This paper presents the hold-up measurement of oil–water flow in pipelines using gamma densitometry and electrical capacitance tomography (ECT) sensors. The experiments were carried out with different pipe inclinations from −5° to +6° for selected mixture velocities (0.2–1.5 m/s), and at selected watercuts (0.05–0.95). Mineral oil (Exxsol D60) and water were used as test fluids. Nine flow patterns were identified including a new pattern called stratified wavy and mixed interface flow. As a third direct method, visual observations and high-speed videos were used for the flow regime and interface identification. ECT and gamma densitometry hold-up measurements show similar trends for changes in pipeline inclinations. Changing the pipe inclination affected the flow mostly at lower mixture velocities and caused a change of flow patterns, allowing the highest change of hold-up. ECT hold-up measurements overpredict the gamma densitometry measurements at higher input water cuts and underpredict at intermediate water cuts. Gamma hold-up results showed good agreement with the literature results, having a maximum deviation of 6%, while it was as high as 22% for ECT in comparison to gamma densitometry. Uncertainty analysis of the measurement techniques was carried out with single-phase oil flow. This shows that the measurement error associated with gamma densitometry is approximately 3.2%, which includes 1.3% statistical error and 2.9% error identified as electromagnetically induced noise in electronics. Thus, gamma densitometry can predict hold-up with a higher accuracy in comparison to ECT when applied to oil–water systems at minimized electromagnetic noise.

  17. Research on the Conductivity-Based Detection Principles of Bubbles in Two-Phase Flows and the Design of a Bubble Sensor for CBM Wells.

    PubMed

    Wu, Chuan; Wen, Guojun; Han, Lei; Wu, Xiaoming

    2016-09-17

    The parameters of gas-liquid two-phase flow bubbles in field coalbed methane (CBM) wells are of great significance for analyzing coalbed methane output, judging faults in CBM wells, and developing gas drainage and extraction processes, which stimulates an urgent need for detecting bubble parameters for CBM wells in the field. However, existing bubble detectors cannot meet the requirements of the working environments of CBM wells. Therefore, this paper reports findings on the principles of measuring the flow pattern, velocity, and volume of two-phase flow bubbles based on conductivity, from which a new bubble sensor was designed. The structural parameters and other parameters of the sensor were then computed, the "water film phenomenon" produced by the sensor was analyzed, and the appropriate materials for making the sensor were tested and selected. After the sensor was successfully devised, laboratory tests and field tests were performed, and the test results indicated that the sensor was highly reliable and could detect the flow patterns of two-phase flows, as well as the quantities, velocities, and volumes of bubbles. With a velocity measurement error of ±5% and a volume measurement error of ±7%, the sensor can meet the requirements of field use. Finally, the characteristics and deficiencies of the bubble sensor are summarized based on an analysis of the measurement errors and a comparison of existing bubble-measuring devices and the designed sensor.

  18. Research on the Conductivity-Based Detection Principles of Bubbles in Two-Phase Flows and the Design of a Bubble Sensor for CBM Wells

    PubMed Central

    Wu, Chuan; Wen, Guojun; Han, Lei; Wu, Xiaoming

    2016-01-01

    The parameters of gas-liquid two-phase flow bubbles in field coalbed methane (CBM) wells are of great significance for analyzing coalbed methane output, judging faults in CBM wells, and developing gas drainage and extraction processes, which stimulates an urgent need for detecting bubble parameters for CBM wells in the field. However, existing bubble detectors cannot meet the requirements of the working environments of CBM wells. Therefore, this paper reports findings on the principles of measuring the flow pattern, velocity, and volume of two-phase flow bubbles based on conductivity, from which a new bubble sensor was designed. The structural parameters and other parameters of the sensor were then computed, the “water film phenomenon” produced by the sensor was analyzed, and the appropriate materials for making the sensor were tested and selected. After the sensor was successfully devised, laboratory tests and field tests were performed, and the test results indicated that the sensor was highly reliable and could detect the flow patterns of two-phase flows, as well as the quantities, velocities, and volumes of bubbles. With a velocity measurement error of ±5% and a volume measurement error of ±7%, the sensor can meet the requirements of field use. Finally, the characteristics and deficiencies of the bubble sensor are summarized based on an analysis of the measurement errors and a comparison of existing bubble-measuring devices and the designed sensor. PMID:27649206

  19. Simultaneous estimation of aquifer thickness, conductivity, and BC using borehole and hydrodynamic data with geostatistical inverse direct method

    NASA Astrophysics Data System (ADS)

    Gao, F.; Zhang, Y.

    2017-12-01

    A new inverse method is developed to simultaneously estimate aquifer thickness and boundary conditions using borehole and hydrodynamic measurements from a homogeneous confined aquifer under steady-state ambient flow. This method extends a previous groundwater inversion technique which had assumed known aquifer geometry and thickness. In this research, thickness inversion was successfully demonstrated when hydrodynamic data were supplemented with measured thicknesses from boreholes. Based on a set of hybrid formulations which describe approximate solutions to the groundwater flow equation, the new inversion technique can incorporate noisy observed data (i.e., thicknesses, hydraulic heads, Darcy fluxes or flow rates) at measurement locations as a set of conditioning constraints. Given sufficient quantity and quality of the measurements, the inverse method yields a single well-posed system of equations that can be solved efficiently with nonlinear optimization. The method is successfully tested on two-dimensional synthetic aquifer problems with regular geometries. The solution is stable when measurement errors are increased, with error magnitude reaching up to +/- 10% of the range of the respective measurement. When error-free observed data are used to condition the inversion, the estimated thickness is within a +/- 5% error envelope surrounding the true value; when data contain increasing errors, the estimated thickness become less accurate, as expected. Different combinations of measurement types are then investigated to evaluate data worth. Thickness can be inverted with the combination of observed heads and at least one of the other types of observations such as thickness, Darcy fluxes, or flow rates. Data requirement of the new inversion method is thus not much different from that of interpreting classic well tests. Future work will improve upon this research by developing an estimation strategy for heterogeneous aquifers while drawdown data from hydraulic tests will also be incorporated as conditioning measurements.

  20. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells.

    PubMed

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-10-14

    First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, the dynamic experiments of two EMFs in oil-water two-phase flow are carried out. The experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5%, the total flowrate is 5-60 m³/d, and the water-cut is higher than 60%. The maximum absolute value of the full-scale errors is better than 7%, the total flowrate is 2-60 m³/d, and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow.

  1. Novel Downhole Electromagnetic Flowmeter for Oil-Water Two-Phase Flow in High-Water-Cut Oil-Producing Wells

    PubMed Central

    Wang, Yanjun; Li, Haoyu; Liu, Xingbin; Zhang, Yuhui; Xie, Ronghua; Huang, Chunhui; Hu, Jinhai; Deng, Gang

    2016-01-01

    First, the measuring principle, the weight function, and the magnetic field of the novel downhole inserted electromagnetic flowmeter (EMF) are described. Second, the basic design of the EMF is described. Third, the dynamic experiments of two EMFs in oil-water two-phase flow are carried out. The experimental errors are analyzed in detail. The experimental results show that the maximum absolute value of the full-scale errors is better than 5%, the total flowrate is 5–60 m3/d, and the water-cut is higher than 60%. The maximum absolute value of the full-scale errors is better than 7%, the total flowrate is 2–60 m3/d, and the water-cut is higher than 70%. Finally, onsite experiments in high-water-cut oil-producing wells are conducted, and the possible reasons for the errors in the onsite experiments are analyzed. It is found that the EMF can provide an effective technology for measuring downhole oil-water two-phase flow. PMID:27754412

  2. End-diastolic fractional flow reserve: comparison with conventional full-cardiac cycle fractional flow reserve.

    PubMed

    Chalyan, David A; Zhang, Zhang; Takarada, Shigeho; Molloi, Sabee

    2014-02-01

    Diastolic fractional flow reserve (dFFR) has been shown to be highly sensitive for detection of inducible myocardial ischemia. However, its reliance on measurement of left-ventricular pressure for zero-flow pressure correction, as well as manual extraction of the diastolic interval, has been its major limitation. Given previous reports of minimal zero-flow pressure at end-diastole, we compared instantaneous ECG-gated end-diastolic FFR with conventional full-cardiac cycle FFR and other diastolic indices in the porcine model. Measurements of FFR in the left anterior descending and left circumflex arteries were performed in an open-chest swine model with an external occluder device on the coronary artery used to produce varying degrees of epicardial stenosis. An ultrasound flow-probe that was placed proximal to the occluder measured absolute blood flow in ml/min, and it was used as a gold standard for FFR measurement. A total of 17 measurements at maximal hyperemia were acquired in 5 animals. Correlation coefficient between conventional mean hyperemic FFR with pressure-wire and directly measured FFR with flow-probe was 0.876 (standard error estimate=0.069; P<0.0001). The hyperemic end-diastolic FFR with pressure-wire correlated better with FFR measured directly with flow-probe (r=0.941, standard error estimate=0.050; P<0.0001). Instantaneous hyperemic ECG-gated FFR acquired at end-diastole, as compared with conventional full-cardiac cycle FFR, has an improved correlation with FFR measured directly with ultrasound flow-probe.
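
    A sketch of how an ECG-gated end-diastolic FFR differs from the conventional full-cycle value (the window length, gating convention, and synthetic traces below are assumptions for illustration):

    ```python
    import numpy as np

    def end_diastolic_ffr(p_distal, p_aortic, r_wave_indices, window=5):
        """ECG-gated end-diastolic FFR and conventional full-cycle FFR.

        Pressures sampled at maximal hyperemia; the samples immediately preceding
        each R-wave are taken as end-diastole and averaged over `window` samples.
        """
        pd_ed, pa_ed = [], []
        for r in r_wave_indices:
            if r >= window:
                pd_ed.append(np.mean(p_distal[r - window:r]))
                pa_ed.append(np.mean(p_aortic[r - window:r]))
        ffr_ed = np.mean(pd_ed) / np.mean(pa_ed)
        ffr_full = np.mean(p_distal) / np.mean(p_aortic)
        return ffr_ed, ffr_full

    # Synthetic sanity check: two beats at 100 Hz with R-waves at samples 100 and 200
    t = np.arange(300)
    pa = 90 + 20 * np.sin(2 * np.pi * t / 100)
    pd = 0.8 * pa
    print(end_diastolic_ffr(pd, pa, [100, 200]))   # both ratios equal 0.8 here
    ```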

  3. Repeatability and oblique flow response characteristics of current meters

    USGS Publications Warehouse

    Fulford, Janice M.; Thibodeaux, Kirk G.; Kaehrle, William R.; ,

    1993-01-01

    Laboratory investigations into the precision and accuracy of various mechanical current meters are presented. Horizontal-axis and vertical-axis meters used for the measurement of point velocities in streams and rivers were tested for repeatability and for their response to oblique flows. Both horizontal- and vertical-axis meters were found to under- and over-register oblique flows, with errors generally increasing as the velocity and angle of flow increased. In the oblique-flow tests, the magnitude of the errors was smallest for horizontal-axis meters. Repeatability of all meters tested was good, with the horizontal- and vertical-axis meters performing similarly.

  4. Variance of discharge estimates sampled using acoustic Doppler current profilers from moving boats

    USGS Publications Warehouse

    Garcia, Carlos M.; Tarrab, Leticia; Oberg, Kevin; Szupiany, Ricardo; Cantero, Mariano I.

    2012-01-01

    This paper presents a model for quantifying the random errors (i.e., variance) of acoustic Doppler current profiler (ADCP) discharge measurements from moving boats for different sampling times. The model focuses on the random processes in the sampled flow field and has been developed using statistical methods currently available for uncertainty analysis of velocity time series. Analysis of field data collected using ADCP from moving boats from three natural rivers of varying sizes and flow conditions shows that, even though the estimate of the integral time scale of the actual turbulent flow field is larger than the sampling interval, the integral time scale of the sampled flow field is on the order of the sampling interval. Thus, an equation for computing the variance error in discharge measurements associated with different sampling times, assuming uncorrelated flow fields is appropriate. The approach is used to help define optimal sampling strategies by choosing the exposure time required for ADCPs to accurately measure flow discharge.
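
    The practical content of the model is that the variance of the discharge estimate falls roughly as one over the number of effectively independent ensembles. A hedged sketch (the single-ensemble standard deviation and times below are made up):

    ```python
    import numpy as np

    def discharge_variance(sigma_q_single, sampling_interval_s, exposure_time_s):
        """Variance of a moving-boat ADCP discharge estimate for a given exposure time,
        assuming the sampled flow field is uncorrelated between successive ensembles."""
        n_ensembles = exposure_time_s / sampling_interval_s
        return sigma_q_single ** 2 / n_ensembles

    # Example: 30 m3/s single-ensemble standard deviation, 1 s ensembles
    for t in (60, 240, 720):
        print(t, np.sqrt(discharge_variance(30.0, 1.0, t)))   # standard error ~ 1/sqrt(T)
    ```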

  5. Effect of Doppler flow meter position on discharge measurement in surcharged manholes.

    PubMed

    Yang, Haoming; Zhu, David Z; Liu, Yanchen

    2018-02-01

    Determining the proper installation location of flow meters is important for accurate measurement of discharge in sewer systems. In this study, flow field and flow regimes in two types of manholes under surcharged flow were investigated using a commercial computational fluid dynamics (CFD) code. The error in measuring the flow discharge using a Doppler flow meter (based on the velocity in a Doppler beam) was then estimated. The values of the corrective coefficient were obtained for the Doppler flow meter at different locations under various conditions. Suggestions for selecting installation positions are provided.

  6. Electromagnetic Flow Meter Having a Driver Circuit Including a Current Transducer

    NASA Technical Reports Server (NTRS)

    Patel, Sandeep K. (Inventor); Karon, David M. (Inventor); Cushing, Vincent (Inventor)

    2014-01-01

    An electromagnetic flow meter (EMFM) accurately measures both the complete flow rate and the dynamically fluctuating flow rate of a fluid by applying a unipolar DC voltage to excitation coils for a predetermined period of time, measuring the electric potential at a pair of electrodes, determining a complete flow rate and independently measuring the dynamic flow rate during the "on" cycle of the DC excitation, and correcting the measurements for errors resulting from galvanic drift and other effects on the electric potential. The EMFM can also correct for effects from the excitation circuit induced during operation of the EMFM.

  7. Error compensation for thermally induced errors on a machine tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Krulewich, D.A.

    1996-11-08

    Heat flow from internal and external sources and from the environment creates machine deformations, resulting in positioning errors between the tool and the workpiece. There is no industrially accepted method for thermal error compensation. A simple model has been selected that linearly relates discrete temperature measurements to the deflection. The biggest problem is determining how many temperature sensors are required and where to locate them. This research develops a method to determine the number and location of the temperature measurements.
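
    A sketch of the kind of linear temperature-to-deflection model mentioned above, fitted by least squares (all readings and displacements are invented for illustration; the research itself concerns choosing how many sensors to use and where to place them):

    ```python
    import numpy as np

    # Calibration data: rows are readings of three temperature sensors (deg C),
    # y is the measured tool-workpiece positioning error (micrometres).
    T = np.array([[21.0, 22.5, 20.8],
                  [23.5, 26.0, 22.1],
                  [25.0, 29.5, 23.4],
                  [22.0, 24.0, 21.3]])
    y = np.array([1.0, 6.5, 11.8, 3.9])

    # Fit error = c0 + c1*T1 + c2*T2 + c3*T3 by least squares.
    A = np.column_stack([np.ones(len(T)), T])
    coeffs, *_ = np.linalg.lstsq(A, y, rcond=None)

    # Predicted thermal error for new readings, to be subtracted in software.
    t_new = np.array([1.0, 24.0, 27.0, 22.5])
    print(t_new @ coeffs)
    ```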

  8. Flight calibration tests of a nose-boom-mounted fixed hemispherical flow-direction sensor

    NASA Technical Reports Server (NTRS)

    Armistead, K. H.; Webb, L. D.

    1973-01-01

    Flight calibrations of a fixed hemispherical flow angle-of-attack and angle-of-sideslip sensor were made from Mach numbers of 0.5 to 1.8. Maneuvers were performed by an F-104 airplane at selected altitudes to compare the measurement of flow angle of attack from the fixed hemispherical sensor with that from a standard angle-of-attack vane. The hemispherical flow-direction sensor measured differential pressure at two angle-of-attack ports and two angle-of-sideslip ports in diametrically opposed positions. Stagnation pressure was measured at a center port. The results of these tests showed that the calibration curves for the hemispherical flow-direction sensor were linear for angles of attack up to 13 deg. The overall uncertainty in determining angle of attack from these curves was plus or minus 0.35 deg or less. A Mach number position error calibration curve was also obtained for the hemispherical flow-direction sensor. The hemispherical flow-direction sensor exhibited a much larger position error than a standard uncompensated pitot-static probe.

  9. An analysis of estimation of pulmonary blood flow by the single-breath method

    NASA Technical Reports Server (NTRS)

    Srinivasan, R.

    1986-01-01

    The single-breath method represents a simple noninvasive technique for the assessment of capillary blood flow across the lung. However, this method has not gained widespread acceptance, because its accuracy is still being questioned. A rigorous procedure is described for estimating pulmonary blood flow (PBF) using data obtained with the aid of the single-breath method. Attention is given to the minimization of data-processing errors in the presence of measurement errors and to questions regarding a correction for possible loss of CO2 in the lung tissue. It is pointed out that the estimations are based on the exact solution of the underlying differential equations which describe the dynamics of gas exchange in the lung. The reported study demonstrates the feasibility of obtaining highly reliable estimates of PBF from expiratory data in the presence of random measurement errors.

  10. Flow measuring structures

    NASA Astrophysics Data System (ADS)

    Boiten, W.

    1993-11-01

    The use of flow measuring structures is one of several methods for the continuous measurement of discharge in open channels. In this report a brief summary of these methods is presented to give some insight into the selection of the most appropriate method. The distinct functions of water control structures are then described, and the flow measuring structures are classified according to international rules. The fields of application are dealt with and the definitions of weir flow are given. Much attention is paid to how to select the most suitable flow measuring structure. The accuracy of the discharge evaluation is related to the different error sources. A review of international standards on flow measuring structures concludes the report.

  11. Noncontact methods for measuring water-surface elevations and velocities in rivers: Implications for depth and discharge extraction

    USGS Publications Warehouse

    Nelson, Jonathan M.; Kinzel, Paul J.; McDonald, Richard R.; Schmeeckle, Mark

    2016-01-01

    Recently developed optical and videographic methods for measuring water-surface properties in a noninvasive manner hold great promise for extracting river hydraulic and bathymetric information. This paper describes such a technique, concentrating on the method of infrared videography for measuring surface velocities and both acoustic (laboratory-based) and laser-scanning (field-based) techniques for measuring water-surface elevations. In ideal laboratory situations with simple flows, appropriate spatial and temporal averaging results in accurate water-surface elevations and water-surface velocities. In test cases, this accuracy is sufficient to allow direct inversion of the governing equations of motion to produce estimates of depth and discharge. Unlike other optical techniques for determining local depth that rely on transmissivity of the water column (bathymetric lidar, multi/hyperspectral correlation), this method uses only water-surface information, so even deep and/or turbid flows can be investigated. However, significant errors arise in areas of nonhydrostatic spatial accelerations, such as those associated with flow over bedforms or other relatively steep obstacles. Using laboratory measurements for test cases, the cause of these errors is examined and both a simple semi-empirical method and computational results are presented that can potentially reduce bathymetric inversion errors.

  12. Experimental measurement of structural power flow on an aircraft fuselage

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1989-01-01

    An experimental technique is used to measure the structural power flow through an aircraft fuselage with the excitation near the wing attachment location. Because of the large number of measurements required to analyze the whole of an aircraft fuselage, it is necessary that a balance be achieved between the number of measurement transducers, the mounting of these transducers, and the accuracy of the measurements. Using four transducers mounted on a bakelite platform, the structural intensity vectors at locations distributed throughout the fuselage are measured. To minimize the errors associated with using a four transducers technique the measurement positions are selected away from bulkheads and stiffeners. Because four separate transducers are used, with each transducer having its own drive and conditioning amplifiers, phase errors are introduced in the measurements that can be much greater than the phase differences associated with the measurements. To minimize these phase errors two sets of measurements are taken for each position with the orientation of the transducers rotated by 180 deg and an average taken between the two sets of measurements. Results are presented and discussed.

  13. Enhancement of flow measurements using fluid-dynamic constraints

    NASA Astrophysics Data System (ADS)

    Egger, H.; Seitz, T.; Tropea, C.

    2017-09-01

    Novel experimental modalities acquire spatially resolved velocity measurements for steady state and transient flows which are of interest for engineering and biological applications. One of the drawbacks of such high resolution velocity data is their susceptibility to measurement errors. In this paper, we propose a novel filtering strategy that allows enhancement of the noisy measurements to obtain reconstruction of smooth divergence free velocity and corresponding pressure fields which together approximately comply to a prescribed flow model. The main step in our approach consists of the appropriate use of the velocity measurements in the design of a linearized flow model which can be shown to be well-posed and consistent with the true velocity and pressure fields up to measurement and modeling errors. The reconstruction procedure is then formulated as an optimal control problem for this linearized flow model. The resulting filter has analyzable smoothing and approximation properties. We briefly discuss the discretization of the approach by finite element methods and comment on the efficient solution by iterative methods. The capability of the proposed filter to significantly reduce data noise is demonstrated by numerical tests including the application to experimental data. In addition, we compare with other methods like smoothing and solenoidal filtering.

  14. Air temperature sensors: dependence of radiative errors on sensor diameter in precision metrology and meteorology

    NASA Astrophysics Data System (ADS)

    de Podesta, Michael; Bell, Stephanie; Underwood, Robin

    2018-04-01

    In both meteorological and metrological applications, it is well known that air temperature sensors are susceptible to radiative errors. However, it is not widely known that the radiative error measured by an air temperature sensor in flowing air depends upon the sensor diameter, with smaller sensors reporting values closer to true air temperature. This is not a transient effect related to sensor heat capacity, but a fluid-dynamical effect arising from heat and mass flow in cylindrical geometries. This result has been known historically and is in meteorology text books. However, its significance does not appear to be widely appreciated and, as a consequence, air temperature can be—and probably is being—widely mis-estimated. In this paper, we first review prior descriptions of the ‘sensor size’ effect from the metrological and meteorological literature. We develop a heat transfer model to describe the process for cylindrical sensors, and evaluate the predicted temperature error for a range of sensor sizes and air speeds. We compare these predictions with published predictions and measurements. We report measurements demonstrating this effect in two laboratories at NPL in which the air flow and temperature are exceptionally closely controlled. The results are consistent with the heat-transfer model, and show that the air temperature error is proportional to the square root of the sensor diameter and that, even under good laboratory conditions, it can exceed 0.1 °C for a 6 mm diameter sensor. We then consider the implications of this result. In metrological applications, errors of the order of 0.1 °C are significant, representing limiting uncertainties in dimensional and mass measurements. In meteorological applications, radiative errors can easily be much larger. But in both cases, an understanding of the diameter dependence allows assessment and correction of the radiative error using a multi-sensor technique.

  15. Experiments and 3D simulations of flow structures in junctions and their influence on location of flowmeters.

    PubMed

    Mignot, E; Bonakdari, H; Knothe, P; Lipeme Kouyi, G; Bessette, A; Rivière, N; Bertrand-Krajewski, J-L

    2012-01-01

    Open-channel junctions are common occurrences in sewer networks and flow rate measurement often occurs near these singularities. Local flow structures are 3D, impact on the representativeness of the local flow measurements and thus lead to deviations in the flow rate estimation. The present study aims (i) to measure and simulate the flow pattern in a junction flow, (ii) to analyse the impact of the junction on the velocity distribution according to the distance from the junction and thus (iii) to evaluate the typical error derived from the computation of the flow rate close to the junction.

  16. Error analysis of 3D-PTV through unsteady interfaces

    NASA Astrophysics Data System (ADS)

    Akutina, Yulia; Mydlarski, Laurent; Gaskin, Susan; Eiff, Olivier

    2018-03-01

    The feasibility of stereoscopic flow measurements through an unsteady optical interface is investigated. Position errors produced by a wavy optical surface are determined analytically, as are the optimal viewing angles of the cameras to minimize such errors. Two methods of measuring the resulting velocity errors are proposed. These methods are applied to 3D particle tracking velocimetry (3D-PTV) data obtained through the free surface of a water flow within a cavity adjacent to a shallow channel. The experiments were performed using two sets of conditions, one having no strong surface perturbations, and the other exhibiting surface gravity waves. In the latter case, the amplitude of the gravity waves was 6% of the water depth, resulting in water surface inclinations of about 0.2°. (The water depth is used herein as a relevant length scale, because the measurements are performed in the entire water column. In a more general case, the relevant scale is the maximum distance from the interface to the measurement plane, H, which here is the same as the water depth.) It was found that the contribution of the waves to the overall measurement error is low. The absolute position errors of the system were moderate (1.2% of H). However, given that the velocity is calculated from the relative displacement of a particle between two frames, the errors in the measured water velocities were reasonably small, because the error in the velocity is the relative position error over the average displacement distance. The relative position error was measured to be 0.04% of H, resulting in small velocity errors of 0.3% of the free-stream velocity (equivalent to 1.1% of the average velocity in the domain). It is concluded that even though the absolute positions to which the velocity vectors are assigned is distorted by the unsteady interface, the magnitude of the velocity vectors themselves remains accurate as long as the waves are slowly varying (have low curvature). The stronger the disturbances on the interface are (high amplitude, short wave length), the smaller is the distance from the interface at which the measurements can be performed.

  17. Estimation of uncertainty bounds for individual particle image velocimetry measurements from cross-correlation peak ratio

    NASA Astrophysics Data System (ADS)

    Charonko, John J.; Vlachos, Pavlos P.

    2013-06-01

    Numerous studies have established firmly that particle image velocimetry (PIV) is a robust method for non-invasive, quantitative measurements of fluid velocity, and that when carefully conducted, typical measurements can accurately detect displacements in digital images with a resolution well below a single pixel (in some cases well below a hundredth of a pixel). However, to date, these estimates have only been able to provide guidance on the expected error for an average measurement under specific image quality and flow conditions. This paper demonstrates a new method for estimating the uncertainty bounds to within a given confidence interval for a specific, individual measurement. Here, cross-correlation peak ratio, the ratio of primary to secondary peak height, is shown to correlate strongly with the range of observed error values for a given measurement, regardless of flow condition or image quality. This relationship is significantly stronger for phase-only generalized cross-correlation PIV processing, while the standard correlation approach showed weaker performance. Using an analytical model of the relationship derived from synthetic data sets, the uncertainty bounds at a 95% confidence interval are then computed for several artificial and experimental flow fields, and the resulting errors are shown to match closely to the predicted uncertainties. While this method stops short of being able to predict the true error for a given measurement, knowledge of the uncertainty level for a PIV experiment should provide great benefits when applying the results of PIV analysis to engineering design studies and computational fluid dynamics validation efforts. Moreover, this approach is exceptionally simple to implement and requires negligible additional computational cost.
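
    A minimal sketch of the overall recipe (correlate two interrogation windows, form the primary-to-secondary peak ratio, and map it to an uncertainty bound) is given below. The mapping function and its coefficients are hypothetical placeholders, not the calibrated model from the paper.

```python
import numpy as np

# Per-vector uncertainty from the cross-correlation peak ratio (sketch).
# 'uncertainty_px' is a hypothetical monotone map, not the published model.

def correlation_plane(win_a, win_b):
    """Circular cross-correlation of two interrogation windows via FFT."""
    fa = np.fft.rfft2(win_a - win_a.mean())
    fb = np.fft.rfft2(win_b - win_b.mean())
    return np.fft.fftshift(np.fft.irfft2(fa.conj() * fb, s=win_a.shape))

def peak_ratio(corr, exclusion=3):
    """Primary-to-secondary peak ratio, excluding a neighbourhood of the primary."""
    i0, j0 = np.unravel_index(np.argmax(corr), corr.shape)
    primary = corr[i0, j0]
    masked = corr.copy()
    masked[max(0, i0 - exclusion): i0 + exclusion + 1,
           max(0, j0 - exclusion): j0 + exclusion + 1] = -np.inf
    return primary / max(masked.max(), 1e-12)

def uncertainty_px(ratio, a=0.35, b=1.2):
    """Hypothetical map from peak ratio to a 95% displacement bound in pixels."""
    return a / (ratio - 1.0) ** b if ratio > 1.0 else np.inf

rng = np.random.default_rng(0)
win_a = rng.random((32, 32))
win_b = np.roll(win_a, shift=(2, 3), axis=(0, 1)) + 0.05 * rng.random((32, 32))

q = peak_ratio(correlation_plane(win_a, win_b))
print(f"peak ratio = {q:.2f}, estimated 95% bound = {uncertainty_px(q):.3f} px")
```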

  18. A new method to calculate unsteady particle kinematics and drag coefficient in a subsonic post-shock flow

    NASA Astrophysics Data System (ADS)

    Bordoloi, Ankur D.; Ding, Liuyang; Martinez, Adam A.; Prestridge, Katherine; Adrian, Ronald J.

    2018-07-01

    We introduce a new method (piecewise integrated dynamics equation fit, PIDEF) that uses the particle dynamics equation to determine unsteady kinematics and drag coefficient (CD) for a particle in subsonic post-shock flow. The uncertainty of this method is assessed based on simulated trajectories for both quasi-steady and unsteady flow conditions. Traditional piecewise polynomial fitting (PPF) shows high sensitivity to measurement error and the function used to describe CD, creating high levels of relative error (>>1) when applied to unsteady shock-accelerated flows. The PIDEF method provides reduced uncertainty in calculations of unsteady acceleration and drag coefficient for both quasi-steady and unsteady flows. This makes PIDEF a preferable method over PPF for complex flows where the temporal response of CD is unknown. We apply PIDEF to experimental measurements of particle trajectories from 8-pulse particle tracking and determine the effect of incident Mach number on relaxation kinematics and drag coefficient of micron-sized particles.
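
    The following is a much simplified illustration of the general idea behind an integrated-dynamics fit: integrate the particle equation of motion over a trajectory segment with a trial drag coefficient and pick the value that best matches the measured positions. It is not the authors' PIDEF implementation, and all physical values are assumed.

```python
import numpy as np

# Fit a constant drag coefficient to a trajectory segment by integrating
# dv/dt = (3 rho_g Cd)/(4 rho_p d_p) |u - v|(u - v) and matching positions.
# All physical values are placeholders.

rho_g, rho_p = 1.6, 2500.0   # gas and particle density, kg/m^3 (assumed)
d_p = 5e-6                   # particle diameter, m (assumed)
u_gas = 100.0                # post-shock gas velocity, m/s (assumed constant)

def integrate_positions(cd, x0, v0, times, n_sub=50):
    """Explicit sub-stepped integration of the particle equation of motion."""
    k = 3.0 * rho_g * cd / (4.0 * rho_p * d_p)
    x, v, out = x0, v0, [x0]
    for dt in np.diff(times):
        h = dt / n_sub
        for _ in range(n_sub):
            rel = u_gas - v
            v += h * k * abs(rel) * rel
            x += h * v
        out.append(x)
    return np.array(out)

# Synthetic "measurements": a trajectory generated with a known Cd plus noise.
times = np.linspace(0.0, 2e-4, 9)            # 8-pulse-style time stamps
truth = integrate_positions(0.8, 0.0, 10.0, times)
measured = truth + np.random.default_rng(1).normal(0.0, 2e-6, truth.shape)

# Grid search for the Cd that best reproduces the measured positions.
cds = np.linspace(0.2, 2.0, 181)
sse = [np.sum((integrate_positions(cd, 0.0, 10.0, times) - measured) ** 2) for cd in cds]
print(f"fitted Cd = {cds[int(np.argmin(sse))]:.2f} (true value 0.8)")
```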

  19. SSDA code to apply data assimilation in soil water flow modeling: Documentation and user manual

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Data assimilation (DA) with the ensemble Kalman filter (EnKF) corrects modeling results based on measured s...
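
    For readers unfamiliar with the EnKF, the snippet below shows a generic stochastic EnKF analysis step applied to a toy soil-moisture profile; the dimensions, error levels and observation layout are illustrative assumptions and do not reflect the SSDA configuration.

```python
import numpy as np

# Generic stochastic EnKF analysis step: correct an ensemble of model states
# (a toy soil-moisture profile) with point observations. All sizes and error
# levels are illustrative.

rng = np.random.default_rng(0)
n_layers, n_ens = 20, 50
obs_layers = [2, 8, 15]          # layers where soil moisture is "measured"
obs_std = 0.02                   # m^3/m^3, assumed observation error

# Forecast ensemble of soil-moisture profiles (placeholder model output).
ensemble = 0.25 + 0.05 * rng.standard_normal((n_layers, n_ens))

# Synthetic observations with noise.
obs = 0.30 + obs_std * rng.standard_normal(len(obs_layers))

H = np.zeros((len(obs_layers), n_layers))    # observation operator
H[np.arange(len(obs_layers)), obs_layers] = 1.0

# Ensemble covariance and Kalman gain.
anomalies = ensemble - ensemble.mean(axis=1, keepdims=True)
P = anomalies @ anomalies.T / (n_ens - 1)
R = obs_std**2 * np.eye(len(obs_layers))
K = P @ H.T @ np.linalg.inv(H @ P @ H.T + R)

# Update each member with perturbed observations.
perturbed = obs[:, None] + obs_std * rng.standard_normal((len(obs_layers), n_ens))
analysis = ensemble + K @ (perturbed - H @ ensemble)

print("forecast mean at observed layers:", ensemble.mean(axis=1)[obs_layers].round(3))
print("analysis mean at observed layers:", analysis.mean(axis=1)[obs_layers].round(3))
```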

  20. Measurements of evaporated perfluorocarbon during partial liquid ventilation by a zeolite absorber.

    PubMed

    Proquitté, Hans; Rüdiger, Mario; Wauer, Roland R; Schmalisch, Gerd

    2004-01-01

    During partial liquid ventilation (PLV), knowledge of the quantity of exhaled perfluorocarbon (PFC) allows continuous substitution of the PFC loss to maintain a constant PFC level in the lungs. The aim of our in vitro study was to determine the PFC loss in the mixed expired gas using an absorber and to investigate the effect of the evaporated PFC on ventilatory measurements. To simulate the PFC loss during PLV, a heated flask was rinsed with a constant airflow of 4 L min(-1) and PFC was infused at different speeds (5, 10, 20 mL h(-1)). An absorber filled with PFC-selective zeolites was connected to the flask to measure the PFC in the gas. The evaporated PFC volume and the PFC concentration were determined from the weight gain of the absorber measured by an electronic scale. The PFC-dependent volume error of the CO2SMO Plus neonatal pneumotachograph was measured by manual movements of a syringe with volumes of 10 and 28 mL at a rate of 30 min(-1). Under steady-state conditions there was a strong correlation (r2 = 0.999) between the infusion speed of PFC and the calculated PFC flow rate. The PFC flow rate was slightly underestimated by 4.3% (p < 0.01). However, this bias was independent of the PFC infusion rate. The evaporated PFC volume was precisely measured with errors < 1%. The volume error of the CO2SMO Plus pneumotachograph increased with increasing PFC content for both tidal volumes (p < 0.01). However, for PFC flow rates up to 20 mL/h, the error of the measured tidal volumes was < 5%. PFC-selective zeolites can be used to accurately quantify the evaporated PFC volume during PLV. With increasing PFC concentrations in the exhaled air, the measurement errors of ventilatory parameters have to be taken into account.

  1. Estimates of streamflow characteristics for selected small streams, Baker River basin, Washington

    USGS Publications Warehouse

    Williams, John R.

    1987-01-01

    Regression equations were used to estimate streamflow characteristics at eight ungaged sites on small streams in the Baker River basin in the North Cascade Mountains, Washington, that could be suitable for run-of-the-river hydropower development. The regression equations were obtained by relating known streamflow characteristics at 25 gaging stations in nearby basins to several physical and climatic variables that could be easily measured in gaged or ungaged basins. The known streamflow characteristics were mean annual flows, 1-, 3-, and 7-day low flows and high flows, mean monthly flows, and flow duration. Drainage area and mean annual precipitation were not the most significant variables in all the regression equations. Variance in the low flows and the summer mean monthly flows was reduced by including an index of glacierized area within the basin as a third variable. Standard errors of estimate of the regression equations ranged from 25 to 88%, and the largest errors were associated with the low flow characteristics. Discharge measurements made at the eight sites near midmonth each month during 1981 were used to estimate monthly mean flows at the sites for that period. These measurements also were correlated with concurrent daily mean flows from eight operating gaging stations. The correlations provided estimates of mean monthly flows that compared reasonably well with those estimated by the regression analyses. (Author's abstract)
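
    A regional-regression fit of the kind described above might look like the following sketch, in which a low-flow statistic is regressed on drainage area, mean annual precipitation and a glacier-cover index using a log-linear model; the synthetic basin data are placeholders, not values from the Baker River study.

```python
import numpy as np

# Regional regression sketch: log(Q7) regressed on log(area), log(precip) and
# a glacier-cover index for a set of synthetic "gaged" basins.

rng = np.random.default_rng(3)
n = 25                                   # number of gaged basins
area = rng.uniform(5, 200, n)            # km^2
precip = rng.uniform(1500, 3500, n)      # mm/yr
glacier = rng.uniform(0.0, 0.15, n)      # fraction of basin glacierized

# Synthetic 7-day low flow following an assumed power-law relation plus noise.
q7 = 1e-4 * area**0.95 * precip**0.8 * np.exp(4.0 * glacier) \
     * np.exp(0.2 * rng.standard_normal(n))

X = np.column_stack([np.ones(n), np.log(area), np.log(precip), glacier])
coef, *_ = np.linalg.lstsq(X, np.log(q7), rcond=None)

resid = np.log(q7) - X @ coef
se_pct = 100.0 * (np.exp(resid.std(ddof=X.shape[1])) - 1.0)
print("coefficients:", coef.round(3))
print(f"approximate standard error of estimate: {se_pct:.0f}%")
```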

  2. Export of nutrients and major ionic solutes from a rain forest catchment in the Central Amazon Basin

    NASA Astrophysics Data System (ADS)

    Lesack, Lance F. W.

    1993-03-01

    The relative roles of base flow runoff versus storm flow runoff versus subsurface outflow in controlling total export of solutes from a 23.4-ha catchment of undisturbed rain forest in the central Amazon Basin were evaluated from water and solute flux measurements performed over a 1 year period. Solutes exported via 173 storms during the study were estimated from stream water samples collected during base flow conditions and during eight storms, and by utilizing a hydrograph separation technique in combination with a mixing model to partition storm flow from base flow fluxes. Solutes exported by subsurface outflow were estimated from groundwater samples from three nests of piezometers installed into the streambed, and concurrent measurements of hydraulic conductivity and hydraulic head gradients. Base flow discharge represented 92% of water outflow from the basin and was the dominant pathway of solute export. Although storm flow discharge represented only 5% of total water outflow, storm flow solute fluxes represented up to 25% of the total annual export flux, though for many solutes the portion was less. Subsurface outflow represented only 2.5% of total water outflow, and subsurface solute fluxes never represented more than 5% of the total annual export flux. Measurement errors were relatively high for storm flow and subsurface outflow fluxes, but cumulative measurement errors associated with the total solute fluxes exported from the catchment, in most cases, ranged from only ±7% to 14% because base flow fluxes were measured relatively well. The export fluxes of most solutes are substantially less than previously reported for comparable small catchments in the Amazon basin, and these differences cannot be reconciled by the fact that storm flow and subsurface outflows were not appropriately measured in previous studies.
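
    A minimal two-component mixing-model separation of the kind used here to partition storm flow from base flow is sketched below with invented tracer concentrations and discharges; none of the numbers come from the Amazon catchment data.

```python
# Two-component hydrograph separation by a conservative-tracer mass balance:
#   q_stream * c_stream = q_base * c_base + q_storm * c_storm
# All values below are illustrative placeholders.

c_base = 120.0    # tracer concentration of base flow (e.g. uS/cm), assumed
c_storm = 15.0    # tracer concentration of storm (event) water, assumed
c_stream = 95.0   # concentration measured in the stream during the storm
q_stream = 40.0   # total stream discharge at that moment (L/s), assumed

f_storm = (c_base - c_stream) / (c_base - c_storm)   # storm-flow fraction
q_storm = f_storm * q_stream
q_base = q_stream - q_storm

print(f"storm-flow fraction = {f_storm:.2f}")
print(f"storm flow = {q_storm:.1f} L/s, base flow = {q_base:.1f} L/s")
```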

  3. Carbon dioxide emission tallies for 210 U.S. coal-fired power plants: a comparison of two accounting methods.

    PubMed

    Quick, Jeffrey C

    2014-01-01

    Annual CO2 emission tallies for 210 coal-fired power plants during 2009 were more accurately calculated from fuel consumption records reported by the U.S. Energy Information Administration (EIA) than from measurements by Continuous Emissions Monitoring Systems (CEMS) reported by the U.S. Environmental Protection Agency. Results from these accounting methods for individual plants vary by +/- 10.8%. Although the differences systematically vary with the method used to certify flue-gas flow instruments in CEMS, additional sources of CEMS measurement error remain to be identified. Limitations of the EIA fuel consumption data are also discussed. Consideration of weighing, sample collection, laboratory analysis, emission factor, and stock adjustment errors showed that the minimum error for CO2 emissions calculated from the fuel consumption data ranged from +/- 1.3% to +/- 7.2% with a plant average of +/- 1.6%. This error might be reduced by 50% if the carbon content of coal delivered to U.S. power plants were reported. Potentially, this study might inform efforts to regulate CO2 emissions (such as CO2 performance standards or taxes) and, more immediately, the U.S. Greenhouse Gas Reporting Rule, where large coal-fired power plants currently use CEMS to measure CO2 emissions. Moreover, if, as suggested here, the flue-gas flow measurement limits the accuracy of CO2 emission tallies from CEMS, then the accuracy of other emission tallies from CEMS (such as SO2, NOx, and Hg) would be similarly affected. Consequently, improved flue gas flow measurements are needed to increase the reliability of emission measurements from CEMS.

  4. Velocity encoding with the slice select refocusing gradient for faster imaging and reduced chemical shift-induced phase errors.

    PubMed

    Middione, Matthew J; Thompson, Richard B; Ennis, Daniel B

    2014-06-01

    To investigate a novel phase-contrast MRI velocity-encoding technique for faster imaging and reduced chemical shift-induced phase errors. Velocity encoding with the slice select refocusing gradient achieves the target gradient moment by time shifting the refocusing gradient, which enables the use of the minimum in-phase echo time (TE) for faster imaging and reduced chemical shift-induced phase errors. Net forward flow was compared in 10 healthy subjects (N = 10) within the ascending aorta (aAo), main pulmonary artery (PA), and right/left pulmonary arteries (RPA/LPA) using conventional flow compensated and flow encoded (401 Hz/px and TE = 3.08 ms) and slice select refocused gradient velocity encoding (814 Hz/px and TE = 2.46 ms) at 3 T. Improved net forward flow agreement was measured across all vessels for slice select refocused gradient compared to flow compensated and flow encoded: aAo vs. PA (1.7% ± 1.9% vs. 5.8% ± 2.8%, P = 0.002), aAo vs. RPA + LPA (2.1% ± 1.7% vs. 6.0% ± 4.3%, P = 0.03), and PA vs. RPA + LPA (2.9% ± 2.1% vs. 6.1% ± 6.3%, P = 0.04), while increasing temporal resolution (35%) and signal-to-noise ratio (33%). Slice select refocused gradient phase-contrast MRI with a high receiver bandwidth and minimum in-phase TE provides more accurate and less variable flow measurements through the reduction of chemical shift-induced phase errors and a reduced TE/repetition time, which can be used to increase the temporal/spatial resolution and/or reduce breath hold durations. Copyright © 2013 Wiley Periodicals, Inc.

  5. Investigating Systematic Errors of the Interstellar Flow Longitude Derived from the Pickup Ion Cutoff

    NASA Astrophysics Data System (ADS)

    Taut, A.; Berger, L.; Drews, C.; Bower, J.; Keilbach, D.; Lee, M. A.; Moebius, E.; Wimmer-Schweingruber, R. F.

    2017-12-01

    Complementary to the direct neutral particle measurements performed by, e.g., IBEX, the measurement of PickUp Ions (PUIs) constitutes a diagnostic tool to investigate the local interstellar medium. PUIs are former neutral particles that have been ionized in the inner heliosphere. Subsequently, they are picked up by the solar wind and its frozen-in magnetic field. Due to this process, a characteristic Velocity Distribution Function (VDF) with a sharp cutoff evolves, which carries information about the PUI's injection speed and thus the former neutral particle velocity. The symmetry of the injection speed about the interstellar flow vector is used to derive the interstellar flow longitude from PUI measurements. Using He PUI data obtained by the PLASTIC sensor on STEREO A, we investigate how this concept may be affected by systematic errors. The PUI VDF strongly depends on the orientation of the local interplanetary magnetic field. Recently injected PUIs with speeds just below the cutoff speed typically form a highly anisotropic torus distribution in velocity space, which leads to longitudinal transport for certain magnetic field orientations. Therefore, we investigate how the selection of magnetic field configurations in the data affects the result for the interstellar flow longitude that we derive from the PUI cutoff. Indeed, we find that the results follow a systematic trend with the filtered magnetic field angles that can lead to a shift of the result of up to 5°. In turn, this means that every value for the interstellar flow longitude derived from the PUI cutoff is affected by a systematic error depending on the utilized magnetic field orientations. Here, we present our observations, discuss possible reasons for the systematic trend we discovered, and indicate selections that may minimize the systematic errors.

  6. Reducing hydrologic model uncertainty in monthly streamflow predictions using multimodel combination

    NASA Astrophysics Data System (ADS)

    Li, Weihua; Sankarasubramanian, A.

    2012-12-01

    Model errors are inevitable in any prediction exercise. One approach currently gaining attention for reducing model errors is to combine multiple models to develop improved predictions. The rationale behind this approach lies in the premise that optimal weights can be derived for each model so that the resulting multimodel predictions are improved. A new dynamic approach (MM-1) to combining multiple hydrological models by evaluating their performance/skill contingent on the predictor state is proposed. We combine two hydrological models, the "abcd" model and the variable infiltration capacity (VIC) model, to develop multimodel streamflow predictions. To quantify precisely under what conditions the multimodel combination results in improved predictions, we compare the multimodel scheme MM-1 with the optimal model combination scheme (MM-O) by employing them to predict streamflow generated from a known hydrologic model (abcd model or VIC model) with heteroscedastic error variance, as well as from a hydrologic model that exhibits a different structure from that of the candidate models (i.e., "abcd" model or VIC model). Results from the study show that streamflow estimated from single models performed better than multimodels when there was almost no measurement error. However, under increased measurement errors and model structural misspecification, both multimodel schemes (MM-1 and MM-O) consistently performed better than the single-model predictions. Overall, MM-1 performs better than MM-O in predicting the monthly flow values as well as in predicting extreme monthly flows. Comparison of the weights obtained for each candidate model reveals that, as measurement errors increase, MM-1 assigns weights equally across the models, whereas MM-O always assigns higher weights to the candidate model that performs best during the calibration period. Applying the multimodel algorithms to predict streamflows at four different sites revealed that MM-1 performs better than all single models and the optimal model combination scheme, MM-O, in predicting the monthly flows as well as the flows during wetter months.
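
    The sketch below illustrates an optimal two-model combination in its simplest form: the weight on one model (the other receives one minus that weight) is chosen to minimize squared error against observed flows over a calibration period. The synthetic series merely stand in for the "abcd" and VIC simulations.

```python
import numpy as np

# Pick the convex combination of two model predictions that minimizes RMSE
# against observations over a calibration period (synthetic placeholder data).

rng = np.random.default_rng(7)
months = 120
observed = 50 + 20 * np.sin(np.linspace(0, 12 * np.pi, months)) \
           + 5 * rng.standard_normal(months)
model1 = observed + 8 * rng.standard_normal(months)              # stand-in for "abcd"
model2 = 0.9 * observed + 10 + 6 * rng.standard_normal(months)   # stand-in for VIC

weights = np.linspace(0.0, 1.0, 101)
rmse = [np.sqrt(np.mean((w * model1 + (1 - w) * model2 - observed) ** 2))
        for w in weights]
w_best = weights[int(np.argmin(rmse))]

print(f"best weight on model 1: {w_best:.2f}")
print(f"single-model RMSEs: {np.sqrt(np.mean((model1 - observed) ** 2)):.2f}, "
      f"{np.sqrt(np.mean((model2 - observed) ** 2)):.2f}; combined: {min(rmse):.2f}")
```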

  7. Error in Dasibi flight measurements of atmospheric ozone due to instrument wall-loss

    NASA Technical Reports Server (NTRS)

    Ainsworth, J. E.; Hagemeyer, J. R.; Reed, E. I.

    1981-01-01

    Theory suggests that in laminar flow the percent loss of a trace constituent to the walls of a measuring instrument varies as P^(-2/3), where P is the total gas pressure. Preliminary laboratory ozone wall-loss measurements confirm this P^(-2/3) dependence. Accurate assessment of wall-loss is thus of particular importance for those balloon-borne instruments utilizing laminar flow at ambient pressure, since the ambient pressure decreases by a factor of 350 during ascent to 40 km. Measurements and extrapolations made for a Dasibi ozone monitor modified for balloon flight indicate that the wall-loss error at 40 km was between 6 and 30 percent and that the wall-loss error in the derived total ozone column-content for the region from the surface to 40 km altitude was between 2 and 10 percent. At 1000 mb, turbulence caused an order of magnitude increase in the Dasibi wall-loss.

  8. Diffuse-flow conceptualization and simulation of the Edwards aquifer, San Antonio region, Texas

    USGS Publications Warehouse

    Lindgren, R.J.

    2006-01-01

    A numerical ground-water-flow model (hereinafter, the conduit-flow Edwards aquifer model) of the karstic Edwards aquifer in south-central Texas was developed for a previous study on the basis of a conceptualization emphasizing conduit development and conduit flow, and included simulating conduits as one-cell-wide, continuously connected features. Uncertainties regarding the degree to which conduits pervade the Edwards aquifer and influence ground-water flow, as well as other uncertainties inherent in simulating conduits, raised the question of whether a model based on the conduit-flow conceptualization was the optimum model for the Edwards aquifer. Accordingly, a model with an alternative hydraulic conductivity distribution without conduits was developed in a study conducted during 2004-05 by the U.S. Geological Survey, in cooperation with the San Antonio Water System. The hydraulic conductivity distribution for the modified Edwards aquifer model (hereinafter, the diffuse-flow Edwards aquifer model), based primarily on a conceptualization in which flow in the aquifer predominantly is through a network of numerous small fractures and openings, includes 38 zones, with hydraulic conductivities ranging from 3 to 50,000 feet per day. Revision of model input data for the diffuse-flow Edwards aquifer model was limited to changes in the simulated hydraulic conductivity distribution. The root-mean-square error for 144 target wells for the calibrated steady-state simulation for the diffuse-flow Edwards aquifer model is 20.9 feet. This error represents about 3 percent of the total head difference across the model area. The simulated springflows for Comal and San Marcos Springs for the calibrated steady-state simulation were within 2.4 and 15 percent of the median springflows for the two springs, respectively. The transient calibration period for the diffuse-flow Edwards aquifer model was 1947-2000, with 648 monthly stress periods, the same as for the conduit-flow Edwards aquifer model. The root-mean-square error for a period of drought (May-November 1956) for the calibrated transient simulation for 171 target wells is 33.4 feet, which represents about 5 percent of the total head difference across the model area. The root-mean-square error for a period of above-normal rainfall (November 1974-July 1975) for the calibrated transient simulation for 169 target wells is 25.8 feet, which represents about 4 percent of the total head difference across the model area. The root-mean-square error ranged from 6.3 to 30.4 feet in 12 target wells with long-term water-level measurements for varying periods during 1947-2000 for the calibrated transient simulation for the diffuse-flow Edwards aquifer model, and these errors represent 5.0 to 31.3 percent of the range in water-level fluctuations of each of those wells. The root-mean-square errors for the five major springs in the San Antonio segment of the aquifer for the calibrated transient simulation, as a percentage of the range of discharge fluctuations measured at the springs, varied from 7.2 percent for San Marcos Springs and 8.1 percent for Comal Springs to 28.8 percent for Leona Springs. The root-mean-square errors for hydraulic heads for the conduit-flow Edwards aquifer model are 27, 76, and 30 percent greater than those for the diffuse-flow Edwards aquifer model for the steady-state, drought, and above-normal rainfall synoptic time periods, respectively. 
The goodness-of-fit between measured and simulated springflows is similar for Comal, San Marcos, and Leona Springs for the diffuse-flow Edwards aquifer model and the conduit-flow Edwards aquifer model. The root-mean-square errors for Comal and Leona Springs were 15.6 and 21.3 percent less, respectively, whereas the root-mean-square error for San Marcos Springs was 3.3 percent greater for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. The root-mean-square errors for San Antonio and San Pedro Springs were appreciably greater, 80.2 and 51.0 percent, respectively, for the diffuse-flow Edwards aquifer model. The simulated water budgets for the diffuse-flow Edwards aquifer model are similar to those for the conduit-flow Edwards aquifer model. Differences in percentage of total sources or discharges for a budget component are 2.0 percent or less for all budget components for the steady-state and transient simulations. The largest difference in terms of the magnitude of water budget components for the transient simulation for 1956 was a decrease of about 10,730 acre-feet per year (about 2 percent) in springflow for the diffuse-flow Edwards aquifer model compared to the conduit-flow Edwards aquifer model. This decrease in springflow (a water budget discharge) was largely offset by the decreased net loss of water from storage (a water budget source) of about 10,500 acre-feet per year.

  9. Assessment of volume and leak measurements during CPAP using a neonatal lung model.

    PubMed

    Fischer, H S; Roehr, C C; Proquitté, H; Wauer, R R; Schmalisch, G

    2008-01-01

    Although several commercial devices are available which allow tidal volume and air leak monitoring during continuous positive airway pressure (CPAP) in neonates, little is known about their measurement accuracy and about the influence of air leaks on volume measurement. The aim of this in vitro study was the validation of volume and leak measurement under CPAP using a commercial ventilatory device, taking into consideration the clinical conditions in neonatology. The measurement accuracy of the Leoni ventilator (Heinen & Löwenstein, Germany) was investigated both in a leak-free system and with leaks simulated using calibration syringes (2-10 ml, 20-100 ml) and a mechanical lung model. Open tubes of variable lengths were connected for leak simulation. Leak flow was measured with the flow-through technique. In a leak-free system, the mean relative volume error +/- SD was 3.5 +/- 2.6% (2-10 ml) and 5.9 +/- 0.7% (20-60 ml), respectively. The influence of CPAP level, driving flow, respiratory rate and humidification of the breathing gas on the volume error was negligible. However, an increasing FiO2 caused the measured tidal volume to increase by up to 25% (FiO2 = 1.0). The relative error +/- SD of the leak measurements was -0.2 +/- 11.9%. For leaks > 19%, measured tidal volume was underestimated by more than 10%. In conclusion, the present in vitro study showed that the Leoni allowed accurate volume monitoring under CPAP conditions similar to those in neonates. Air leaks of up to 90% of patient flow were reliably detected. For an FiO2 > 0.4 and for leaks > 19%, a numerical correction of the displayed volume should be performed.
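
    As a simple illustration of how a leak can be expressed relative to the measured volumes and how large leaks flag the need for volume correction (the 19% threshold is the one quoted above), consider the sketch below with invented tidal volumes; this is not the correction algorithm used by the Leoni.

```python
# Express the leak as a percentage of the inspired volume and flag the case in
# which a numerical correction of the displayed volume is advisable. The 19%
# threshold comes from the abstract; the tidal volumes are invented.

v_insp = 6.0   # mL, inspired tidal volume measured at the sensor (assumed)
v_exp = 4.5    # mL, expired tidal volume measured at the sensor (assumed)

leak_pct = 100.0 * (v_insp - v_exp) / v_insp
print(f"leak = {leak_pct:.0f}% of inspired volume")
if leak_pct > 19.0:
    print("leak above 19%: displayed tidal volume is likely underestimated by "
          "more than 10%, apply a numerical correction")
```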

  10. Quantifying measurement uncertainties in ADCP measurements in non-steady, inhomogeneous flow

    NASA Astrophysics Data System (ADS)

    Schäfer, Stefan

    2017-04-01

    The author presents a laboratory study of fixed-platform four-beam ADCP and three-beam ADV measurements in the tailrace of a micro hydropower setup with a 35 kW Kaplan turbine and 2.5 m head. The datasets discussed quantify measurement uncertainties of the ADCP measurement technique arising from non-steady, inhomogeneous flow. For a constant discharge of 1.5 m3/s, two different flow scenarios were investigated: one being the regular tailrace flow downstream of the draft tube and the second being a straightened, less inhomogeneous flow generated by a flow-straightening device: a rack of 40 mm diameter pipe sections mounted directly behind the draft tube. ADCP measurements (sampling rate 1.35 Hz) were conducted at three distances behind the draft tube and compared bin-wise to measurements of three simultaneously measuring ADV probes (sampling rate 64 Hz). The ADV probes were aligned horizontally and the ADV bins were placed in the centers of two facing ADCP bins and in the vertical under the ADCP probe at the corresponding depth. Rotating the ADV probes by 90° allowed measurements of the other two facing ADCP bins. Because of mutual probe interaction, ADCP and ADV measurements were not conducted at the same time. The datasets were evaluated using mean and fluctuation velocities. Turbulence parameters were calculated and compared as far as applicable. Uncertainties coming from non-steady flow were estimated with the normalized mean square error and evaluated by comparing long-term measurements of 60 minutes to shorter measurement intervals. Uncertainties coming from inhomogeneous flow were evaluated by comparing ADCP with ADV data along the ADCP beams, where ADCP data were effectively measured, and in the vertical under the ADCP probe, where the ADCP velocities are displayed. Errors coming from non-steady flow could be compensated through sufficiently long measurement intervals with high enough sampling rates, depending on the turbulence scales of the flow. In the case of heterogeneous distributions of vertical velocity components in the ADCP beams, the resulting errors significantly biased the mean velocities and could not be recognized from the ADCP measurements alone. For the straightened flow scenario, the results showed good agreement of ADCP and ADV data for mean velocities, whereas the ADCP data consistently overestimated turbulence intensities by a factor of 2. Reynolds stresses were in good agreement, as were turbulent kinetic energies, apart from one measurement with outliers of up to 30%. For the tailrace flow scenario, the mean velocities from the ADCP data underestimated the ADV data by 23%. Turbulence intensities from the ADCP data were fluctuant, overestimated the ADV data by factors of up to 2.8 and showed spatial discrepancies over the depth. Reynolds stresses were only partly in good agreement, and turbulent kinetic energies were over- and underestimated in a range of [-50, +30]%.

  11. A technique for measuring hypersonic flow velocity profiles

    NASA Technical Reports Server (NTRS)

    Gartrell, L. R.

    1973-01-01

    A technique for measuring hypersonic flow velocity profiles is described. This technique utilizes an arc-discharge-electron-beam system to produce a luminous disturbance in the flow. The time of flight of this disturbance was measured. Experimental tests were conducted in the Langley pilot model expansion tube. The measured velocities were of the order of 6000 m/sec over a free-stream density range from 0.000196 to 0.00186 kg/cu m. The fractional error in the velocity measurements was less than 5 percent. Long arc discharge columns (0.356 m) were generated under hypersonic flow conditions in the expansion-tube modified to operate as an expansion tunnel.

  12. The Effects of Sampling Probe Design and Sampling Techniques on Aerosol Measurements

    DTIC Science & Technology

    1975-05-01

    [Only indexed fragments of this report are available: figure titles (Schematic of Extraction and Sampling System; Filter Housing; Theoretical Isokinetic Flow Requirements of the EPA Sampling ...) and text noting that flow parameters were derived under a zero-error assumption at isokinetic (equal-velocity) sampling conditions, and that the flow field adjacent to the probe inlets was measured prior to testing the probes to determine the isokinetic condition of the ...]

  13. Retrospective cost adaptive Reynolds-averaged Navier-Stokes k-ω model for data-driven unsteady turbulent simulations

    NASA Astrophysics Data System (ADS)

    Li, Zhiyong; Hoagg, Jesse B.; Martin, Alexandre; Bailey, Sean C. C.

    2018-03-01

    This paper presents a data-driven computational model for simulating unsteady turbulent flows, where sparse measurement data is available. The model uses the retrospective cost adaptation (RCA) algorithm to automatically adjust the closure coefficients of the Reynolds-averaged Navier-Stokes (RANS) k-ω turbulence equations to improve agreement between the simulated flow and the measurements. The RCA-RANS k-ω model is verified for steady flow using a pipe-flow test case and for unsteady flow using a surface-mounted-cube test case. Measurements used for adaptation of the verification cases are obtained from baseline simulations with known closure coefficients. These verification test cases demonstrate that the RCA-RANS k-ω model can successfully adapt the closure coefficients to improve agreement between the simulated flow field and a set of sparse flow-field measurements. Furthermore, the RCA-RANS k-ω model improves agreement between the simulated flow and the baseline flow at locations at which measurements do not exist. The RCA-RANS k-ω model is also validated with experimental data from 2 test cases: steady pipe flow, and unsteady flow past a square cylinder. In both test cases, the adaptation improves agreement with experimental data in comparison to the results from a non-adaptive RANS k-ω model that uses the standard values of the k-ω closure coefficients. For the steady pipe flow, adaptation is driven by mean stream-wise velocity measurements at 24 locations along the pipe radius. The RCA-RANS k-ω model reduces the average velocity error at these locations by over 35%. For the unsteady flow over a square cylinder, adaptation is driven by time-varying surface pressure measurements at 2 locations on the square cylinder. The RCA-RANS k-ω model reduces the average surface-pressure error at these locations by 88.8%.

  14. A study of the local pressure field in turbulent shear flow and its relation to aerodynamic noise generation

    NASA Technical Reports Server (NTRS)

    Jones, B. G.; Planchon, H. P., Jr.

    1973-01-01

    Work during the period of this report has been in three areas: (1) pressure transducer error analysis, (2) fluctuating velocity and pressure measurements in the NASA Lewis 6-inch diameter quiet jet facility, and (3) measurement analysis. A theory was developed and experimentally verified to quantify the pressure transducer velocity interference error. The theory and supporting experimental evidence show that the errors are a function of the velocity field's turbulent structure. It is shown that near the mixing layer center the errors are negligible. Turbulent velocity and pressure measurements were made in the NASA Lewis quiet jet facility. Some preliminary results are included.

  15. Beam localization in HIFU temperature measurements using thermocouples, with application to cooling by large blood vessels.

    PubMed

    Dasgupta, Subhashish; Banerjee, Rupak K; Hariharan, Prasanna; Myers, Matthew R

    2011-02-01

    Experimental studies of thermal effects in high-intensity focused ultrasound (HIFU) procedures are often performed with the aid of fine wire thermocouples positioned within tissue phantoms. Thermocouple measurements are subject to several types of error which must be accounted for before reliable inferences can be made on the basis of the measurements. Thermocouple artifact due to viscous heating is one source of error. A second is the uncertainty regarding the position of the beam relative to the target location or the thermocouple junction, due to the error in positioning the beam at the junction. This paper presents a method for determining the location of the beam relative to a fixed pair of thermocouples. The localization technique reduces the uncertainty introduced by positioning errors associated with very narrow HIFU beams. The technique is presented in the context of an investigation into the effect of blood flow through large vessels on the efficacy of HIFU procedures targeted near the vessel. Application of the beam localization method allowed conclusions regarding the effects of blood flow to be drawn from previously inconclusive (because of localization uncertainties) data. Comparison of the position-adjusted transient temperature profiles for flow rates of 0 and 400 ml/min showed that blood flow can reduce temperature elevations by more than 10%, when the HIFU focus is within a 2 mm distance from the vessel wall. At acoustic power levels of 17.3 and 24.8 W there is a 20- to 70-fold decrease in thermal dose due to the convective cooling effect of blood flow, implying a shrinkage in lesion size. The beam-localization technique also revealed the level of thermocouple artifact as a function of sonication time, providing investigators with an indication of the quality of thermocouple data for a given exposure time. The maximum artifact was found to be double the measured temperature rise during the initial few seconds of sonication. Copyright © 2010 Elsevier B.V. All rights reserved.

  16. The Accuracy and Precision of Flow Measurements Using Phase Contrast Techniques

    NASA Astrophysics Data System (ADS)

    Tang, Chao

    Quantitative volume flow rate measurements using the magnetic resonance imaging technique are studied in this dissertation because volume flow rates are of particular interest for assessing the blood supply of the human body. The method of quantitative volume flow rate measurement is based on the phase contrast technique, which assumes a linear relationship between the phase and flow velocity of spins. By measuring the phase shift of nuclear spins and integrating velocity across the lumen of the vessel, we can determine the volume flow rate. The accuracy and precision of volume flow rate measurements obtained using the phase contrast technique are studied by computer simulations and experiments. The various factors studied include (1) the partial volume effect due to voxel dimensions and slice thickness relative to the vessel dimensions; (2) vessel angulation relative to the imaging plane; (3) intravoxel phase dispersion; (4) flow velocity relative to the magnitude of the flow encoding gradient. The partial volume effect is demonstrated to be the major obstacle to obtaining accurate flow measurements for both laminar and plug flow. Laminar flow can be measured more accurately than plug flow under the same conditions. Both the experimental and simulation results for laminar flow show that, to keep the error of volume flow rate measurements within 10%, at least 16 voxels are needed to cover the vessel lumen. The accuracy of flow measurements depends strongly on the relative intensity of signal from stationary tissues. A correction method, based on a small phase shift approximation, is proposed to compensate for the partial volume effect. After the correction, the errors due to the partial volume effect are compensated, allowing more accurate results to be obtained. An automatic program based on the correction method is developed and implemented on a Sun workstation. The correction method is applied to the simulation and experimental results, which show that the correction significantly reduces the errors due to the partial volume effect. We also apply the correction method to the data of in vivo studies. Because the true blood flow is not known, the corrected results are tested against common knowledge (such as cardiac output) and conservation of flow. For example, the volume of blood flowing to the brain should be equal to the volume of blood flowing from the brain. Our measurement results are very convincing.
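
    The basic phase-contrast computation described above (convert the phase image to velocity using the venc, then integrate velocity over the pixels inside the vessel lumen) can be sketched as follows; the image size, venc and synthetic laminar profile are illustrative assumptions.

```python
import numpy as np

# Phase-contrast flow sketch: phase -> velocity -> volume flow rate over the
# lumen mask, using a synthetic laminar (parabolic) profile as the "truth".

venc = 100.0            # cm/s, velocity encoding value (assumed)
pixel_size = 0.1        # cm, in-plane pixel dimension (assumed)
n = 64
y, x = np.mgrid[0:n, 0:n]
r = np.hypot(x - n / 2, y - n / 2) * pixel_size
R = 1.0                 # cm, vessel radius (assumed)

v_true = np.where(r < R, 60.0 * (1 - (r / R) ** 2), 0.0)   # cm/s, peak 60
phase = v_true / venc * np.pi                               # mapped to [-pi, pi]

v_meas = phase / np.pi * venc                               # back to velocity
lumen = r < R
flow = np.sum(v_meas[lumen]) * pixel_size**2                # cm^3/s = mL/s

print(f"integrated flow rate: {flow:.1f} mL/s")
print(f"analytic laminar value 0.5*v_peak*pi*R^2 = {0.5 * 60.0 * np.pi * R**2:.1f} mL/s")
```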

  17. Predictive modelling of flow in a two-dimensional intermediate-scale, heterogeneous porous media

    USGS Publications Warehouse

    Barth, Gilbert R.; Hill, M.C.; Illangasekare, T.H.; Rajaram, H.

    2000-01-01

    To better understand the role of sedimentary structures in flow through porous media, and to determine how small-scale laboratory-measured values of hydraulic conductivity relate to in situ values this work deterministically examines flow through simple, artificial structures constructed for a series of intermediate-scale (10 m long), two-dimensional, heterogeneous, laboratory experiments. Nonlinear regression was used to determine optimal values of in situ hydraulic conductivity, which were compared to laboratory-measured values. Despite explicit numerical representation of the heterogeneity, the optimized values were generally greater than the laboratory-measured values. Discrepancies between measured and optimal values varied depending on the sand sieve size, but their contribution to error in the predicted flow was fairly consistent for all sands. Results indicate that, even under these controlled circumstances, laboratory-measured values of hydraulic conductivity need to be applied to models cautiously.

  18. Quantification of the transient mass flow rate in a simplex swirl injector

    NASA Astrophysics Data System (ADS)

    Khil, Taeock; Kim, Sunghyuk; Cho, Seongho; Yoon, Youngbin

    2009-07-01

    When heat release and acoustic pressure fluctuations are generated in a combustor by irregular, localized combustion, these fluctuations affect the mass flow rate of the propellants injected through the injectors. In addition, variations of the mass flow rate caused by these fluctuations bring about irregular combustion, which is associated with combustion instability, so it is important to identify the mass flow variation from the pressure fluctuation at the injector and to investigate its transfer function. Therefore, quantification of the variation of the mass flow rate generated in a simplex swirl injector via the injection pressure fluctuation was the subject of an initial study. To acquire the transient mass flow rate in the orifice over time, the axial flow velocity and the liquid film thickness in the orifice were measured. The axial velocity was obtained through a theoretical approach after measuring the pressure in the orifice. To determine the flow area in the orifice, the liquid film thickness was measured by an electric conductance method. The mass flow rate calculated from the axial velocity and the liquid film thickness measured in the orifice was in good agreement with the mass flow rate acquired by the direct measuring method, with errors within 1% in the steady state and within 4% for the average mass flow rate in a pulsated state. Also, the amplitude (gain) of the mass flow rate acquired by the proposed method was confirmed using the PLLIF technique in the low pressure-fluctuation frequency range, with an error under 6%. This study shows that the proposed method can be used to measure the mass flow rate not only in the steady state but also in the unsteady (pulsated) state. Moreover, the experimental results show that the method has very high accuracy.
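
    The quantity reconstructed above (the mass flow rate through the orifice from the axial velocity and the liquid film thickness, with the liquid assumed to occupy an annulus against the orifice wall around a gas core) can be illustrated as follows; all numbers are placeholders.

```python
import numpy as np

# Mass flow rate through the liquid annulus of a swirl-injector orifice,
# m_dot = rho * u_axial * pi * (R^2 - (R - t)^2), with a small imposed pulsation.
# All values are illustrative.

rho = 1000.0          # kg/m^3, water
r_orifice = 1.5e-3    # m, orifice radius (assumed)

def mass_flow(u_axial, film_thickness):
    """Mass flow rate (kg/s) through the liquid annulus of thickness t."""
    area = np.pi * (r_orifice**2 - (r_orifice - film_thickness)**2)
    return rho * u_axial * area

t = np.linspace(0.0, 0.01, 200)                       # s
u = 12.0 + 1.0 * np.sin(2 * np.pi * 200 * t)          # m/s, 200 Hz pulsation (assumed)
h = 0.4e-3 + 0.03e-3 * np.sin(2 * np.pi * 200 * t)    # m, film thickness (assumed)

mdot = mass_flow(u, h)
print(f"mean mass flow rate: {mdot.mean() * 1000:.1f} g/s, "
      f"pulsation amplitude: {0.5 * (mdot.max() - mdot.min()) * 1000:.1f} g/s")
```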

  19. Hepatic Blood Perfusion Estimated by Dynamic Contrast-Enhanced Computed Tomography in Pigs: Limitations of the Slope Method

    PubMed Central

    Winterdahl, Michael; Sørensen, Michael; Keiding, Susanne; Mortensen, Frank V.; Alstrup, Aage K. O.; Hansen, Søren B.; Munk, Ole L.

    2012-01-01

    Objective: To determine whether dynamic contrast-enhanced computed tomography (DCE-CT) and the slope method can provide absolute measures of hepatic blood perfusion from the hepatic artery (HA) and portal vein (PV) at experimentally varied blood flow rates. Materials and Methods: Ten anesthetized 40-kg pigs underwent DCE-CT during periods of normocapnia (normal flow), hypocapnia (decreased flow), and hypercapnia (increased flow), which was induced by adjusting the ventilation. Reference blood flows in HA and PV were measured continuously by surgically placed ultrasound transit-time flowmeters. For each capnic condition, the DCE-CT-estimated absolute hepatic blood perfusion from HA and PV was calculated using the slope method and compared with flowmeter-based absolute measurements of hepatic perfusion, and relative errors were analyzed. Results: The relative errors (mean±SEM) of the DCE-CT based perfusion estimates were −21±23% for HA and 81±31% for PV (normocapnia), 9±23% for HA and 92±42% for PV (hypocapnia), and 64±28% for HA and −2±20% for PV (hypercapnia). The mean relative errors for HA were not significantly different from zero during hypo- and normocapnia, and the DCE-CT slope method could detect relative changes in HA perfusion between scans. Infusion of contrast agent led to significantly increased hepatic blood perfusion, which biased the PV perfusion estimates. Conclusions: Using the DCE-CT slope method, HA perfusion estimates were accurate at low and normal flow rates whereas PV perfusion estimates were inaccurate and imprecise. At high flow rate, the HA perfusion estimates were significantly biased. PMID:22836307
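
    In its simplest single-input form, the slope method estimates perfusion as the maximum upslope of the tissue enhancement curve divided by the peak enhancement of the feeding vessel. The sketch below uses synthetic curves as placeholders; the dual-input (HA/PV) analysis used in the study is more involved.

```python
import numpy as np

# Single-input slope method: perfusion = max upslope of tissue curve / peak of
# the input (vessel) curve. Curves below are synthetic placeholders.

t = np.arange(0.0, 60.0, 0.5)                                # s
aorta = 300.0 * np.exp(-0.5 * ((t - 15.0) / 4.0) ** 2)       # HU, input curve
tissue = 25.0 / (1.0 + np.exp(-(t - 18.0) / 3.0))            # HU, liver curve

max_slope = np.max(np.gradient(tissue, t))                   # HU/s
perfusion = max_slope / aorta.max()                          # 1/s
perfusion_ml = perfusion * 60.0 * 100.0                      # mL/min per 100 mL tissue

print(f"estimated perfusion: {perfusion_ml:.1f} mL/min/100 mL")
```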

  20. Errors in radial velocity variance from Doppler wind lidar

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.

  1. Errors in radial velocity variance from Doppler wind lidar

    DOE PAGES

    Wang, H.; Barthelmie, R. J.; Doubrawa, P.; ...

    2016-08-29

    A high-fidelity lidar turbulence measurement technique relies on accurate estimates of radial velocity variance that are subject to both systematic and random errors determined by the autocorrelation function of radial velocity, the sampling rate, and the sampling duration. Our paper quantifies the effect of the volumetric averaging in lidar radial velocity measurements on the autocorrelation function and the dependence of the systematic and random errors on the sampling duration, using both statistically simulated and observed data. For current-generation scanning lidars and sampling durations of about 30 min and longer, during which the stationarity assumption is valid for atmospheric flows, the systematic error is negligible but the random error exceeds about 10%.

  2. Ultrasonic Blood Flow Measurement in Haemodialysis

    PubMed Central

    Sampson, D.; Papadimitriou, M.; Kulatilake, A. E.

    1970-01-01

    A 5-megacycle Doppler flow meter, calibrated in-vitro, was found to give a linear response to blood flow in the ranges commonly encountered in haemodialysis. With this, blood flow through artificial kidneys could be measured simply and with a clinically acceptable error. The method is safe, as blood lines do not have to be punctured or disconnected and hence there is no risk of introducing infection. Besides its value as a research tool the flow meter is useful in evaluating new artificial kidneys. Suitably modified it could form the basis of an arterial flow alarm system. PMID:5416812

  3. Comparison of different tree sap flow up-scaling procedures using Monte-Carlo simulations

    NASA Astrophysics Data System (ADS)

    Tatarinov, Fyodor; Preisler, Yakir; Roahtyn, Shani; Yakir, Dan

    2015-04-01

    An important task in determining the forest ecosystem water balance is the estimation of stand transpiration, which allows evapotranspiration to be separated into transpiration and soil evaporation. This can be based on up-scaling measurements of sap flow in representative trees (SF), which can be done by different mathematical algorithms. The aim of the present study was to evaluate the error associated with different up-scaling algorithms under different conditions. Other types of error (such as measurement error, within-tree SF variability, choice of sample plot, etc.) were not considered here. A set of simulation experiments using the Monte Carlo technique was carried out and three up-scaling procedures were tested: (1) multiplying the mean stand sap flux density per unit sapwood cross-section area (SFD) by the total sapwood area (Klein et al., 2014); (2) deriving a linear dependence of tree sap flow on tree DBH and calculating SFstand from the SF predicted by DBH class and the stand DBH distribution (Cermak et al., 2004); (3) the same as method 2 but using a non-linear dependence. Simulations were performed under different SFD(DBH) slopes (bs: positive, negative, zero), different DBH and SFD standard deviations (Δd and Δs, respectively) and different DBH class sizes. It was assumed that all trees in a unit area are measured, and the total SF of all trees in the experimental plot was taken as the reference SFstand value. Under negative bs all models tend to overestimate SFstand and the error increases exponentially with decreasing bs. Under bs >0 all models tend to underestimate SFstand, but the error is much smaller than for bs

  4. A new method to calculate unsteady particle kinematics and drag coefficient in a subsonic post-shock flow

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bordoloi, Ankur D.; Ding, Liuyang; Martinez, Adam A.

    In this paper, we introduce a new method (piecewise integrated dynamics equation fit, PIDEF) that uses the particle dynamics equation to determine unsteady kinematics and drag coefficient (CD) for a particle in subsonic post-shock flow. The uncertainty of this method is assessed based on simulated trajectories for both quasi-steady and unsteady flow conditions. Traditional piecewise polynomial fitting (PPF) shows high sensitivity to measurement error and the function used to describe CD, creating high levels of relative error (>>1) when applied to unsteady shock-accelerated flows. The PIDEF method provides reduced uncertainty in calculations of unsteady acceleration and drag coefficient for both quasi-steady and unsteady flows. This makes PIDEF a preferable method over PPF for complex flows where the temporal response of CD is unknown. Finally, we apply PIDEF to experimental measurements of particle trajectories from 8-pulse particle tracking and determine the effect of incident Mach number on relaxation kinematics and drag coefficient of micron-sized particles.

  5. A new method to calculate unsteady particle kinematics and drag coefficient in a subsonic post-shock flow

    DOE PAGES

    Bordoloi, Ankur D.; Ding, Liuyang; Martinez, Adam A.; ...

    2018-04-26

    In this paper, we introduce a new method (piecewise integrated dynamics equation fit, PIDEF) that uses the particle dynamics equation to determine unsteady kinematics and drag coefficient (CD) for a particle in subsonic post-shock flow. The uncertainty of this method is assessed based on simulated trajectories for both quasi-steady and unsteady flow conditions. Traditional piecewise polynomial fitting (PPF) shows high sensitivity to measurement error and the function used to describe CD, creating high levels of relative error (>>1) when applied to unsteady shock-accelerated flows. The PIDEF method provides reduced uncertainty in calculations of unsteady acceleration and drag coefficient for both quasi-steady and unsteady flows. This makes PIDEF a preferable method over PPF for complex flows where the temporal response of CD is unknown. Finally, we apply PIDEF to experimental measurements of particle trajectories from 8-pulse particle tracking and determine the effect of incident Mach number on relaxation kinematics and drag coefficient of micron-sized particles.

  6. Precision and accuracy of clinical quantification of myocardial blood flow by dynamic PET: A technical perspective.

    PubMed

    Moody, Jonathan B; Lee, Benjamin C; Corbett, James R; Ficaro, Edward P; Murthy, Venkatesh L

    2015-10-01

    A number of exciting advances in PET/CT technology and improvements in methodology have recently converged to enhance the feasibility of routine clinical quantification of myocardial blood flow and flow reserve. Recent promising clinical results are pointing toward an important role for myocardial blood flow in the care of patients. Absolute blood flow quantification can be a powerful clinical tool, but its utility will depend on maintaining precision and accuracy in the face of numerous potential sources of methodological errors. Here we review recent data and highlight the impact of PET instrumentation, image reconstruction, and quantification methods, and we emphasize (82)Rb cardiac PET which currently has the widest clinical application. It will be apparent that more data are needed, particularly in relation to newer PET technologies, as well as clinical standardization of PET protocols and methods. We provide recommendations for the methodological factors considered here. At present, myocardial flow reserve appears to be remarkably robust to various methodological errors; however, with greater attention to and more detailed understanding of these sources of error, the clinical benefits of stress-only blood flow measurement may eventually be more fully realized.

  7. Assessment of Spectral Doppler in Preclinical Ultrasound Using a Small-Size Rotating Phantom

    PubMed Central

    Yang, Xin; Sun, Chao; Anderson, Tom; Moran, Carmel M.; Hadoke, Patrick W.F.; Gray, Gillian A.; Hoskins, Peter R.

    2013-01-01

    Preclinical ultrasound scanners are used to measure blood flow in small animals, but the potential errors in blood velocity measurements have not been quantified. This investigation rectifies this omission through the design and use of phantoms and evaluation of measurement errors for a preclinical ultrasound system (Vevo 770, Visualsonics, Toronto, ON, Canada). A ray model of geometric spectral broadening was used to predict velocity errors. A small-scale rotating phantom, made from tissue-mimicking material, was developed. True and Doppler-measured maximum velocities of the moving targets were compared over a range of angles from 10° to 80°. Results indicate that the maximum velocity was overestimated by up to 158% by spectral Doppler. There was good agreement (<10%) between theoretical velocity errors and measured errors for beam-target angles of 50°–80°. However, for angles of 10°–40°, the agreement was not as good (>50%). The phantom is capable of validating the performance of blood velocity measurement in preclinical ultrasound. PMID:23711503

  8. The velocity and vorticity fields of the turbulent near wake of a circular cylinder

    NASA Technical Reports Server (NTRS)

    Wallace, James; Ong, Lawrence; Moin, Parviz

    1995-01-01

    The purpose of this research is to provide a detailed experimental database of velocity and vorticity statistics in the very near wake (x/d less than 10) of a circular cylinder at a Reynolds number of 3900. This study has determined that estimations of the streamwise velocity component in flow fields with large nonzero cross-stream components are not accurate. Similarly, X-wire measurements of the u and v velocity components in flows containing large w are also subject to errors due to binormal cooling. Using the look-up table (LUT) technique, and by calibrating the X-wire probe used here to include the range of expected angles of attack (+/- 40 deg), accurate X-wire measurements of instantaneous u and v velocity components in the very near wake region of a circular cylinder have been accomplished. The approximate two-dimensionality of the present flow field was verified with four-wire probe measurements and, to some extent, by the spanwise correlation measurements with the multisensor rake. Hence, binormal cooling errors in the present X-wire measurements are small.

  9. Simultaneous Online Measurement of H2O and CO2 in the Humid CO2 Adsorption/Desorption Process.

    PubMed

    Yu, Qingni; Ye, Sha; Zhu, Jingke; Lei, Lecheng; Yang, Bin

    2015-01-01

    A dew point meter (DP) and an infrared (IR) CO2 analyzer were assembled in series in a humid CO2 adsorption/desorption system for simultaneous online measurements of H2O and CO2, respectively. The humidifier, which generates humid air flow by surface-flushing over a saturated brine solution, was self-made. Compared with the bubbling method, this approach made it relatively easy to obtain a low H2O content in the air flow and reduced its fluctuation. Water calibration of the DP-IR detector must be conducted to minimize the H2O measurement error. The relative error (RA) of the simultaneous online measurements of H2O and CO2 in the desorption process is lower than 0.1%. The high RA in the adsorption of H2O is attributed to H2O adsorption on the transfer pipe and amplification of the measurement error. The high accuracy of the simultaneous online measurements of H2O and CO2 is promising for investigating their co-adsorption/desorption behaviors, especially for direct CO2 capture from ambient air.

  10. Error Estimates of the Ares I Computed Turbulent Ascent Longitudinal Aerodynamic Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2012-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimates derived from an iterative grid-convergence refinement, are presented. Computational results are based on an unstructured grid, Reynolds-averaged Navier-Stokes analysis. The validity of the approach to compute the associated error estimates, derived from a base grid to an extrapolated infinite-size grid, was first demonstrated on a sub-scaled wind tunnel model at representative ascent flow conditions for which experimental data existed. Such analysis at the transonic flow conditions revealed a maximum deviation of about 23% between the computed longitudinal aerodynamic coefficients with the base grid and the measured data across the entire range of roll angles. This maximum deviation from the wind tunnel data was associated with the computed normal force coefficient at the transonic flow condition and was reduced to approximately 16% based on the infinite-size grid. However, all the computed aerodynamic coefficients with the base grid at the supersonic flow conditions showed a maximum deviation of only about 8%, with that level improving to approximately 5% for the infinite-size grid. The results and the error estimates based on the established procedure are also presented for the flight flow conditions.
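
    The record describes extrapolating from a base grid to an infinite-size grid to bound discretization error. A standard way to do this is Richardson extrapolation from solutions on systematically refined grids; the sketch below (with made-up coefficient values and an assumed constant refinement ratio) illustrates the idea, not the exact procedure used in the study.

        import math

        def richardson_extrapolate(f_coarse, f_medium, f_fine, r=2.0):
            """Estimate the grid-converged (infinite-grid) value of a quantity
            from solutions on three grids refined by a constant ratio r."""
            # Observed order of convergence from the three solutions
            p = math.log(abs((f_coarse - f_medium) / (f_medium - f_fine))) / math.log(r)
            # Extrapolated (infinite-grid) estimate
            f_exact = f_fine + (f_fine - f_medium) / (r**p - 1.0)
            # Relative discretization-error estimate of the fine-grid solution
            err_fine = abs((f_fine - f_exact) / f_exact)
            return f_exact, p, err_fine

        # Hypothetical normal-force coefficients on coarse/medium/fine grids
        cn_inf, order, err = richardson_extrapolate(0.252, 0.244, 0.241)
        print(f"extrapolated CN = {cn_inf:.4f}, observed order = {order:.2f}, "
              f"fine-grid error = {100*err:.2f}%")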

  11. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gwilliam, M. N.; Collins, D. J.; Leach, M. O.

    Purpose: To assess the feasibility of accurately quantifying the concentration of MRI contrast agent (CA) in pulsatile flowing blood by measuring its T1, as is commonly done to obtain a patient-specific arterial input function (AIF). Dynamic contrast-enhanced (DCE) MRI and pharmacokinetic (PK) modelling are widely used to produce measures of vascular function, but inaccurate measurement of the AIF undermines their accuracy. A proposed solution is to measure the T1 of blood in a large vessel using the Fram double flip angle method during the passage of a bolus of CA. This work expands on previous work by assessing pulsatile flow and the changes in T1 seen with a CA bolus. Methods: A phantom was developed which used a physiological pump to pass fluid of a known T1 (812 ms) through the centre of a head coil of a clinical 1.5T MRI scanner. Measurements were made using high temporal resolution sequences suitable for DCE-MRI and were used to validate a virtual phantom that simulated the expected errors due to pulsatile flow and the bolus-driven changes in CA concentration typically found in patients. Results: Measured and virtual results showed similar trends, although there were differences that may be attributed to the virtual phantom not accurately simulating the spin history of the fluid before entering the imaging volume. The relationship between T1 measurement and flow speed was non-linear. T1 measurement is compromised by new spins flowing into the imaging volume that have not been subject to enough excitations to reach steady state. The virtual phantom demonstrated a range of recorded T1 values for various simulated T1 / flow rate combinations. Conclusion: T1 measurement of flowing blood using standard DCE-MRI sequences is very challenging. Measurement error is non-linear with respect to instantaneous flow speed. Optimising sequence parameters and lowering the baseline T1 of blood should be considered.

  12. Statistical models for estimating daily streamflow in Michigan

    USGS Publications Warehouse

    Holtschlag, D.J.; Salehi, Habib

    1992-01-01

    Statistical models for estimating daily streamflow were analyzed for 25 pairs of streamflow-gaging stations in Michigan. Stations were paired by randomly choosing a station operated in 1989 at which 10 or more years of continuous flow data had been collected and at which flow is virtually unregulated; a nearby station was chosen where flow characteristics are similar. Streamflow data from the 25 randomly selected stations were used as the response variables; streamflow data at the nearby stations were used to generate a set of explanatory variables. Ordinary least-squares regression (OLSR) equations, autoregressive integrated moving-average (ARIMA) equations, and transfer function-noise (TFN) equations were developed to estimate the log transform of flow for the 25 randomly selected stations. The precision of each type of equation was evaluated on the basis of the standard deviation of the estimation errors. OLSR equations produce one set of estimation errors; ARIMA and TFN models each produce l sets of estimation errors corresponding to the forecast lead. The lead-l forecast is the estimate of flow l days ahead of the most recent streamflow used as a response variable in the estimation. In this analysis, the standard deviations of lead-l ARIMA and TFN forecast errors were generally lower than the standard deviation of OLSR errors for l < 2 days and l < 9 days, respectively. Composite estimates were computed as a weighted average of forecasts based on TFN equations and backcasts (forecasts of the reverse-ordered series) based on ARIMA equations. The standard deviation of composite errors varied throughout the length of the estimation interval and generally was at maximum near the center of the interval. For comparison with OLSR errors, the mean standard deviation of composite errors was computed for intervals of length 1 to 40 days. The mean standard deviation of length-l composite errors was generally less than the standard deviation of the OLSR errors for l < 32 days. In addition, the composite estimates ensure a gradual transition between periods of estimated and measured flows. Model performance among stations of differing model error magnitudes was compared by computing ratios of the mean standard deviation of the length-l composite errors to the standard deviation of OLSR errors. The mean error ratio for the set of 25 selected stations was less than 1 for intervals l < 32 days. Considering the frequency characteristics of the length of intervals of estimated record in Michigan, the effective mean error ratio for intervals < 30 days was 0.52. Thus, for intervals of estimation of 1 month or less, the error of the composite estimate is substantially lower than the error of the OLSR estimate.
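
    The composite estimate described above blends a forward TFN forecast with an ARIMA backcast across a gap of missing record. The abstract does not state the weighting scheme, so the sketch below assumes a simple linear taper of the weights across the gap, purely for illustration.

        import numpy as np

        def composite_estimate(tfn_forecast, arima_backcast):
            """Blend a forward TFN forecast and a reverse-ordered ARIMA backcast
            over a gap of missing daily flows (both cover the same gap, in
            calendar order).  Linear taper: full weight on the forecast at the
            start of the gap, full weight on the backcast at the end."""
            tfn = np.asarray(tfn_forecast, dtype=float)
            arima = np.asarray(arima_backcast, dtype=float)
            w = np.linspace(1.0, 0.0, len(tfn))      # weight on the forward forecast
            return w * tfn + (1.0 - w) * arima

        # Hypothetical 5-day gap in log-transformed flow
        print(composite_estimate([2.10, 2.08, 2.05, 2.03, 2.00],
                                 [2.20, 2.15, 2.10, 2.04, 1.99]))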

  13. Phase Error Correction in Time-Averaged 3D Phase Contrast Magnetic Resonance Imaging of the Cerebral Vasculature

    PubMed Central

    MacDonald, M. Ethan; Forkert, Nils D.; Pike, G. Bruce; Frayne, Richard

    2016-01-01

    Purpose Volume flow rate (VFR) measurements based on phase contrast (PC)-magnetic resonance (MR) imaging datasets have spatially varying bias due to eddy current induced phase errors. The purpose of this study was to assess the impact of phase errors in time averaged PC-MR imaging of the cerebral vasculature and explore the effects of three common correction schemes (local bias correction (LBC), local polynomial correction (LPC), and whole brain polynomial correction (WBPC)). Methods Measurements of the eddy current induced phase error from a static phantom were first obtained. In thirty healthy human subjects, the methods were then assessed in background tissue to determine if local phase offsets could be removed. Finally, the techniques were used to correct VFR measurements in cerebral vessels and compared statistically. Results In the phantom, phase error was measured to be <2.1 ml/s per pixel and the bias was reduced with the correction schemes. In background tissue, the bias was significantly reduced, by 65.6% (LBC), 58.4% (LPC) and 47.7% (WBPC) (p < 0.001 across all schemes). Correction did not lead to significantly different VFR measurements in the vessels (p = 0.997). In the vessel measurements, the three correction schemes led to flow measurement differences of -0.04 ± 0.05 ml/s, 0.09 ± 0.16 ml/s, and -0.02 ± 0.06 ml/s. Although there was an improvement in background measurements with correction, there was no statistical difference between the three correction schemes (p = 0.242 in background and p = 0.738 in vessels). Conclusions While eddy current induced phase errors can vary between hardware and sequence configurations, our results showed that the impact is small in a typical brain PC-MR protocol and does not have a significant effect on VFR measurements in cerebral vessels. PMID:26910600
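
    Of the three correction schemes compared, local polynomial correction (LPC) lends itself to a compact illustration: a low-order polynomial is fit to the phase in static background tissue and subtracted from the image. The sketch below assumes a first-order (planar) background fit on a 2D phase image; the published implementation may differ in fit order, masking, and region definition.

        import numpy as np

        def local_polynomial_correction(phase, static_mask, order=1):
            """Remove an eddy-current phase offset from a time-averaged PC-MR
            phase image by fitting a low-order 2D polynomial to static
            (non-vessel) background pixels and subtracting it everywhere.

            phase       : 2D array of phase (or velocity) values
            static_mask : boolean array marking background tissue pixels
            order       : polynomial order of the fitted background (1 = plane)
            """
            ny, nx = phase.shape
            y, x = np.mgrid[0:ny, 0:nx]
            # Design matrix of polynomial terms x^i * y^j with i + j <= order
            terms = [x**i * y**j for i in range(order + 1)
                                  for j in range(order + 1 - i)]
            A = np.stack([t[static_mask].ravel() for t in terms], axis=1)
            coeffs, *_ = np.linalg.lstsq(A, phase[static_mask].ravel(), rcond=None)
            background = sum(c * t for c, t in zip(coeffs, terms))
            return phase - background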

  14. Measurement of unsteady airflow velocity at nozzle outlet

    NASA Astrophysics Data System (ADS)

    Pyszko, René; Machů, Mário

    2017-09-01

    The paper deals with a method of measuring and evaluating the cooling air flow velocity at the outlet of a flat nozzle used for cooling a rolled steel product. Selected properties of the Prandtl and Pitot sensing tubes were measured and compared. A Pitot tube was used for operational measurements of the unsteady dynamic pressure of the air flowing from the nozzles to obtain the flow velocity. The article also discusses the effects of air temperature, pressure and relative air humidity on air density, as well as the influence of dynamic pressure filtering on the error of the averaged velocity.

  15. Analysing the accuracy of machine learning techniques to develop an integrated influent time series model: case study of a sewage treatment plant, Malaysia.

    PubMed

    Ansari, Mozafar; Othman, Faridah; Abunama, Taher; El-Shafie, Ahmed

    2018-04-01

    The function of a sewage treatment plant is to treat the sewage to acceptable standards before it is discharged into the receiving waters. To design and operate such plants, it is necessary to measure and predict the influent flow rate. In this research, the influent flow rate of a sewage treatment plant (STP) was modelled and predicted by autoregressive integrated moving average (ARIMA), nonlinear autoregressive network (NAR) and support vector machine (SVM) regression time series algorithms. To evaluate the models' accuracy, the root mean square error (RMSE) and coefficient of determination (R²) were calculated as initial assessment measures, while relative error (RE), peak flow criterion (PFC) and low flow criterion (LFC) were calculated as final evaluation measures to demonstrate the detailed accuracy of the selected models. An integrated model was developed based on the individual models' prediction ability for low, average and peak flow. An initial assessment of the results showed that the ARIMA model was the least accurate and the NAR model was the most accurate. The RE results also show that the SVM model's frequency of errors above 10% or below -10% was greater than the NAR model's. The influent was also forecasted up to 44 weeks ahead by both models. The graphical results indicate that the NAR model made better predictions than the SVM model. The final evaluation of NAR and SVM demonstrated that SVM made better predictions at peak flow and NAR fit well for low and average inflow ranges. The integrated model developed includes the NAR model for low and average influent and the SVM model for peak inflow.
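
    The evaluation measures named in this record (RMSE, R², RE) and the integrated low/average/peak model can be expressed compactly. The sketch below shows one plausible formulation; the peak-flow threshold used to switch between the NAR and SVM predictions is an assumed parameter, not a value taken from the paper.

        import numpy as np

        def rmse(obs, pred):
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            return np.sqrt(np.mean((obs - pred) ** 2))

        def r_squared(obs, pred):
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            ss_res = np.sum((obs - pred) ** 2)
            ss_tot = np.sum((obs - obs.mean()) ** 2)
            return 1.0 - ss_res / ss_tot

        def relative_error(obs, pred):
            # Signed relative error in percent, one value per sample
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            return (pred - obs) / obs * 100.0

        def integrated_prediction(flow_estimate, nar_pred, svm_pred, peak_threshold):
            """Combine the two models as the abstract describes qualitatively:
            SVM for peak inflow, NAR otherwise.  The threshold separating
            'peak' from 'low/average' inflow is an assumption."""
            return svm_pred if flow_estimate >= peak_threshold else nar_pred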

  16. A new approach for flow-through respirometry measurements in humans

    PubMed Central

    Ingebrigtsen, Jan P.; Bergouignan, Audrey; Ohkawara, Kazunori; Kohrt, Wendy M.; Lighton, John R. B.

    2010-01-01

    Indirect whole room calorimetry is commonly used in studies of human metabolism. These calorimeters can be configured as either push or pull systems. A major obstacle to accurately calculating gas exchange rates in a pull system is that the excurrent flow rate is increased above the incurrent flow rate, because the organism produces water vapor, which also dilutes the concentrations of respiratory gasses in the excurrent sample. A common approach to this problem is to dry the excurrent gasses prior to measurement, but if drying is incomplete, large errors in the calculated oxygen consumption will result. The other major potential source of error is fluctuations in the concentration of O2 and CO2 in the incurrent airstream. We describe a novel approach to measuring gas exchange using a pull-type whole room indirect calorimeter. Relative humidity and temperature of the incurrent and excurrent airstreams are measured continuously using high-precision, relative humidity and temperature sensors, permitting accurate measurement of water vapor pressure. The excurrent flow rates are then adjusted to eliminate the flow contribution from water vapor, and respiratory gas concentrations are adjusted to eliminate the effect of water vapor dilution. In addition, a novel switching approach is used that permits constant, uninterrupted measurement of the excurrent airstream while allowing frequent measurements of the incurrent airstream. To demonstrate the accuracy of this approach, we present the results of validation trials compared with our existing system and metabolic carts, as well as the results of standard propane combustion tests. PMID:20200135

  17. Velocity surveys in a turbine stator annular-cascade facility using laser Doppler techniques. [flow measurement and flow characteristics

    NASA Technical Reports Server (NTRS)

    Goldman, L. J.; Seasholtz, R. G.; Mclallin, K. L.

    1976-01-01

    A laser Doppler velocimeter (LDV) was used to determine the flow conditions downstream of an annular cascade of stator blades operating at an exit critical velocity ratio of 0.87. Two modes of LDV operation (continuous scan and discrete point) were investigated. Conventional pressure probe measurements were also made for comparison with the LDV results. Biasing errors that occur in the LDV measurement of velocity components were also studied. In addition, the effect of pressure probe blockage on the flow conditions was determined with the LDV. Photographs and descriptions of the test equipment used are given.

  18. Shear flow control of cold and heated rectangular jets by mechanical tabs. Volume 1: Results and discussion

    NASA Technical Reports Server (NTRS)

    Brown, W. H.; Ahuja, K. K.

    1989-01-01

    The effects of mechanical protrusions on the jet mixing characteristics of rectangular nozzles for heated and unheated subsonic and supersonic jet plumes were studied. The characteristics of a rectangular nozzle of aspect ratio 4 without the mechanical protrusions were first investigated. Intrusive probes were used to make the flow measurements. Possible errors introduced by intrusive probes in making shear flow measurements were also examined. Several scaled sizes of mechanical tabs were then tested, configured around the perimeter of the rectangular jet. Both the number and the location of the tabs were varied. From this, the best configuration was selected. The conclusions derived were: (1) intrusive probes can produce significant errors in the measurements of the velocity of jets if they are large in diameter and penetrate beyond the jet center; (2) rectangular jets without tabs, compared to circular jets of the same exit area, provide faster jet mixing; and (3) further mixing enhancement is possible by using mechanical tabs.

  19. Laser transit anemometer measurements of a JANNAF nozzle base velocity flow field

    NASA Technical Reports Server (NTRS)

    Hunter, William W., Jr.; Russ, C. E., Jr.; Clemmons, J. I., Jr.

    1990-01-01

    Velocity flow fields of a nozzle jet exhausting into a supersonic flow were surveyed. The measurements were obtained with a laser transit anemometer (LTA) system in the time domain with a correlation instrument. The LTA data is transformed into the velocity domain to remove the error that occurs when the data is analyzed in the time domain. The final data is shown in velocity vector plots for positions upstream, downstream, and in the exhaust plane of the jet nozzle.

  20. Laser velocimetry: A state-of-the-art overview

    NASA Technical Reports Server (NTRS)

    Stevenson, W. H.

    1982-01-01

    General systems design and optical and signal processing requirements for laser velocimetric measurement of flows are reviewed. Bias errors which occur in measurements using burst (counter) processors are discussed and particle seeding requirements are suggested.

  1. Multi-hole pressure probes to wind tunnel experiments and air data systems

    NASA Astrophysics Data System (ADS)

    Shevchenko, A. M.; Shmakov, A. S.

    2017-10-01

    The problems of developing a multihole pressure system to measure flow angularity, Mach number, and dynamic head for wind tunnel experiments or air data systems are discussed. A simple analytical model with separation of variables is derived for the multihole spherical pressure probe. The proposed model is uniform for small subsonic and supersonic speeds. An error analysis was performed. The error functions obtained make it possible to estimate the influence of the Mach number, the pitch angle, and the location of the pressure ports on the uncertainty of the determined flow parameters.

  2. State of charge monitoring of vanadium redox flow batteries using half cell potentials and electrolyte density

    NASA Astrophysics Data System (ADS)

    Ressel, Simon; Bill, Florian; Holtz, Lucas; Janshen, Niklas; Chica, Antonio; Flower, Thomas; Weidlich, Claudia; Struckmann, Thorsten

    2018-02-01

    The operation of vanadium redox flow batteries requires reliable in situ state of charge (SOC) monitoring. In this study, two SOC estimation approaches for the negative half cell are investigated. First, in situ open circuit potential measurements are combined with Coulomb counting in a one-step calibration of SOC and Nernst potential which doesn't need additional reference SOCs. In-sample and out-of-sample SOCs are estimated and analyzed, estimation errors ≤ 0.04 are obtained. In the second approach, temperature corrected in situ electrolyte density measurements are used for the first time in vanadium redox flow batteries for SOC estimation. In-sample and out-of-sample SOC estimation errors ≤ 0.04 demonstrate the feasibility of this approach. Both methods allow recalibration during battery operation. The actual capacity obtained from SOC calibration can be used in a state of health model.
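
    The first estimation approach combines an open-circuit potential measurement with a Nernst-type relation for the negative half cell. The sketch below inverts a simple Nernst expression for the V3+/V2+ couple to recover SOC from a measured half-cell potential; the formal potential E0' is assumed to come from a prior calibration, and activity effects are lumped into it, which is a simplification of the published one-step calibration.

        import numpy as np

        F = 96485.0          # Faraday constant [C/mol]
        R = 8.314            # gas constant [J/(mol K)]

        def soc_from_half_cell_potential(E_meas, E0_formal, temperature_K=298.15):
            """State of charge of the negative half cell (V3+/V2+ couple) from its
            measured open-circuit potential via the Nernst equation,
                E = E0' - (RT/F) * ln(SOC / (1 - SOC)).
            E0' is the calibrated formal potential; this is an illustrative
            simplification, not the paper's full calibration procedure."""
            x = (E0_formal - E_meas) * F / (R * temperature_K)
            return 1.0 / (1.0 + np.exp(-x))

        # Hypothetical values: formal potential -0.26 V, measured potential -0.29 V
        print(f"SOC = {soc_from_half_cell_potential(-0.29, -0.26):.2f}")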

  3. Radioisotope measurement of selected parameters of liquid-gas flow using single detector system

    NASA Astrophysics Data System (ADS)

    Zych, Marcin; Hanus, Robert; Jaszczur, Marek; Mosorov, Volodymyr; Świsulski, Dariusz

    2018-06-01

    To determine the parameters of two-phase flows using radioisotopes, two detectors are usually used. Knowing the distance between them, the velocity of the dispersed phase is calculated by time delay estimation. Such a measurement system requires two sealed gamma-ray sources. In some situations, however, it is also possible to determine the velocity of the dispersed phase using only one scintillation probe and one gamma-ray source. This requires proper signal analysis and prior calibration, and may also result in larger measurement errors. On the other hand, it allows measurements in hard-to-reach areas where there is often no room for a second detector. Additionally, with a prior calibration, it is possible to determine the void fraction or concentration of the selected phase. In this work an autocorrelation function was used to analyze the signal from the scintillation detector, which allowed the determination of air velocities in slug and plug flows with an accuracy of 8.5%. Based on analysis of the same signal, a void fraction with an error of 15% was determined.
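
    The record describes recovering dispersed-phase velocity from a single scintillation detector by analysing the autocorrelation of its signal together with a prior calibration. A minimal sketch of that idea is given below; the effective length of the detector field of view and the choice of characteristic lag (here, the first half-height crossing of the autocorrelation) are assumptions, not the study's exact estimator.

        import numpy as np

        def autocorrelation(signal):
            """Normalized autocorrelation of a zero-mean count-rate signal."""
            s = np.asarray(signal, float) - np.mean(signal)
            acf = np.correlate(s, s, mode="full")[len(s) - 1:]
            return acf / acf[0]

        def slug_velocity(signal, dt, effective_length):
            """Estimate gas-slug velocity from a single-detector signal.

            dt               : sampling interval of the count-rate signal [s]
            effective_length : calibrated length of the detector field of view [m]
            """
            acf = autocorrelation(signal)
            lag = int(np.argmax(acf < 0.5))      # first lag below half height
            if lag == 0:
                raise ValueError("signal too short or too smooth for this estimator")
            return effective_length / (lag * dt)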

  4. Aerodynamic parameters from distributed heterogeneous CNT hair sensors with a feedforward neural network.

    PubMed

    Magar, Kaman Thapa; Reich, Gregory W; Kondash, Corey; Slinker, Keith; Pankonien, Alexander M; Baur, Jeffery W; Smyers, Brian

    2016-11-10

    Distributed arrays of artificial hair sensors have bio-like sensing capabilities to obtain spatial and temporal surface flow information, which is an important aspect of an effective fly-by-feel system. The spatiotemporal surface flow measurement enables further exploration of additional flow features such as flow stagnation, separation, and reattachment points. Due to their inherent robustness and fault-tolerant capability, distributed arrays of hair sensors are well equipped to assess the aerodynamic and flow states in adverse conditions. In this paper, a local flow measurement from an array of artificial hair sensors in a wind tunnel experiment is used with a feedforward artificial neural network to predict aerodynamic parameters such as lift coefficient, moment coefficient, free-stream velocity, and angle of attack on an airfoil. We find prediction errors within 6% and 10% for the lift and moment coefficients, respectively. The errors for free-stream velocity and angle of attack were within 0.12 mph and 0.37 degrees, respectively. Knowledge of these parameters is key to finding the real-time forces and moments, which paves the way for effective control design to increase flight agility, stability, and maneuverability.
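
    The mapping from distributed hair-sensor outputs to aerodynamic parameters is a multi-output regression with a feedforward network. The sketch below shows one way to set this up with scikit-learn; the number of sensors, the network layer sizes, and the random placeholder data are all assumptions standing in for the wind-tunnel dataset described in the record.

        import numpy as np
        from sklearn.neural_network import MLPRegressor
        from sklearn.preprocessing import StandardScaler
        from sklearn.pipeline import make_pipeline

        # X: one row per sample of hair-sensor outputs (one column per sensor);
        # y: columns [lift coefficient, moment coefficient, free-stream velocity,
        # angle of attack].  Both are random placeholders here.
        rng = np.random.default_rng(0)
        X = rng.normal(size=(500, 12))     # 12 hypothetical hair sensors
        y = rng.normal(size=(500, 4))      # CL, CM, V_inf, alpha

        # Small feedforward network; layer sizes are assumptions, not the
        # architecture reported by the authors.
        model = make_pipeline(StandardScaler(),
                              MLPRegressor(hidden_layer_sizes=(32, 32),
                                           max_iter=2000, random_state=0))
        model.fit(X, y)
        pred = model.predict(X[:5])
        print(pred.shape)                  # (5, 4): predicted CL, CM, V_inf, alpha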

  5. An assessment of flow data from Klamath River sites between Link River Dam and Keno Dam, south-central Oregon

    USGS Publications Warehouse

    Risley, John C.; Hess, Glen W.; Fisher, Bruce J.

    2006-01-01

    Records of diversion and return flows for water years 1961-2004 along a reach of the Klamath River between Link River and Keno Dams in south-central Oregon were evaluated to determine the cause of a water-balance inconsistency in the hydrologic data. The data indicated that the reach was losing flow in the 1960s and 1970s and gaining flow in the 1980s and 1990s. The absolute mean annual net water-balance difference in flows between the first and second half of the 44-year period (1961-2004) was approximately 103,000 acre-feet per year (acre-ft/yr). The quality of the diversion and return-flow records used in the water balance was evaluated using U.S. Geological Survey (USGS) criteria for accuracy. With the exception of the USGS Klamath River at Keno record, which was rated as 'good' or 'excellent,' the eight other flow records, all from non-USGS flow-measurement sites, were rated as 'poor' by USGS standards due to insufficient data-collection documentation and a lack of direct discharge measurements to verify the rating curves. The record for the Link River site, the most upstream in the study area, included both river and westside power canal flows. Because of rating curve biases, the river flows might have been overestimated by 25,000 acre-ft/yr on average from water years 1961 to 1982 and underestimated by 7,000 acre-ft/yr on average from water years 1983 to 2004. For water years 1984-2004, westside power canal flows might have been underestimated by 11,000 acre-ft/yr. Some diversion and return flows (for mostly agricultural, industrial, and urban use) along the Klamath River study reach, not measured continuously and not included in the water-balance equation, also were evaluated. However, the sum of these diversion and return flows was insufficient to explain the water-balance inconsistency. The possibility that ground-water levels in lands adjacent to the river rose during water years 1961-2004 and caused an increase in ground-water discharge to the river also was evaluated. However, water-level data from local wells did not have a rising trend during the period. The most likely cause of the water-balance inconsistency was flow measurement error in the eight non-USGS flow records. Part of the water-balance inconsistency can be explained by a 43,000 acre-foot error in the river and canal flow portions of the Link River flow record. A remaining 60,000 acre-foot error might have been distributed among the seven other flow records, or much of the remaining 60,000 acre-foot error might have been in the Link River flow record because flows in that record had a greater magnitude than flows in the seven other records. As an additional analysis of the water-balance issue, flow records used in the water balance were evaluated for trends and compared to known changes in water management in the Bureau of Reclamation Klamath Project and Lower Klamath and Tule Lake National Wildlife Refuges over the 44-year period. Many of the water-management changes were implemented in the early 1980s. For three diversion flow records, 1983-2004 mean annual flows were 16,000, 8,000, and 21,000 acre-ft/yr greater than their 1961-82 mean annual flows. Return flows to the Klamath River at two flow-measurement sites decreased by 31,000 and 27,000 acre-ft/yr for 1983-2004 compared with the 1961-82 period.

  6. Void fraction and velocity measurement of simulated bubble in a rotating disc using high frame rate neutron radiography.

    PubMed

    Saito, Y; Mishima, K; Matsubayashi, M

    2004-10-01

    To evaluate measurement error of local void fraction and velocity field in a gas-molten metal two-phase flow by high-frame-rate neutron radiography, experiments using a rotating stainless-steel disc, which has several holes of various diameters and depths simulating gas bubbles, were performed. Measured instantaneous void fraction and velocity field of the simulated bubbles were compared with the calculated values based on the rotating speed, the diameter and the depth of the holes as parameters and the measurement error was evaluated. The rotating speed was varied from 0 to 350 rpm (tangential velocity of the simulated bubbles from 0 to 1.5 m/s). The effect of shutter speed of the imaging system on the measurement error was also investigated. It was revealed from the Lagrangian time-averaged void fraction profile that the measurement error of the instantaneous void fraction depends mainly on the light-decay characteristics of the fluorescent converter. The measurement error of the instantaneous local void fraction of simulated bubbles is estimated to be 20%. In the present imaging system, the light-decay characteristics of the fluorescent converter affect the measurement remarkably, and so should be taken into account in estimating the measurement error of the local void fraction profile.

  7. High Accuracy Acoustic Relative Humidity Measurement in Duct Flow with Air

    PubMed Central

    van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees

    2010-01-01

    An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0–12 m/s with an error of ±0.13 m/s, temperature 0–100 °C with an error of ±0.07 °C and relative humidity 0–100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments. PMID:22163610
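
    The line-averaged velocity and temperature in such an acoustic sensor follow from the upstream and downstream ultrasonic transit times. The sketch below uses the standard transit-time relations and an ideal-gas speed of sound for dry air; the published sensor additionally exploits the humidity dependence of the sound speed, which is not reproduced here, and the example geometry and times are hypothetical.

        import math

        def transit_time_flow(t_down, t_up, path_length, path_angle_deg):
            """Line-averaged speed of sound and axial gas velocity from the
            downstream and upstream ultrasonic transit times.

            t_down, t_up   : travel times with and against the flow [s]
            path_length    : acoustic path length between transducers [m]
            path_angle_deg : angle between acoustic path and duct axis [deg]
            """
            c = 0.5 * path_length * (1.0 / t_down + 1.0 / t_up)        # speed of sound
            v_path = 0.5 * path_length * (1.0 / t_down - 1.0 / t_up)   # along the path
            v_axial = v_path / math.cos(math.radians(path_angle_deg))  # duct velocity
            return c, v_axial

        def temperature_from_sound_speed(c, gamma=1.4, R_specific=287.05):
            """Gas temperature implied by the speed of sound for dry air; humidity
            shifts gamma and R slightly, which the published sensor exploits."""
            return c * c / (gamma * R_specific)     # kelvin

        c, v = transit_time_flow(2.85e-4, 2.95e-4, 0.10, 45.0)
        print(f"c = {c:.1f} m/s, v = {v:.2f} m/s, "
              f"T = {temperature_from_sound_speed(c):.1f} K")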

  8. High accuracy acoustic relative humidity measurement in duct flow with air.

    PubMed

    van Schaik, Wilhelm; Grooten, Mart; Wernaart, Twan; van der Geld, Cees

    2010-01-01

    An acoustic relative humidity sensor for air-steam mixtures in duct flow is designed and tested. Theory, construction, calibration, considerations on dynamic response and results are presented. The measurement device is capable of measuring line averaged values of gas velocity, temperature and relative humidity (RH) instantaneously, by applying two ultrasonic transducers and an array of four temperature sensors. Measurement ranges are: gas velocity of 0-12 m/s with an error of ± 0.13 m/s, temperature 0-100 °C with an error of ± 0.07 °C and relative humidity 0-100% with accuracy better than 2 % RH above 50 °C. Main advantage over conventional humidity sensors is the high sensitivity at high RH at temperatures exceeding 50 °C, with accuracy increasing with increasing temperature. The sensors are non-intrusive and resist highly humid environments.

  9. Density and Cavitating Flow Results from a Full-Scale Optical Multiphase Cryogenic Flowmeter

    NASA Technical Reports Server (NTRS)

    Korman, Valentin

    2007-01-01

    Liquid propulsion systems are hampered by poor flow measurements. The measurement of flow directly impacts safe motor operations, performance parameters as well as providing feedback from ground testing and developmental work. NASA Marshall Space Flight Center, in an effort to improve propulsion sensor technology, has developed an all optical flow meter that directly measures the density of the fluid. The full-scale sensor was tested in a transient, multiphase liquid nitrogen fluid environment. Comparison with traditional density models shows excellent agreement with fluid density with an error of approximately 0.8%. Further evaluation shows the sensor is able to detect cavitation or bubbles in the flow stream and separate out their resulting effects in fluid density.

  10. Problems with indirect determinations of peak streamflows in steep, desert stream channels

    USGS Publications Warehouse

    Glancy, Patrick A.; Williams, Rhea P.

    1994-01-01

    Many peak streamflow values used in flood analyses for desert areas are derived using the Manning equation. Data used in the equation are collected after the flow has subsided, and peak flow is thereby determined indirectly. Most measurement problems and associated errors in peak-flow determinations result from (1) channel erosion or deposition that cannot be discerned or properly evaluated after the fact, (2) unsteady and non-uniform flow that rapidly changes in magnitude, and (3) appreciable sediment transport that has unknown effects on energy dissipation. High calculated velocities and Froude numbers are unacceptable to some investigators. Measurement results could be improved by recording flows with a video camera, installing a recording stream gage and recording rain gages, measuring channel scour with buried chains, analyzing measured data by multiple techniques, and supplementing indirect measurements with direct measurements of stream velocities in similar ephemeral streams.
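
    Indirect peak-flow determinations of the kind discussed here rest on the Manning equation evaluated from post-flood surveys. A minimal SI-unit sketch is given below; the example channel geometry and roughness are hypothetical, and the difficulty of choosing them after the flood is exactly the error source the record describes.

        def manning_peak_flow(n, area, hydraulic_radius, slope):
            """Peak discharge from the Manning equation (SI units):
                Q = (1/n) * A * R^(2/3) * S^(1/2)
            n                : Manning roughness coefficient
            area             : flow cross-sectional area from high-water marks [m^2]
            hydraulic_radius : area / wetted perimeter [m]
            slope            : energy (or water-surface) slope [m/m]
            """
            return (1.0 / n) * area * hydraulic_radius ** (2.0 / 3.0) * slope ** 0.5

        # Hypothetical post-flood survey of a steep desert channel
        Q = manning_peak_flow(n=0.035, area=18.0, hydraulic_radius=0.9, slope=0.02)
        print(f"indirect peak flow estimate: {Q:.0f} m^3/s")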

  11. Controls of channel morphology and sediment concentration on flow resistance in a large sand-bed river: A case study of the lower Yellow River

    NASA Astrophysics Data System (ADS)

    Ma, Yuanxu; Huang, He Qing

    2016-07-01

    Accurate estimation of flow resistance is crucial for flood routing, flow discharge and velocity estimation, and engineering design. Various empirical and semiempirical flow resistance models have been developed during the past century; however, a universal flow resistance model for varying types of rivers has remained difficult to achieve to date. In this study, hydrometric data sets from six stations in the lower Yellow River during 1958-1959 are used to calibrate three empirical flow resistance models (Eqs. (5)-(7)) and evaluate their predictability. A group of statistical measures has been used to evaluate the goodness of fit of these models, including root mean square error (RMSE), coefficient of determination (CD), the Nash coefficient (NA), mean relative error (MRE), mean symmetry error (MSE), percentage of data with a relative error ≤ 50% and 25% (P50, P25), and percentage of data with overestimated error (POE). Three model selection criteria are also employed to assess the model predictability: Akaike information criterion (AIC), Bayesian information criterion (BIC), and a modified model selection criterion (MSC). The results show that mean flow depth (d) and water surface slope (S) can only explain a small proportion of variance in flow resistance. When channel width (w) and suspended sediment concentration (SSC) are involved, the new model (7) achieves a better performance than the previous ones. The MRE of model (7) is generally < 20%, which is apparently better than that reported by previous studies. This model is validated using the data sets from the corresponding stations during 1965-1966, and the results show larger uncertainties than the calibrating model. This probably resulted from a temporal shift in the dominant controls, caused by channel change under a varying flow regime. With advances in earth observation techniques, information about channel width, mean flow depth, and suspended sediment concentration can be effectively extracted from multisource satellite images. We expect that the empirical methods developed in this study can be used as an effective surrogate in estimation of flow resistance in large sand-bed rivers like the lower Yellow River.
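
    Several of the goodness-of-fit and model-selection measures listed in this record can be computed directly from observed and predicted resistance values. The sketch below shows the Nash coefficient, mean relative error, and one common residual-based form of AIC and BIC; the paper may use slightly different formulations.

        import numpy as np

        def nash_coefficient(obs, pred):
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            return 1.0 - np.sum((obs - pred) ** 2) / np.sum((obs - obs.mean()) ** 2)

        def mean_relative_error(obs, pred):
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            return np.mean(np.abs(pred - obs) / obs) * 100.0

        def aic_bic(obs, pred, n_params):
            """AIC and BIC from a Gaussian log-likelihood of the residuals
            (one common formulation; the study's variant may differ)."""
            obs, pred = np.asarray(obs, float), np.asarray(pred, float)
            n = obs.size
            rss = np.sum((obs - pred) ** 2)
            aic = n * np.log(rss / n) + 2 * n_params
            bic = n * np.log(rss / n) + n_params * np.log(n)
            return aic, bic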

  12. Evaluation of commercially available techniques and development of simplified methods for measuring grille airflows in HVAC systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Iain S.; Wray, Craig P.; Guillot, Cyril

    2003-08-01

    In this report, we discuss the accuracy of flow hoods for residential applications, based on laboratory tests and field studies. The results indicate that commercially available hoods are often inadequate to measure flows in residential systems, and that there can be a wide range of performance between different flow hoods. The errors are due to poor calibrations, sensitivity of existing hoods to grille flow non-uniformities, and flow changes from added flow resistance. We also evaluated several simple techniques for measuring register airflows that could be adopted by the HVAC industry and homeowners as simple diagnostics that are often as accurate as commercially available devices. Our test results also show that current calibration procedures for flow hoods do not account for field application problems. As a result, organizations such as ASHRAE or ASTM need to develop a new standard for flow hood calibration, along with a new measurement standard to address field use of flow hoods.

  13. Numerical experiment for ultrasonic-measurement-integrated simulation of three-dimensional unsteady blood flow.

    PubMed

    Funamoto, Kenichi; Hayase, Toshiyuki; Saijo, Yoshifumi; Yambe, Tomoyuki

    2008-08-01

    Integration of ultrasonic measurement and numerical simulation is a possible way to break through limitations of existing methods for obtaining complete information on hemodynamics. We herein propose Ultrasonic-Measurement-Integrated (UMI) simulation, in which feedback signals based on the optimal estimation of errors in the velocity vector determined by measured and computed Doppler velocities at feedback points are added to the governing equations. With an eye towards practical implementation of UMI simulation with real measurement data, its efficiency for three-dimensional unsteady blood flow analysis and a method for treating low time resolution of ultrasonic measurement were investigated by a numerical experiment dealing with complicated blood flow in an aneurysm. Even when simplified boundary conditions were applied, the UMI simulation reduced the errors of velocity and pressure to 31% and 53% in the feedback domain which covered the aneurysm, respectively. Local maximum wall shear stress was estimated, showing both the proper position and the value with 1% deviance. A properly designed intermittent feedback applied only at the time when measurement data were obtained had the same computational accuracy as feedback applied at every computational time step. Hence, this feedback method is a possible solution to overcome the insufficient time resolution of ultrasonic measurement.

  14. Evaluation of Probe-Induced Flow Distortion of Campbell CSAT3 Sonic Anemometers by Numerical Simulation

    NASA Astrophysics Data System (ADS)

    Huq, Sadiq; De Roo, Frederik; Foken, Thomas; Mauder, Matthias

    2017-10-01

    The Campbell CSAT3 sonic anemometer is one of the most popular instruments for turbulence measurements in basic micrometeorological research and ecological applications. While measurement uncertainty has been characterized by field experiments and wind-tunnel studies in the past, there are conflicting estimates, which motivated us to conduct a numerical experiment using large-eddy simulation to evaluate the probe-induced flow distortion of the CSAT3 anemometer under controlled conditions, and with exact knowledge of the undisturbed flow. As opposed to wind-tunnel studies, we imposed oscillations in both the vertical and horizontal velocity components at the distinct frequencies and amplitudes found in typical turbulence spectra in the surface layer. The resulting flow-distortion errors for the standard deviations of the vertical velocity component range from 3 to 7%, and from 1 to 3% for the horizontal velocity component, depending on the azimuth angle. The magnitude of these errors is almost independent of the frequency of wind speed fluctuations, provided the amplitude is typical for surface-layer turbulence. A comparison of the corrections for transducer shadowing proposed by both Kaimal et al. (Proc Dyn Flow Conf, 551-565, 1978) and Horst et al. (Boundary-Layer Meteorol 155:371-395, 2015) shows that both methods compensate for a large part of the observed error, but do not sufficiently account for the azimuth dependency. Further numerical simulations could be conducted in the future to characterize the flow distortion induced by other existing types of sonic anemometers for the purposes of optimizing their geometry.

  15. Open-ocean boundary conditions from interior data: Local and remote forcing of Massachusetts Bay

    USGS Publications Warehouse

    Bogden, P.S.; Malanotte-Rizzoli, P.; Signell, R.

    1996-01-01

    Massachusetts and Cape Cod Bays form a semienclosed coastal basin that opens onto the much larger Gulf of Maine. Subtidal circulation in the bay is driven by local winds and remotely driven flows from the gulf. The local-wind forced flow is estimated with a regional shallow water model driven by wind measurements. The model uses a gravity wave radiation condition along the open-ocean boundary. Results compare reasonably well with observed currents near the coast. In some offshore regions however, modeled flows are an order of magnitude less energetic than the data. Strong flows are observed even during periods of weak local wind forcing. Poor model-data comparisons are attributable, at least in part, to open-ocean boundary conditions that neglect the effects of remote forcing. Velocity measurements from within Massachusetts Bay are used to estimate the remotely forced component of the flow. The data are combined with shallow water dynamics in an inverse-model formulation that follows the theory of Bennett and McIntosh [1982], who considered tides. We extend their analysis to consider the subtidal response to transient forcing. The inverse model adjusts the a priori open-ocean boundary condition, thereby minimizing a combined measure of model-data misfit and boundary condition adjustment. A "consistency criterion" determines the optimal trade-off between the two. The criterion is based on a measure of plausibility for the inverse solution. The "consistent" inverse solution reproduces 56% of the average squared variation in the data. The local-wind-driven flow alone accounts for half of the model skill. The other half is attributable to remotely forced flows from the Gulf of Maine. The unexplained 44% comes from measurement errors and model errors that are not accounted for in the analysis. 

  16. Swirling flow in a model of the carotid artery: Numerical and experimental study

    NASA Astrophysics Data System (ADS)

    Kotmakova, Anna A.; Gataulin, Yakov A.; Yukhnev, Andrey D.

    2018-05-01

    The present contribution is aimed at numerical and experimental study of inlet swirling flow in a model of the carotid artery. Flow visualization is performed both with the ultrasound color Doppler imaging mode and with CFD data postprocessing of swirling flows in a carotid artery model. Special attention is paid to obtaining data for the secondary motion in the internal carotid artery. Principal errors of the measurement technique developed are estimated using the results of flow calculations.

  17. Effects of Reynolds number on orifice induced pressure error

    NASA Technical Reports Server (NTRS)

    Plentovich, E. B.; Gloss, B. B.

    1982-01-01

    Data previously reported for orifice induced pressure errors are extended to the case of higher Reynolds number flows, and a remedy is presented in the form of a porous metal plug for the orifice. Test orifices with apertures 0.330, 0.660, and 1.321 cm in diam. were fabricated on a flat plate for trials in the NASA Langley wind tunnel at Mach numbers 0.40-0.72. A boundary layer survey rake was also mounted on the flat plate to allow measurement of the total boundary layer pressures at the orifices. At the high Reynolds number flows studied, the orifice induced pressure error was found to be a function of the ratio of the orifice diameter to the boundary layer thickness. The error was effectively eliminated by the insertion of a porous metal disc set flush with the orifice outside surface.

  18. Motion of particles with inertia in a compressible free shear layer

    NASA Technical Reports Server (NTRS)

    Samimy, M.; Lele, S. K.

    1991-01-01

    The effects of the inertia of a particle on its flow-tracking accuracy and particle dispersion are studied using direct numerical simulations of 2D compressible free shear layers in convective Mach number (Mc) range of 0.2 to 0.6. The results show that particle response is well characterized by tau, the ratio of particle response time to the flow time scales (Stokes' number). The slip between particle and fluid imposes a fundamental limit on the accuracy of optical measurements such as LDV and PIV. The error is found to grow like tau up to tau = 1 and taper off at higher tau. For tau = 0.2 the error is about 2 percent. In the flow visualizations based on Mie scattering, particles with tau more than 0.05 are found to grossly misrepresent the flow features. These errors are quantified by calculating the dispersion of particles relative to the fluid. Overall, the effect of compressibility does not seem to be significant on the motion of particles in the range of Mc considered here.

  19. Comparison of Flow-Dependent and Static Error Correlation Models in the DAO Ozone Data Assimilation System

    NASA Technical Reports Server (NTRS)

    Wargan, K.; Stajner, I.; Pawson, S.

    2003-01-01

    In a data assimilation system the forecast error covariance matrix governs the way in which the data information is spread throughout the model grid. Implementation of a correct method of assigning covariances is expected to have an impact on the analysis results. The simplest models assume that correlations are constant in time and isotropic or nearly isotropic. In such models the analysis depends on the dynamics only through assumed error standard deviations. In applications to atmospheric tracer data assimilation this may lead to inaccuracies, especially in regions with strong wind shears or high gradients of potential vorticity, as well as in areas where no data are available. In order to overcome this problem we have developed a flow-dependent covariance model that is based on the short-term evolution of error correlations. The presentation compares the performance of a static and a flow-dependent model applied to a global three-dimensional ozone data assimilation system developed at NASA's Data Assimilation Office. We will present some results of validation against WMO balloon-borne sondes and the Polar Ozone and Aerosol Measurement (POAM) III instrument. Experiments show that allowing forecast error correlations to evolve with the flow results in a positive impact on assimilated ozone within the regions where data were not assimilated, particularly at high latitudes in both hemispheres and in the troposphere. We will also discuss statistical characteristics of both models; in particular we will argue that including the evolution of error correlations leads to stronger internal consistency of the data assimilation system.

  20. Accuracy of non-resonant laser-induced thermal acoustics (LITA) in a convergent-divergent nozzle flow

    NASA Astrophysics Data System (ADS)

    Richter, J.; Mayer, J.; Weigand, B.

    2018-02-01

    Non-resonant laser-induced thermal acoustics (LITA) was applied to measure Mach number, temperature and turbulence level along the centerline of a transonic nozzle flow. The accuracy of the measurement results was systematically studied regarding misalignment of the interrogation beam and frequency analysis of the LITA signals. 2D steady-state Reynolds-averaged Navier-Stokes (RANS) simulations were performed for reference. The simulations were conducted using ANSYS CFX 18 employing the shear-stress transport turbulence model. Post-processing of the LITA signals is performed by applying a discrete Fourier transformation (DFT) to determine the beat frequencies. It is shown that the systematic error of the DFT, which depends on the number of oscillations, signal chirp, and damping rate, is less than 1.5% for our experiments, resulting in an average error of 1.9% for Mach number. Further, the maximum calibration error is investigated for a worst-case scenario involving maximum in situ readjustment of the interrogation beam within the limits of constructive interference. It is shown that the signal intensity becomes zero if the interrogation angle is altered by 2%. This, together with the accuracy of frequency analysis, results in an error of about 5.4% for temperature throughout the nozzle. Comparison with numerical results shows good agreement within the error bars.
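
    The post-processing described above reduces each LITA trace to a beat frequency via a DFT, from which the speed of sound and temperature follow. The sketch below assumes the non-resonant relation c = f * Lambda (grating spacing Lambda known from the optical setup) and an ideal-gas speed of sound; it is an illustration of the signal-processing step, not the authors' code.

        import numpy as np

        def lita_beat_frequency(signal, dt, zero_pad_factor=8):
            """Dominant beat frequency of a LITA time trace via a zero-padded DFT.
            Zero padding refines the frequency grid; as noted in the record, the
            DFT error shrinks with the number of recorded oscillations."""
            s = np.asarray(signal, float) - np.mean(signal)
            n = len(s) * zero_pad_factor
            spectrum = np.abs(np.fft.rfft(s, n=n))
            freqs = np.fft.rfftfreq(n, d=dt)
            return freqs[np.argmax(spectrum[1:]) + 1]     # skip the DC bin

        def sound_speed_and_temperature(frequency, grating_spacing,
                                        gamma=1.4, R=287.05):
            """Speed of sound and static temperature from the beat frequency,
            assuming c = f * Lambda and an ideal-gas speed of sound; the grating
            spacing is a calibration constant of the setup."""
            c = frequency * grating_spacing
            return c, c * c / (gamma * R)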

  1. Evaluation of a method of estimating low-flow frequencies from base-flow measurements at Indiana streams

    USGS Publications Warehouse

    Wilson, John Thomas

    2000-01-01

    A mathematical technique of estimating low-flow frequencies from base-flow measurements was evaluated by using data for streams in Indiana. Low-flow frequencies at low-flow partial-record stations were estimated by relating base-flow measurements to concurrent daily flows at nearby streamflow-gaging stations (index stations) for which low-flow frequency curves had been developed. A network of long-term streamflow-gaging stations in Indiana provided a sample of sites with observed low-flow frequencies. Observed values of 7-day, 10-year low flow and 7-day, 2-year low flow were compared to predicted values to evaluate the accuracy of the method. Five test cases were used to evaluate the method under a variety of conditions in which the location of the index station and its drainage area varied relative to the partial-record station. A total of 141 pairs of streamflow-gaging stations were used in the five test cases. Four of the test cases used one index station; the fifth test case used two index stations. The number of base-flow measurements was varied for each test case to see if the accuracy of the method was affected by the number of measurements used. The most accurate and least variable results were produced when two index stations on the same stream or tributaries of the partial-record station were used. All but one value of the predicted 7-day, 10-year low flow were within 15 percent of the values observed for the long-term continuous record, and all of the predicted values of the 7-day, 2-year low flow were within 15 percent of the observed values. This apparent accuracy, to some extent, may be a result of the small sample set of 15. Of the four test cases that used one index station, the most accurate and least variable results were produced in the test case where the index station and partial-record station were on the same stream or on streams tributary to each other and where the index station had a larger drainage area than the partial-record station. In that test case, the method tended to overpredict, based on the median relative error. In 23 of 28 test pairs, the predicted 7-day, 10-year low flow was within 15 percent of the observed value; in 26 of 28 test pairs, the predicted 7-day, 2-year low flow was within 15 percent of the observed value. When the index station and partial-record station were on the same stream or streams tributary to each other and the index station had a smaller drainage area than the partial-record station, the method tended to underpredict the low-flow frequencies. Nineteen of 28 predicted values of the 7-day, 10-year low flow were within 15 percent of the observed values. Twenty-five of 28 predicted values of the 7-day, 2-year low flow were within 15 percent of the observed values. When the index station and the partial-record station were on different streams, the method tended to underpredict regardless of whether the index station had a larger or smaller drainage area than that of the partial-record station. Also, the variability of the relative error of estimate was greatest for the test cases that used index stations and partial-record stations from different streams. This variability, in part, may be caused by using more streamflow-gaging stations with small low-flow frequencies in these test cases. A small difference in the predicted and observed values can equate to a large relative error when dealing with stations that have small low-flow frequencies.
In the test cases that used one index station, the method tended to predict smaller low-flow frequencies as the number of base-flow measurements was reduced from 20 to 5. Overall, the average relative error of estimate and the variability of the predicted values increased as the number of base-flow measurements was reduced.
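
    The underlying estimation technique relates base-flow measurements at the partial-record station to concurrent daily flows at the index station and then transfers the index station's low-flow statistic through that relation. The sketch below assumes a log-linear regression form and hypothetical data; the report's actual fitting procedure may differ.

        import numpy as np

        def estimate_low_flow(base_flow_meas, concurrent_index_flow, index_7q10):
            """Estimate the 7-day, 10-year low flow at a partial-record station
            from base-flow measurements and an index gaging station, using a
            straight-line fit between the logarithms of the base-flow
            measurements and the logarithms of the concurrent index-station
            flows, evaluated at the index station's own 7Q10."""
            y = np.log10(np.asarray(base_flow_meas, float))
            x = np.log10(np.asarray(concurrent_index_flow, float))
            slope, intercept = np.polyfit(x, y, 1)
            return 10.0 ** (intercept + slope * np.log10(index_7q10))

        # Hypothetical data: five base-flow measurements (cfs), concurrent
        # index-station flows, and an index-station 7Q10 of 12 cfs
        q7_10 = estimate_low_flow([3.1, 4.8, 2.2, 6.0, 3.9],
                                  [10.0, 16.0, 7.5, 21.0, 13.0], 12.0)
        print(f"estimated 7Q10 at the partial-record station: {q7_10:.2f} cfs")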

  2. The Effects of Turbulence on the Measurements of Five-Hole Probes

    NASA Astrophysics Data System (ADS)

    Diebold, Jeffrey Michael

    The primary goals of this research were to quantify the effects of turbulence on the measurements of five-hole pressure probes (5HP) and to develop a model capable of predicting the response of a 5HP to turbulence. The five-hole pressure probe is a commonly used device in experimental fluid dynamics and aerodynamics. By measuring the pressure at the five pressure ports located on the tip of the probe it is possible to determine the total pressure, static pressure and the three components of velocity at a point in the flow. Previous research has demonstrated that the measurements of simple pressure probes such as Pitot probes are significantly influenced by the presence of turbulence. Turbulent velocity fluctuations contaminate the measurement of pressure due to the nonlinear relationship between pressure and velocity as well as the angular response characteristics of the probe. Despite our understanding of the effects of turbulence on Pitot and static pressure probes, relatively little is known about the influence of turbulence on five-hole probes. This study attempts to fill this gap in our knowledge by using advanced experimental techniques to quantify these turbulence-induced errors and by developing a novel method of predicting the response of a five-hole probe to turbulence. A few studies have attempted to quantify turbulence-induced errors in five-hole probe measurements but they were limited by their inability to accurately measure the total and static pressure in the turbulent flow. The current research utilizes a fast-response five-hole probe (FR5HP) in order to accurately quantify the effects of turbulence on different standard five-hole probes (Std5HP). The FR5HP is capable of measuring the instantaneous flowfield and, unlike the Std5HP, the FR5HP measurements are not contaminated by the turbulent velocity fluctuations. Measurements with the FR5HP and two different Std5HPs were acquired in the highly turbulent wakes of 2D and 3D cylinders in order to quantify the turbulence-induced errors in Std5HP measurements. The primary contribution of this work is the development and validation of a simulation method to predict the measurements of a Std5HP in an arbitrary turbulent flow. This simulation utilizes a statistical approach to estimating the pressure at each port on the tip of the probe. The angular response of the probe is modeled using experimental calibration data for each five-hole probe. The simulation method is validated against the experimental measurements of the Std5HPs, and then used to study how the characteristics of the turbulent flowfield influence the measurements of the Std5HPs. It is shown that the total pressure measured by a Std5HP is increased by axial velocity fluctuations but decreased by the transverse fluctuations. The static pressure was shown to be very sensitive to the transverse fluctuations while the axial fluctuations had a negligible effect. As with Pitot probes, the turbulence-induced errors in the Std5HP measurements were dependent on both the properties of the turbulent flow and the geometry of the probe tip. It is then demonstrated that this simulation method can be used to correct the measurements of a Std5HP in a turbulent flow if the characteristics of the turbulence are known. Finally, it is demonstrated that turbulence-induced errors in Std5HP measurements can have a substantial effect on the determination of the profile and vortex-induced drag from measurements in the wake of a 3D body.
The results showed that, while the calculation of both drag components was influenced by turbulence-induced errors, the largest effect was on the determination of vortex-induced drag.

  3. Multiple Velocity Profile Measurements in Hypersonic Flows Using Sequentially-Imaged Fluorescence Tagging

    NASA Technical Reports Server (NTRS)

    Bathel, Brett F.; Danehy, Paul M.; Inman, Jennifer A.; Jones, Stephen B.; Ivey, Christopher B.; Goyne, Christopher P.

    2010-01-01

    Nitric-oxide planar laser-induced fluorescence (NO PLIF) was used to perform velocity measurements in hypersonic flows by generating multiple tagged lines which fluoresce as they convect downstream. For each laser pulse, a single interline, progressive scan intensified CCD (charge-coupled device) camera was used to obtain two sequential images of the NO molecules that had been tagged by the laser. The CCD configuration allowed for sub-microsecond acquisition of both images, resulting in sub-microsecond temporal resolution as well as sub-mm spatial resolution (0.5-mm horizontal, 0.7-mm vertical). Determination of axial velocity was made by application of a cross-correlation analysis of the horizontal shift of individual tagged lines. A numerical study of measured velocity error due to a uniform and linearly-varying collisional rate distribution was performed. Quantification of systematic errors, the contribution of gating/exposure duration errors, and the influence of collision rate on temporal uncertainty were made. Quantification of the spatial uncertainty depended upon the signal-to-noise ratio of the acquired profiles. This velocity measurement technique has been demonstrated for two hypersonic flow experiments: (1) a reaction control system (RCS) jet on an Orion Crew Exploration Vehicle (CEV) wind tunnel model and (2) a 10-degree half-angle wedge containing a 2-mm tall, 4-mm wide cylindrical boundary layer trip. The experiments were performed at the NASA Langley Research Center's 31-Inch Mach 10 Air Tunnel.
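
    The axial velocity in this technique comes from cross-correlating the two sequential images of each tagged line to find its horizontal shift. The sketch below shows the integer-pixel version of that step for a single image row; sub-pixel peak fitting, which the quoted spatial resolution implies, is noted but not implemented here.

        import numpy as np

        def line_shift_velocity(profile_t0, profile_t1, dt, pixel_size):
            """Axial velocity from the horizontal shift of a tagged fluorescence
            line between two sequential exposures, found by cross-correlation.

            profile_t0, profile_t1 : 1D intensity profiles from the same row of
                                     the two images
            dt         : time between the two exposures [s]
            pixel_size : physical size of one pixel along the shift direction [m]
            """
            a = np.asarray(profile_t0, float) - np.mean(profile_t0)
            b = np.asarray(profile_t1, float) - np.mean(profile_t1)
            corr = np.correlate(b, a, mode="full")
            shift_px = np.argmax(corr) - (len(a) - 1)   # displacement of b vs. a
            # A Gaussian fit around the correlation peak would give sub-pixel shift
            return shift_px * pixel_size / dt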

  4. Influence of Spatial Resolution in Three-dimensional Cine Phase Contrast Magnetic Resonance Imaging on the Accuracy of Hemodynamic Analysis

    PubMed Central

    Fukuyama, Atsushi; Isoda, Haruo; Morita, Kento; Mori, Marika; Watanabe, Tomoya; Ishiguro, Kenta; Komori, Yoshiaki; Kosugi, Takafumi

    2017-01-01

    Introduction: We aim to elucidate the effect of spatial resolution of three-dimensional cine phase contrast magnetic resonance (3D cine PC MR) imaging on the accuracy of the blood flow analysis, and examine the optimal setting for spatial resolution using flow phantoms. Materials and Methods: The flow phantom has five types of acrylic pipes that represent human blood vessels (inner diameters: 15, 12, 9, 6, and 3 mm). The pipes were fixed with 1% agarose containing 0.025 mol/L gadolinium contrast agent. A blood-mimicking fluid with human blood property values was circulated through the pipes at a steady flow. Magnetic resonance (MR) images (three-directional phase images with speed information and magnitude images for information of shape) were acquired using the 3-Tesla MR system and receiving coil. Temporal changes in spatially-averaged velocity and maximum velocity were calculated using hemodynamic analysis software. We calculated the error rates of the flow velocities based on the volume flow rates measured with a flowmeter and examined measurement accuracy. Results: When the acrylic pipe was the size of the thoracicoabdominal or cervical artery and the ratio of pixel size for the pipe was set at 30% or lower, spatially-averaged velocity measurements were highly accurate. When the pixel size ratio was set at 10% or lower, maximum velocity could be measured with high accuracy. It was difficult to accurately measure maximum velocity of the 3-mm pipe, which was the size of an intracranial major artery, but the error for spatially-averaged velocity was 20% or less. Conclusions: Flow velocity measurement accuracy of 3D cine PC MR imaging for pipes with inner sizes equivalent to vessels in the cervical and thoracicoabdominal arteries is good. The flow velocity accuracy for the pipe with a 3-mm-diameter that is equivalent to major intracranial arteries is poor for maximum velocity, but it is relatively good for spatially-averaged velocity. PMID:28132996

  5. Noninvasive calculation of the aortic blood pressure waveform from the flow velocity waveform: a proof of concept

    PubMed Central

    Vennin, Samuel; Mayer, Alexia; Li, Ye; Fok, Henry; Clapp, Brian; Alastruey, Jordi

    2015-01-01

    Estimation of aortic and left ventricular (LV) pressure usually requires measurements that are difficult to acquire during the imaging required to obtain concurrent LV dimensions essential for determination of LV mechanical properties. We describe a novel method for deriving aortic pressure from the aortic flow velocity. The target pressure waveform is divided into an early systolic upstroke, determined by the water hammer equation, and a diastolic decay equal to that in the peripheral arterial tree, interposed by a late systolic portion described by a second-order polynomial constrained by conditions of continuity and conservation of mean arterial pressure. Pulse wave velocity (PWV, which can be obtained through imaging), mean arterial pressure, diastolic pressure, and diastolic decay are required inputs for the algorithm. The algorithm was tested using 1) pressure data derived theoretically from prespecified flow waveforms and properties of the arterial tree using a single-tube 1-D model of the arterial tree, and 2) experimental data acquired from a pressure/Doppler flow velocity transducer placed in the ascending aorta in 18 patients (mean ± SD: age 63 ± 11 yr, aortic BP 136 ± 23/73 ± 13 mmHg) at the time of cardiac catheterization. For experimental data, PWV was calculated from measured pressures/flows, and mean and diastolic pressures and diastolic decay were taken from measured pressure (i.e., were assumed to be known). Pressure reconstructed from measured flow agreed well with theoretical pressure: mean ± SD root mean square (RMS) error 0.7 ± 0.1 mmHg. Similarly, for experimental data, pressure reconstructed from measured flow agreed well with measured pressure (mean RMS error 2.4 ± 1.0 mmHg). First systolic shoulder and systolic peak pressures were also accurately rendered (mean ± SD difference 1.4 ± 2.0 mmHg for peak systolic pressure). This is the first noninvasive derivation of aortic pressure based on fluid dynamics (flow and wave speed) in the aorta itself. PMID:26163442
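
    A heavily simplified sketch of the two bounding segments of the reconstruction (the water-hammer systolic upstroke and the exponential diastolic decay) is given below; the late-systolic polynomial bridge is omitted and all numerical inputs are illustrative, not patient data.

```python
import numpy as np

RHO = 1060.0  # assumed blood density, kg/m^3

def upstroke_pressure(p_dia_mmHg, pwv_m_s, flow_velocity_m_s):
    """Early systolic upstroke from the water hammer relation dP = rho * PWV * dU."""
    dp_pa = RHO * pwv_m_s * flow_velocity_m_s        # pressure rise in Pa
    return p_dia_mmHg + dp_pa / 133.322              # convert Pa -> mmHg

def diastolic_decay(p_end_systole_mmHg, tau_s, t_s):
    """Exponential pressure decay in diastole with time constant tau."""
    return p_end_systole_mmHg * np.exp(-t_s / tau_s)

# Hypothetical inputs: PWV 8 m/s, diastolic pressure 75 mmHg,
# early systolic velocity ramp from 0 to 0.9 m/s
u = np.linspace(0.0, 0.9, 50)
p_upstroke = upstroke_pressure(75.0, 8.0, u)
print(p_upstroke[-1])   # pressure at the first systolic shoulder (~132 mmHg here)
```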

  6. Propagation of stage measurement uncertainties to streamflow time series

    NASA Astrophysics Data System (ADS)

    Horner, Ivan; Le Coz, Jérôme; Renard, Benjamin; Branger, Flora; McMillan, Hilary

    2016-04-01

    Streamflow uncertainties due to stage measurement errors are generally overlooked in the promising probabilistic approaches that have emerged in the last decade. We introduce an original error model for propagating stage uncertainties through a stage-discharge rating curve within a Bayesian probabilistic framework. The method takes into account both rating curve uncertainty (parametric and structural errors) and stage uncertainty (systematic and non-systematic errors). Practical ways to estimate the different types of stage errors are also presented: (1) non-systematic errors due to instrument resolution and precision and non-stationary waves and (2) systematic errors due to gauge calibration against the staff gauge. The method is illustrated at a site where the rating-curve-derived streamflow can be compared with an accurate streamflow reference. The agreement between the two time series is satisfactory overall. Moreover, the quantification of uncertainty is also satisfactory, since the streamflow reference is compatible with the streamflow uncertainty intervals derived from the rating curve and the stage uncertainties. Illustrations from other sites are also presented. Results contrast markedly depending on site features. In some cases, streamflow uncertainty is mainly due to stage measurement errors. The results also show the importance of discriminating systematic and non-systematic stage errors, especially for long-term flow averages. Perspectives for improving and validating the streamflow uncertainty estimates are finally discussed.
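
    The following sketch illustrates only the stage-error part of such a propagation, pushing systematic and non-systematic stage perturbations through an assumed power-law rating curve by Monte Carlo; the rating parameters, error magnitudes, and stage record are placeholders, and the full Bayesian treatment of rating-curve parametric and structural errors is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)

def rating_curve(h, a=30.0, b=0.2, c=1.6):
    """Illustrative power-law rating curve Q = a*(h - b)^c (h in m, Q in m^3/s)."""
    return a * np.clip(h - b, 0.0, None) ** c

def propagate_stage_errors(h_series, sigma_ns=0.005, sigma_sys=0.01, n_mc=2000):
    """Monte Carlo propagation of stage errors to streamflow.

    sigma_ns  : non-systematic stage error (resolution/precision/waves), per time step (m)
    sigma_sys : systematic stage error (gauge calibration), constant within a realization (m)
    """
    n = len(h_series)
    q = np.empty((n_mc, n))
    for i in range(n_mc):
        sys_err = rng.normal(0.0, sigma_sys)               # one offset per realization
        ns_err = rng.normal(0.0, sigma_ns, size=n)         # independent at each time step
        q[i] = rating_curve(h_series + sys_err + ns_err)
    return np.percentile(q, [2.5, 50, 97.5], axis=0)       # 95 % uncertainty band

h = np.array([0.6, 0.8, 1.2, 1.5, 1.1])                    # hypothetical stage record (m)
low, median, high = propagate_stage_errors(h)
print(median, high - low)
```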

  7. Non-intrusive acoustic measurement of flow velocity and temperature in a high subsonic Mach number jet

    NASA Astrophysics Data System (ADS)

    Otero, R., Jr.; Lowe, K. T.; Ng, W. F.

    2018-01-01

    In previous studies, sonic anemometry and thermometry have generally been used to measure low subsonic Mach flow conditions. Recently, a novel configuration was proposed and used to measure unheated jet velocities up to Mach 0.83 non-intrusively. The objective of this investigation is to test the novel configuration in higher temperature conditions and explore the effects of fluid temperature on mean velocity and temperature measurement accuracy. The current work presents non-intrusive acoustic measurements of single-stream jet conditions up to Mach 0.7 and total temperatures from 299 K to 700 K. Comparison of acoustically measured velocity and static temperature with probe data indicate root mean square (RMS) velocity errors of 2.6 m s-1 (1.1% of the maximum jet centerline velocity), 4.0 m s-1 (1.2%), and 8.5 m s-1 (2.4%), respectively, for 299, 589, and 700 K total temperature flows up to Mach 0.7. RMS static temperature errors of 7.5 K (2.5% of total temperature), 8.1 K (1.3%), and 23.3 K (3.3%) were observed for the same respective total temperature conditions. To the authors’ knowledge, this is the first time a non-intrusive acoustic technique has been used to simultaneously measure mean fluid velocity and static temperatures in high subsonic Mach numbers up to 0.7. Overall, the findings of this work support the use of acoustics for non-intrusive flow monitoring. The ability to measure mean flow conditions at high subsonic Mach numbers and temperatures makes this technique a viable candidate for gas turbine applications, in particular.
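
    The sketch below shows the standard reciprocal time-of-flight relations on which sonic anemometry and thermometry rest (it does not reproduce the novel configuration of the paper); the path length and transit times are hypothetical.

```python
GAMMA = 1.4         # ratio of specific heats for air
R_SPECIFIC = 287.0  # specific gas constant for air, J/(kg K)

def sonic_velocity_temperature(path_length_m, t_downstream_s, t_upstream_s):
    """Mean flow velocity and static temperature from reciprocal transit times.

    Standard sonic-anemometry relations for a single acoustic path aligned with the
    flow: the flow component adds to the sound speed downstream and subtracts upstream.
    """
    v = 0.5 * path_length_m * (1.0 / t_downstream_s - 1.0 / t_upstream_s)
    c = 0.5 * path_length_m * (1.0 / t_downstream_s + 1.0 / t_upstream_s)
    temperature = c ** 2 / (GAMMA * R_SPECIFIC)   # from c = sqrt(gamma * R * T)
    return v, temperature

# Hypothetical transit times over a 0.5 m path: sound speed 420 m/s, flow 200 m/s
v, T = sonic_velocity_temperature(0.5,
                                  t_downstream_s=0.5 / (420.0 + 200.0),
                                  t_upstream_s=0.5 / (420.0 - 200.0))
print(v, T)   # recovers ~200 m/s and the static temperature implied by c = 420 m/s
```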

  8. Evaluation of a turbine flow meter (Ventilometer Mark 2) in the measurement of ventilation.

    PubMed

    Cooper, C B; Harris, N D; Howard, P

    1990-01-01

    We have evaluated a turbine flow meter (Ventilometer Mark 2, PK Morgan, Kent, UK) at low flow rates and levels of ventilation which are likely to be encountered during exercise in patients with chronic respiratory disease. Pulsatile flows were generated from a volume-cycled mechanical ventilator; the flow waveform was modified by damping to simulate a human breathing pattern. Comparative measurements of ventilation were made whilst varying tidal volume (VT) from 0.22 to 1.13 L and respiratory rate (fR) from 10 to 35 min-1. At lower levels of ventilation the instrument tended to under-read, especially with increasing fR. The calibration factor must be adjusted to match the level of ventilation if the measurement errors are to be within 5%.

  9. Orifice-induced pressure error studies in Langley 7- by 10-foot high-speed tunnel

    NASA Technical Reports Server (NTRS)

    Plentovich, E. B.; Gloss, B. B.

    1986-01-01

    For some time it has been known that the presence of a static pressure measuring hole will disturb the local flow field in such a way that the sensed static pressure will be in error. The results of previous studies aimed at studying the error induced by the pressure orifice were for relatively low Reynolds number flows. Because of the advent of high Reynolds number transonic wind tunnels, a study was undertaken to assess the magnitude of this error at higher Reynolds numbers than those previously published and to study a possible method of eliminating this pressure error. This study was conducted in the Langley 7- by 10-Foot High-Speed Tunnel on a flat plate. The model was tested at Mach numbers from 0.40 to 0.72 and at Reynolds numbers from 7.7 × 10^6 to 11 × 10^6 per meter (2.3 × 10^6 to 3.4 × 10^6 per foot), respectively. The results indicated that as orifice size increased, the pressure error also increased but that a porous metal (sintered metal) plug inserted in an orifice could greatly reduce the pressure error induced by the orifice.

  10. Evaluation of a flow direction probe and a pitot-static probe on the F-14 airplane at high angles of attack and sideslip

    NASA Technical Reports Server (NTRS)

    Larson, T. J.

    1984-01-01

    The measurement performance of a hemispherical flow-angularity probe and a fuselage-mounted pitot-static probe was evaluated at high flow angles as part of a test program on an F-14 airplane. These evaluations were performed using a calibrated pitot-static noseboom equipped with vanes for reference flow direction measurements, and another probe incorporating vanes but mounted on a pod under the fuselage nose. Data are presented for angles of attack up to 63 deg, angles of sideslip from -22 deg to 22 deg, and for Mach numbers from approximately 0.3 to 1.3. During maneuvering flight, the hemispherical flow-angularity probe exhibited flow angle errors that exceeded 2 deg. Pressure measurements with the pitot-static probe resulted in very inaccurate data above a Mach number of 0.87 and exhibited large sensitivities with flow angle.

  11. Heat flow measurements on the southeast coast of Australia

    USGS Publications Warehouse

    Hyndman, R.D.; Jaeger, J.C.; Sass, J.H.

    1969-01-01

    Three boreholes have been drilled for the Australian National University near the southeast coast of New South Wales, Australia. The heat flows found are 1.1, 1.0, and 1.3 μcal/cm2 sec. The errors resulting from the proximity of the sea and a lake, surface temperature change, conductivity structure and water flow have been examined. The radioactive heat production in some of the intrusive rocks of the area has also been measured. The heat flows are much lower than the values of about 2.0 found elsewhere in southeastern Australia. The lower values appear to be part of a distinct heat flow province in eastern Australia. © 1969.

  12. Field and laboratory determination of water-surface elevation and velocity using noncontact measurements

    USGS Publications Warehouse

    Nelson, Jonathan M.; Kinzel, Paul J.; Schmeeckle, Mark Walter; McDonald, Richard R.; Minear, Justin T.

    2016-01-01

    Noncontact methods for measuring water-surface elevation and velocity in laboratory flumes and rivers are presented with examples. Water-surface elevations are measured using an array of acoustic transducers in the laboratory and using laser scanning in field situations. Water-surface velocities are based on using particle image velocimetry or other machine vision techniques on infrared video of the water surface. Using spatial and temporal averaging, results from these methods provide information that can be used to develop estimates of discharge for flows over known bathymetry. Making such estimates requires relating water-surface velocities to vertically averaged velocities; the methods here use standard relations. To examine where these relations break down, laboratory data for flows over simple bumps of three amplitudes are evaluated. As anticipated, discharges determined from surface information can have large errors where nonhydrostatic effects are large. In addition to investigating and characterizing this potential error in estimating discharge, a simple method for correction of the issue is presented. With a simple correction based on bed gradient along the flow direction, remotely sensed estimates of discharge appear to be viable.
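
    A minimal sketch of the discharge estimate from surface information is shown below, assuming a conventional surface-to-depth-averaged velocity ratio of about 0.85; the cross-section values are invented for illustration.

```python
import numpy as np

def discharge_from_surface_velocity(surface_velocity, depth, station_width, k_surface=0.85):
    """Estimate discharge from remotely sensed surface velocities.

    surface_velocity : per-station surface velocity (m/s), e.g. from image velocimetry
    depth            : per-station flow depth over known bathymetry (m)
    station_width    : per-station width increment (m)
    k_surface        : ratio of depth-averaged to surface velocity; ~0.85 is the
                       conventional value for a logarithmic profile and breaks down
                       where nonhydrostatic effects are strong (e.g. over steep bumps)
    """
    depth_avg_velocity = k_surface * np.asarray(surface_velocity)
    return float(np.sum(depth_avg_velocity * np.asarray(depth) * np.asarray(station_width)))

# Hypothetical cross-section with five stations
u_surf = [0.6, 0.9, 1.1, 0.95, 0.5]   # m/s
d = [0.4, 0.8, 1.0, 0.7, 0.3]         # m
w = [2.0, 2.0, 2.0, 2.0, 2.0]         # m
print(discharge_from_surface_velocity(u_surf, d, w))  # m^3/s
```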

  13. Ultrasonic Doppler measurement of renal artery blood flow

    NASA Technical Reports Server (NTRS)

    Freund, W. R.; Meindl, J. D.

    1975-01-01

    An extensive evaluation of the practical and theoretical limitations encountered in the use of totally implantable CW Doppler flowmeters is provided. Theoretical analyses, computer models, in-vitro and in-vivo calibration studies describe the sources and magnitudes of potential errors in the measurement of blood flow through the renal artery, as well as larger vessels in the circulatory system. The evaluation of new flowmeter/transducer systems and their use in physiological investigations is reported.

  14. Analysis of methods to estimate spring flows in a karst aquifer

    USGS Publications Warehouse

    Sepulveda, N.

    2009-01-01

    Hydraulically and statistically based methods were analyzed to identify the most reliable method to predict spring flows in a karst aquifer. Measured water levels at nearby observation wells, measured spring pool altitudes, and the distance between observation wells and the spring pool were the parameters used to match measured spring flows. Measured spring flows at six Upper Floridan aquifer springs in central Florida were used to assess the reliability of these methods to predict spring flows. Hydraulically based methods involved the application of the Theis, Hantush-Jacob, and Darcy-Weisbach equations, whereas the statistically based methods were the multiple linear regressions and the technology of artificial neural networks (ANNs). Root mean square errors between measured and predicted spring flows using the Darcy-Weisbach method ranged between 5% and 15% of the measured flows, lower than the 7% to 27% range for the Theis or Hantush-Jacob methods. Flows at all springs were estimated to be turbulent based on the Reynolds number derived from the Darcy-Weisbach equation for conduit flow. The multiple linear regression and the Darcy-Weisbach methods had similar spring flow prediction capabilities. The ANNs provided the lowest residuals between measured and predicted spring flows, ranging from 1.6% to 5.3% of the measured flows. The model prediction efficiency criteria also indicated that the ANNs were the most accurate method predicting spring flows in a karst aquifer. © 2008 National Ground Water Association.
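
    A small sketch of the error metric used to rank the methods (RMSE expressed as a percentage of the mean measured flow) follows; the spring-flow values are synthetic and the method names are only labels.

```python
import numpy as np

def rmse_percent(measured, predicted):
    """Root mean square error expressed as a percentage of the mean measured flow."""
    measured = np.asarray(measured, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((measured - predicted) ** 2))
    return 100.0 * rmse / measured.mean()

# Hypothetical spring-flow predictions (ft^3/s) from two competing methods
measured   = [120.0, 135.0, 150.0, 110.0, 160.0]
theis      = [105.0, 150.0, 130.0, 125.0, 175.0]
neural_net = [118.0, 137.0, 148.0, 112.0, 157.0]
print(rmse_percent(measured, theis))       # larger residuals
print(rmse_percent(measured, neural_net))  # smaller residuals, as reported for the ANNs
```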

  15. Analysis of methods to estimate spring flows in a karst aquifer.

    PubMed

    Sepúlveda, Nicasio

    2009-01-01

    Hydraulically and statistically based methods were analyzed to identify the most reliable method to predict spring flows in a karst aquifer. Measured water levels at nearby observation wells, measured spring pool altitudes, and the distance between observation wells and the spring pool were the parameters used to match measured spring flows. Measured spring flows at six Upper Floridan aquifer springs in central Florida were used to assess the reliability of these methods to predict spring flows. Hydraulically based methods involved the application of the Theis, Hantush-Jacob, and Darcy-Weisbach equations, whereas the statistically based methods were the multiple linear regressions and the technology of artificial neural networks (ANNs). Root mean square errors between measured and predicted spring flows using the Darcy-Weisbach method ranged between 5% and 15% of the measured flows, lower than the 7% to 27% range for the Theis or Hantush-Jacob methods. Flows at all springs were estimated to be turbulent based on the Reynolds number derived from the Darcy-Weisbach equation for conduit flow. The multiple linear regression and the Darcy-Weisbach methods had similar spring flow prediction capabilities. The ANNs provided the lowest residuals between measured and predicted spring flows, ranging from 1.6% to 5.3% of the measured flows. The model prediction efficiency criteria also indicated that the ANNs were the most accurate method predicting spring flows in a karst aquifer.

  16. DNAPL MAPPING AND WATER SATURATION MEASUREMENTS IN 2-D MODELS USING LIGHT TRANSMISSION VISUALIZATION (LTV) TECHNIQUE

    EPA Science Inventory

    • LTV can be used to characterize free phase PCE architecture in 2-D flow chambers without using a dye. • Results to date suggest that error in PCE detection using LTV can be less than 10% if the imaging system is optimized. • Mass balance calculations show a maximum error of 9...

  17. Effect of grid resolution on large eddy simulation of wall-bounded turbulence

    NASA Astrophysics Data System (ADS)

    Rezaeiravesh, S.; Liefvendahl, M.

    2018-05-01

    The effect of grid resolution on a large eddy simulation (LES) of a wall-bounded turbulent flow is investigated. A channel flow simulation campaign involving a systematic variation of the streamwise (Δx) and spanwise (Δz) grid resolution is used for this purpose. The main friction-velocity-based Reynolds number investigated is 300. Near the walls, the grid cell size is determined by the frictional scaling, Δx+ and Δz+, with strongly anisotropic cells and a first Δy+ ≈ 1, thus aiming for wall-resolving LES. Results are compared to direct numerical simulations, and several quality measures are investigated, including the error in the predicted mean friction velocity and the error in cross-channel profiles of flow statistics. To reduce the total number of channel flow simulations, techniques from the framework of uncertainty quantification are employed. In particular, a generalized polynomial chaos expansion (gPCE) is used to create metamodels for the errors over the allowed parameter ranges. The differing behavior of the different quality measures is demonstrated and analyzed. It is shown that friction velocity and profiles of the velocity and Reynolds stress tensor are most sensitive to Δz+, while the error in the turbulent kinetic energy is mostly influenced by Δx+. Recommendations for grid resolution requirements are given, together with the quantification of the resulting predictive accuracy. The sensitivity of the results to the subgrid-scale (SGS) model and varying Reynolds number is also investigated. All simulations are carried out with the second-order accurate finite-volume solver OpenFOAM. It is shown that the choice of numerical scheme for the convective term significantly influences the error portraits. It is emphasized that the proposed methodology, involving the gPCE, can be applied to other modeling approaches, i.e., other numerical methods and the choice of SGS model.
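
    The metamodeling idea can be illustrated with a minimal non-intrusive least-squares polynomial (Legendre) surrogate over two normalized resolution parameters; the sample design and error values below are synthetic, and the sketch is not the gPCE implementation used in the paper.

```python
import numpy as np
from itertools import product

def legendre(n, x):
    """Legendre polynomials P0..P2 on [-1, 1] (enough for a 2nd-order expansion)."""
    return [np.ones_like(x), x, 0.5 * (3.0 * x ** 2 - 1.0)][n]

def fit_pce_surrogate(xi, errors, order=2):
    """Least-squares polynomial-chaos-style surrogate of a quality measure.

    xi     : (n_samples, 2) array of grid-resolution parameters (e.g. dx+, dz+)
             rescaled to [-1, 1]
    errors : observed error measure for each simulation
    Returns the multi-indices and coefficients of the tensor Legendre expansion.
    """
    indices = [(i, j) for i, j in product(range(order + 1), repeat=2) if i + j <= order]
    design = np.column_stack([legendre(i, xi[:, 0]) * legendre(j, xi[:, 1])
                              for i, j in indices])
    coeffs, *_ = np.linalg.lstsq(design, errors, rcond=None)
    return indices, coeffs

def evaluate_surrogate(indices, coeffs, x0, x1):
    return sum(c * legendre(i, x0) * legendre(j, x1) for (i, j), c in zip(indices, coeffs))

# Hypothetical campaign: 9 LES runs on a coarse design over normalized (dx+, dz+)
xi = np.array(list(product([-1.0, 0.0, 1.0], repeat=2)))
err = 2.0 + 1.5 * xi[:, 1] + 0.5 * xi[:, 0] + 0.3 * xi[:, 1] ** 2   # synthetic error data
idx, c = fit_pce_surrogate(xi, err)
print(evaluate_surrogate(idx, c, 0.5, -0.5))   # predicted error at an untried resolution
```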

  18. Platinum-Resistor Differential Temperature Sensor

    NASA Technical Reports Server (NTRS)

    Kolbly, R. B.; Britcliffe, M. J.

    1985-01-01

    Platinum resistance elements used in bridge circuit for measuring temperature difference between two flowing liquids. Temperature errors with circuit are less than 0.01 degrees C over range of 100 degrees C.

  19. Can standard cosmological models explain the observed Abell cluster bulk flow?

    NASA Technical Reports Server (NTRS)

    Strauss, Michael A.; Cen, Renyue; Ostriker, Jeremiah P.; Lauer, Tod R.; Postman, Marc

    1995-01-01

    Lauer and Postman (LP) observed that all Abell clusters with redshifts less than 15,000 km/s appear to be participating in a bulk flow of 689 km/s with respect to the cosmic microwave background. We find this result difficult to reconcile with all popular models for large-scale structure formation that assume Gaussian initial conditions. This conclusion is based on Monte Carlo realizations of the LP data, drawn from large particle-mesh N-body simulations for six different models of the initial power spectrum (standard, tilted, and Omega(sub 0) = 0.3 cold dark matter, and two variants of the primordial baryon isocurvature model). We have taken special care to treat properly the longest-wavelength components of the power spectra. The simulations are sampled, 'observed,' and analyzed as identically as possible to the LP cluster sample. Large-scale bulk flows as measured from clusters in the simulations are in excellent agreement with those measured from the grid: the clusters do not exhibit any strong velocity bias on large scales. Bulk flows with amplitude as large as that reported by LP are not uncommon in the Monte Carlo data sets; the distribution of measured bulk flows before error bias subtraction is roughly Maxwellian, with a peak around 400 km/s. However, the chi-squared of the observed bulk flow, taking into account the anisotropy of the error ellipsoid, is much more difficult to match in the simulations. The models examined are ruled out at confidence levels between 94% and 98%.

  20. Inverting multiple suites of thermal indicator data to constrain the heat flow history: A case study from east Kalimantan, Indonesia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudford, B.S.

    1996-12-31

    The determination of an appropriate thermal history in an exploration area is of fundamental importance when attempting to understand the evolution of the petroleum system. In this talk we present the results of a single-well modelling study in which bottom hole temperature data, vitrinite reflectance data and three different biomarker ratio datasets were available to constrain the modelling. Previous modelling studies using biomarker ratios have been hampered by the wide variety of published kinetic parameters for biomarker evolution. Generally, these parameters have been determined either from measurements in the laboratory and extrapolation to the geological setting, or from downhole measurements where the heat flow history is assumed to be known. In the first case serious errors can arise because the heating rate is being extrapolated over many orders of magnitude, while in the second case errors can arise if the assumed heat flow history is incorrect. To circumvent these problems we carried out a parameter optimization in which the heat flow history was treated as an unknown in addition to the biomarker ratio kinetic parameters. This method enabled the heat flow history for the area to be determined together with appropriate kinetic parameters for the three measured biomarker ratios. Within the resolution of the data, the heat flow since the early Miocene has been relatively constant at levels required to yield good agreement between predicted and measured subsurface temperatures.

  1. Inverting multiple suites of thermal indicator data to constrain the heat flow history: A case study from east Kalimantan, Indonesia

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mudford, B.S.

    1996-01-01

    The determination of an appropriate thermal history in an exploration area is of fundamental importance when attempting to understand the evolution of the petroleum system. In this talk we present the results of a single-well modelling study in which bottom hole temperature data, vitrinite reflectance data and three different biomarker ratio datasets were available to constrain the modelling. Previous modelling studies using biomarker ratios have been hampered by the wide variety of published kinetic parameters for biomarker evolution. Generally, these parameters have been determined either from measurements in the laboratory and extrapolation to the geological setting, or from downhole measurements where the heat flow history is assumed to be known. In the first case serious errors can arise because the heating rate is being extrapolated over many orders of magnitude, while in the second case errors can arise if the assumed heat flow history is incorrect. To circumvent these problems we carried out a parameter optimization in which the heat flow history was treated as an unknown in addition to the biomarker ratio kinetic parameters. This method enabled the heat flow history for the area to be determined together with appropriate kinetic parameters for the three measured biomarker ratios. Within the resolution of the data, the heat flow since the early Miocene has been relatively constant at levels required to yield good agreement between predicted and measured subsurface temperatures.

  2. Challenges in the determination of the interstellar flow longitude from the pickup ion cutoff

    NASA Astrophysics Data System (ADS)

    Taut, A.; Berger, L.; Möbius, E.; Drews, C.; Heidrich-Meisner, V.; Keilbach, D.; Lee, M. A.; Wimmer-Schweingruber, R. F.

    2018-03-01

    Context. The interstellar flow longitude corresponds to the Sun's direction of movement relative to the local interstellar medium. Thus, it constitutes a fundamental parameter for our understanding of the heliosphere and, in particular, its interaction with its surroundings, which is currently investigated by the Interstellar Boundary EXplorer (IBEX). One possibility to derive this parameter is based on pickup ions (PUIs) that are former neutral ions that have been ionized in the inner heliosphere. The neutrals enter the heliosphere as an interstellar wind from the direction of the Sun's movement against the partially ionized interstellar medium. PUIs carry information about the spatial variation of their neutral parent population (density and flow vector field) in their velocity distribution function. From the symmetry of the longitudinal flow velocity distribution, the interstellar flow longitude can be derived. Aim. The aim of this paper is to identify and eliminate systematic errors that are connected to this approach of measuring the interstellar flow longitude; we want to minimize any systematic influences on the result of this analysis and give a reasonable estimate for the uncertainty. Methods: We use He+ data measured by the PLAsma and SupraThermal Ion Composition (PLASTIC) sensor on the Solar TErrestrial RElations Observatory Ahead (STEREO A) spacecraft. We analyze a recent approach, identify sources of systematic errors, and propose solutions to eliminate them. Furthermore, a method is introduced to estimate the error associated with this approach. Additionally, we investigate how the selection of interplanetary magnetic field angles, which is closely connected to the pickup ion velocity distribution function, affects the result for the interstellar flow longitude. Results: We find that the revised analysis used to address part of the expected systematic effects obtains significantly different results than presented in the previous study. In particular, the derived uncertainties are considerably larger. Furthermore, an unexpected systematic trend of the resulting interstellar flow longitude with the selection of interplanetary magnetic field orientation is uncovered.

  3. Turbine flowmeter vs. Fleisch pneumotachometer: a comparative study for exercise testing.

    PubMed

    Yeh, M P; Adams, T D; Gardner, R M; Yanowitz, F G

    1987-09-01

    The purpose of this study was to investigate the characteristics of a newly developed turbine flowmeter (Alpha Technologies, model VMM-2) for use in an exercise testing system by comparing its measurement of expiratory flow (VE), O2 uptake (VO2), and CO2 output (VCO2) with the Fleisch pneumotachometer. An IBM PC/AT-based breath-by-breath system was developed, with turbine flowmeter and dual-Fleisch pneumotachometers connected in series. A normal subject was tested twice at rest, 100-W, and 175-W of exercise. Expired gas of 24-32 breaths was collected in a Douglas bag. VE was within 4% accuracy for both flowmeter systems. The Fleisch pneumotachometer system had 5% accuracy for VO2 and VCO2 at rest and exercise. The turbine flowmeter system had up to 20% error for VO2 and VCO2 at rest. Errors decreased as work load increased. Visual observations of the flow curves revealed the turbine signal always lagged the Fleisch signal at the beginning of inspiration or expiration. At the end of inspiration or expiration, the turbine signal continued after the Fleisch signal had returned to zero. The "lag-before-start" and "spin-after-stop" effects of the turbine flowmeter resulted in larger than acceptable error for the VO2 and VCO2 measurements at low flow rates.

  4. Mean annual runoff and peak flow estimates based on channel geometry of streams in southeastern Montana

    USGS Publications Warehouse

    Omang, R.J.; Parrett, Charles; Hull, J.A.

    1983-01-01

    Equations using channel-geometry measurements were developed for estimating mean annual runoff and peak flows of ungaged streams in southeastern Montana. Two separate sets of estimating equations were developed for determining mean annual runoff: one for perennial streams and one for ephemeral and intermittent streams. Data from 29 gaged sites on perennial streams and 21 gaged sites on ephemeral and intermittent streams were used in these analyses. Data from 78 gaged sites were used in the peak-flow analyses. Southeastern Montana was divided into three regions and separate multiple-regression equations for each region were developed that relate channel dimensions to peak discharge having recurrence intervals of 2, 5, 10, 25, 50, and 100 years. Channel-geometry relations were developed using measurements of the active-channel width and bankfull width. Active-channel width and bankfull width were the most significant channel features for estimating mean annual runoff for all types of streams. Use of this method requires that onsite measurements be made of channel width. The standard error of estimate for predicting mean annual runoff ranged from about 38 to 79 percent. The standard error of estimate relating active-channel width or bankfull width to peak flow ranged from about 37 to 115 percent. (USGS)
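
    A minimal sketch of fitting such a channel-geometry relation, a power law estimated by least squares in log space, is given below; the site widths, runoff values, and the percent-error conversion are illustrative assumptions.

```python
import numpy as np

def fit_channel_geometry_relation(width_m, runoff):
    """Fit Q = a * W^b by ordinary least squares in log space.

    width_m : active-channel (or bankfull) widths at gaged sites
    runoff  : corresponding mean annual runoff (or peak flow) at those sites
    Returns (a, b) and a rough standard error of estimate in percent.
    """
    x = np.log(np.asarray(width_m, dtype=float))
    y = np.log(np.asarray(runoff, dtype=float))
    b, log_a = np.polyfit(x, y, 1)                   # slope and intercept in log space
    residuals = y - (log_a + b * x)
    se_log = np.sqrt(np.sum(residuals ** 2) / (len(y) - 2))
    se_percent = 100.0 * (np.exp(se_log) - 1.0)      # rough conversion to percent
    return np.exp(log_a), b, se_percent

# Hypothetical gaged sites: active-channel widths (m) and mean annual runoff (ft^3/s)
widths = [3.0, 5.5, 8.0, 12.0, 20.0]
runoffs = [1.2, 3.5, 6.0, 14.0, 35.0]
a, b, se = fit_channel_geometry_relation(widths, runoffs)
print(a, b, se)
```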

  5. Factors affecting measurement of channel thickness in asymmetrical flow field-flow fractionation.

    PubMed

    Dou, Haiyang; Jung, Euo Chang; Lee, Seungho

    2015-05-08

    Asymmetrical flow field-flow fractionation (AF4) has been considered to be a useful tool for simultaneous separation and characterization of polydisperse macromolecules or colloidal nanoparticles. AF4 analysis requires the knowledge of the channel thickness (w), which is usually measured by injecting a standard with known diffusion coefficient (D) or hydrodynamic diameter (dh). An accurate w determination is a challenge due to its uncertainties arising from the membrane's compressibility, which may vary with experimental condition. In the present study, influence of factors including the size and type of the standard on the measurement of w was systematically investigated. The results revealed that steric effect and the particles-membrane interaction by van der Waals or electrostatic force may result in an error in w measurement. Copyright © 2015 Elsevier B.V. All rights reserved.

  6. Development of an accurate portable recording peak-flow meter for the diagnosis of asthma.

    PubMed

    Hitchings, D J; Dickinson, S A; Miller, M R; Fairfax, A J

    1993-05-01

    This article describes the systematic design of an electronic recording peak expiratory flow (PEF) meter to provide accurate data for the diagnosis of occupational asthma. Traditional diagnosis of asthma relies on accurate data of PEF tests performed by the patients in their own homes and places of work. Unfortunately there are high error rates in data produced and recorded by the patient; most of these are transcription errors, and some patients falsify their records. The PEF measurement itself is not effort-independent; the data produced depend on the way in which the patient performs the test. Patients are taught how to perform the test, giving maximal effort to the expiration being measured. If the measurement is performed incorrectly then errors will occur. Accurate data can be produced if an electronically recording PEF instrument is developed, thus freeing the patient from the task of recording the test data. It should also be capable of determining whether the PEF measurement has been correctly performed. A requirement specification for a recording PEF meter was produced. A commercially available electronic PEF meter was modified to provide the functions required for accurate serial recording of the measurements produced by the patients. This is now being used in three hospitals in the West Midlands for investigations into the diagnosis of occupational asthma. In investigating current methods of measuring PEF and other pulmonary quantities a greater understanding was obtained of the limitations of current methods of measurement, and quantities being measured. (ABSTRACT TRUNCATED AT 250 WORDS)

  7. Doppler global velocimetry: Development of a flight research instrumentation system for application to non-intrusive measurements of the flow field

    NASA Technical Reports Server (NTRS)

    Komine, Hiroshi; Brosnan, Stephen J.; Long, William H.; Stappaerts, Eddy A.

    1994-01-01

    Doppler Global Velocimetry (DGV) is a new diagnostic tool that offers potential for flow field measurements in flight by acquiring three-component velocity data in near real-time during flight maneuvers. The feasibility of implementation of a flight DGV system aboard NASA's High-Angle-of-Attack Research Vehicle (HARV) was addressed in this work by identifying the essential characteristics of a flight measurement system and by performing calibration and error tests. Results from this work were: an outline that establishes a preliminary basis for system configurations by analyzing measurement errors, installation issues, and operating requirements; measurement of the accuracy of the DGV technique using a laboratory breadboard DGV system based on a frequency-doubled Nd:YAG laser and iodine Absorption Line Filter (ALF), which showed excellent agreement between the DGV data and pitot measurements on a laminar flow jet with velocities of up to 150 m/sec; a survey of DGV system components and technologies that are relevant to the design of a flight measurement system, including a survey of cameras for the next generation DGV receivers; an assessment of the candidate lasers and absorption line filters for the flight system, resulting in a near-term recommendation of Nd:host lasers and an iodine ALF for both flight and wind tunnel applications.

  8. Investigating the limitations of single breath-hold renal artery blood flow measurements using spiral phase contrast MR with R-R interval averaging.

    PubMed

    Steeden, Jennifer A; Muthurangu, Vivek

    2015-04-01

    1) To validate an R-R interval averaged golden angle spiral phase contrast magnetic resonance (RAGS PCMR) sequence against conventional cine PCMR for assessment of renal blood flow (RBF) in normal volunteers; and 2) To investigate the effects of motion and heart rate on the accuracy of flow measurements using an in silico simulation. In 20 healthy volunteers RAGS (∼6 sec breath-hold) and respiratory-navigated cine (∼5 min) PCMR were performed in both renal arteries to assess RBF. A simulation of RAGS PCMR was used to assess the effect of heart rate (30-105 bpm), vessel expandability (0-150%) and translational motion (x1.0-4.0) on the accuracy of RBF measurements. There was good agreement between RAGS and cine PCMR in the volunteer study (bias: 0.01 L/min, limits of agreement: -0.04 to +0.06 L/min, P = 0.0001). The simulation demonstrated a positive linear relationship between heart rate and error (r = 0.9894, P < 0.0001), a negative linear relationship between vessel expansion and error (r = -0.9484, P < 0.0001), and a nonlinear, heart rate-dependent relationship between vessel translation and error. We have demonstrated that RAGS PCMR accurately measures RBF in vivo. However, the simulation reveals limitations in this technique at extreme heart rates (<40 bpm, >100 bpm), or when there is significant motion (vessel expandability: >80%, vessel translation: >x2.2). © 2014 Wiley Periodicals, Inc.

  9. How predictable is the behaviour of torrential processes: two case studies of the summer 2012

    NASA Astrophysics Data System (ADS)

    Huebl, Johannes; Eisl, Julia; Janu, Stefan; Hanspeter, Pussnig

    2013-04-01

    Debris flow hazards play an important role in the Austrian Alps since many villages are located on alluvial fans. Most of the mitigation measures as well as hazard zone maps were designed by engineers of previous generations, who knew a great deal about torrential behaviour from experience. But in terms of recurrence intervals of 100 years or more, human memory is limited. On the other hand, numerical modelling is a fast-growing approach to dealing with natural hazards. Scenarios of torrential hazards can be defined and the corresponding deposition patterns, flow depths, and velocities calculated. But of course, errors in the input data lead to fatal errors in the results, consequently threatening human life in potentially affected areas. Careful data collection from exceptional events can help to reproduce reality to a high degree, but unexpected events are still an issue and pose a challenge to engineers. In summer 2012 two debris flow events occurred in Austria with quite different behaviours, from triggering mechanism and flow behaviour through to deposition: thunderstorms or long-lasting rainfall, slope failures with subsequent channel blockage and dike breaching or linear erosion, one or more debris flows, one huge debris flow surge or a series of debris flow surges, sediments without clay or cohesive material, near-channel deposition or outspread deposits. Both debris flows were unexpected in their dimensions, although mitigation measures and hazard maps existed. Both events were documented in detail, first to try to understand the torrential process that occurred, and second to identify the most suitable mitigation measures, ranging from permanent structures to temporary warning systems.

  10. Simulation of streamflow, evapotranspiration, and groundwater recharge in the Lower Frio River watershed, south Texas, 1961-2008

    USGS Publications Warehouse

    Lizarraga, Joy S.; Ockerman, Darwin J.

    2011-01-01

    The U.S. Geological Survey, in cooperation with the U.S. Army Corps of Engineers, Fort Worth District; the City of Corpus Christi; the Guadalupe-Blanco River Authority; the San Antonio River Authority; and the San Antonio Water System, configured, calibrated, and tested a watershed model for a study area consisting of about 5,490 mi2 of the Frio River watershed in south Texas. The purpose of the model is to contribute to the understanding of watershed processes and hydrologic conditions in the lower Frio River watershed. The model simulates streamflow, evapotranspiration (ET), and groundwater recharge by using a numerical representation of physical characteristics of the landscape, and meteorological and streamflow data. Additional time-series inputs to the model include wastewater-treatment-plant discharges, surface-water withdrawals, and estimated groundwater inflow from Leona Springs. Model simulations of streamflow, ET, and groundwater recharge were done for various periods of record depending upon available measured data for input and comparison, starting as early as 1961. Because of the large size of the study area, the lower Frio River watershed was divided into 12 subwatersheds; separate Hydrological Simulation Program-FORTRAN models were developed for each subwatershed. Simulation of the overall study area involved running simulations in downstream order. Output from the model was summarized by subwatershed, point locations, reservoir reaches, and the Carrizo-Wilcox aquifer outcrop. Four long-term U.S. Geological Survey streamflow-gaging stations and two short-term streamflow-gaging stations were used for streamflow model calibration and testing with data from 1991-2008. Calibration was based on data from 2000-08, and testing was based on data from 1991-99. Choke Canyon Reservoir stage data from 1992-2008 and monthly evaporation estimates from 1999-2008 also were used for model calibration. Additionally, 2006-08 ET data from a U.S. Geological Survey meteorological station in Medina County were used for calibration. Streamflow and ET calibration were considered good or very good. For the 2000-08 calibration period, total simulated flow volume and the flow volume of the highest 10 percent of simulated daily flows were calibrated to within about 10 percent of measured volumes at six U.S. Geological Survey streamflow-gaging stations. The flow volume of the lowest 50 percent of daily flows was not simulated as accurately but represented a small percent of the total flow volume. The model-fit efficiency for the weekly mean streamflow during the calibration periods ranged from 0.60 to 0.91, and the root mean square error ranged from 16 to 271 percent of the mean flow rate. The simulated total flow volumes during the testing periods at the long-term gaging stations exceeded the measured total flow volumes by approximately 22 to 50 percent at three stations and were within 7 percent of the measured total flow volumes at one station. For the longer 1961-2008 simulation period at the long-term stations, simulated total flow volumes were within about 3 to 18 percent of measured total flow volumes. The calibrations made by using Choke Canyon reservoir volume for 1992-2008, reservoir evaporation for 1999-2008, and ET in Medina County for 2006-08, are considered very good. Model limitations include possible errors related to model conceptualization and parameter variability, lack of data to better quantify certain model inputs, and measurement errors. 
Uncertainty regarding the degree to which available rainfall data represent actual rainfall is potentially the most serious source of measurement error. A sensitivity analysis was performed for the Upper San Miguel subwatershed model to show the effect of changes to model parameters on the estimated mean recharge, ET, and surface runoff from that part of the Carrizo-Wilcox aquifer outcrop. Simulated recharge was most sensitive to the changes in the lower-zone ET (LZ
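
    A short sketch of the two calibration statistics quoted above (model-fit efficiency in the Nash-Sutcliffe sense and RMSE as a percent of mean flow) follows; the measured and simulated series are invented for illustration.

```python
import numpy as np

def calibration_stats(measured, simulated):
    """Model-fit efficiency (Nash-Sutcliffe) and RMSE as a percent of the mean flow."""
    measured = np.asarray(measured, dtype=float)
    simulated = np.asarray(simulated, dtype=float)
    nse = 1.0 - np.sum((measured - simulated) ** 2) / np.sum((measured - measured.mean()) ** 2)
    rmse_pct = 100.0 * np.sqrt(np.mean((measured - simulated) ** 2)) / measured.mean()
    return nse, rmse_pct

# Hypothetical weekly mean streamflow (ft^3/s), measured vs. simulated
obs = [12.0, 45.0, 30.0, 8.0, 60.0, 25.0, 15.0]
sim = [10.0, 50.0, 28.0, 9.0, 55.0, 27.0, 14.0]
print(calibration_stats(obs, sim))
```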

  11. Flow Mapping Based on the Motion-Integration Errors of Autonomous Underwater Vehicles

    NASA Astrophysics Data System (ADS)

    Chang, D.; Edwards, C. R.; Zhang, F.

    2016-02-01

    Knowledge of a flow field is crucial in the navigation of autonomous underwater vehicles (AUVs) since the motion of AUVs is affected by ambient flow. Due to the imperfect knowledge of the flow field, it is typical to observe a difference between the actual and predicted trajectories of an AUV, which is referred to as a motion-integration error (also known as a dead-reckoning error if an AUV navigates via dead-reckoning). The motion-integration error has been essential for an underwater glider to compute its flow estimate from the travel information of the last leg and to improve navigation performance by using the estimate for the next leg. However, the estimate by nature exhibits a phase difference compared to ambient flow experienced by gliders, prohibiting its application in a flow field with strong temporal and spatial gradients. In our study, to mitigate the phase problem, we have developed a local ocean model by combining the flow estimate based on the motion-integration error with flow predictions from a tidal ocean model. Our model has been used to create desired trajectories of gliders for guidance. Our method is validated by Long Bay experiments in 2012 and 2013 in which we deployed multiple gliders on the shelf of South Atlantic Bight and near the edge of Gulf Stream. In our recent study, the application of the motion-integration error is further extended to create a spatial flow map. Considering that the motion-integration errors of AUVs accumulate along their trajectories, the motion-integration error is formulated as a line integral of ambient flow which is then reformulated into algebraic equations. By solving an inverse problem for these algebraic equations, we obtain the knowledge of such flow in near real time, allowing more effective and precise guidance of AUVs in a dynamic environment. This method is referred to as motion tomography. We provide the results of non-parametric and parametric flow mapping from both simulated and experimental data.
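
    The inversion idea can be sketched as a small linear least-squares problem in which each vehicle's motion-integration error is modeled as a cell-wise sum of unknown flow times residence time; the trajectories, cells, and numbers below are hypothetical, and the sketch omits the regularization a real flow map would need.

```python
import numpy as np

def invert_flow_map(residence_times, motion_integration_errors):
    """Recover cell-averaged flow from AUV motion-integration errors.

    residence_times           : (n_trajectories, n_cells) time each vehicle spent in each cell (s)
    motion_integration_errors : (n_trajectories, 2) east/north difference between actual
                                and dead-reckoned end positions (m)
    Each error is modeled as the line integral (here, a cell-wise sum) of the unknown
    flow along the trajectory, giving a linear system solved in the least-squares sense.
    """
    T = np.asarray(residence_times, dtype=float)
    d = np.asarray(motion_integration_errors, dtype=float)
    flow, *_ = np.linalg.lstsq(T, d, rcond=None)     # (n_cells, 2) east/north flow (m/s)
    return flow

# Hypothetical example: 3 glider legs crossing 2 cells
T = [[600.0, 0.0],
     [300.0, 300.0],
     [0.0, 600.0]]
errors = [[60.0, -30.0],
          [75.0, 0.0],
          [90.0, 30.0]]
print(invert_flow_map(T, errors))   # ~[[0.1, -0.05], [0.15, 0.05]] m/s
```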

  12. The vertical variability of hyporheic fluxes inferred from riverbed temperature data

    NASA Astrophysics Data System (ADS)

    Cranswick, Roger H.; Cook, Peter G.; Shanafield, Margaret; Lamontagne, Sebastien

    2014-05-01

    We present detailed profiles of vertical water flux from the surface to 1.2 m beneath the Haughton River in the tropical northeast of Australia. A 1-D numerical model is used to estimate vertical flux based on raw temperature time series observations from within downwelling, upwelling, neutral, and convergent sections of the hyporheic zone. A Monte Carlo analysis is used to derive error bounds for the fluxes based on temperature measurement error and uncertainty in effective thermal diffusivity. Vertical fluxes ranged from 5.7 m d-1 (downward) to -0.2 m d-1 (upward) with the lowest relative errors for values between 0.3 and 6 m d-1. Our 1-D approach provides a useful alternative to 1-D analytical and other solutions because it does not incorporate errors associated with simplified boundary conditions or assumptions of purely vertical flow, hydraulic parameter values, or hydraulic conditions. To validate the ability of this 1-D approach to represent the vertical fluxes of 2-D flow fields, we compare our model with two simple 2-D flow fields using a commercial numerical model. These comparisons showed that: (1) the 1-D vertical flux was equivalent to the mean vertical component of flux irrespective of a changing horizontal flux; and (2) the subsurface temperature data inherently has a "spatial footprint" when the vertical flux profiles vary spatially. Thus, the mean vertical flux within a 2-D flow field can be estimated accurately without requiring the flow to be purely vertical. The temperature-derived 1-D vertical flux represents the integrated vertical component of flux along the flow path intersecting the observation point. This article was corrected on 6 JUN 2014. See the end of the full text for details.

  13. Experimental study on interfacial area transport in downward two-phase flow

    NASA Astrophysics Data System (ADS)

    Wang, Guanyi

    In view of the importance of two-group interfacial area transport equations and the lack of a corresponding accurate downward-flow database that can reveal two-group interfacial area transport, a systematic database for adiabatic, air-water, vertically downward two-phase flow in a round pipe with an inner diameter of 25.4 mm was collected to gain insight into the interfacial structure and provide benchmarking data for two-group interfacial area transport models. A four-sensor conductivity probe was used to measure the local two-phase flow parameters, and data were collected at a sampling frequency much higher than conventional to ensure accuracy. Axial development of local flow parameter profiles, including void fraction, interfacial area concentration, and Sauter mean diameter, is presented. Drastic inter-group transfer of void fraction and interfacial area was observed in bubbly-to-slug transition flow, and wall-peaked interfacial area concentration profiles were observed in churn-turbulent flow. The importance of local data on these phenomena for flow structure prediction and interfacial area transport equation benchmarking was analyzed. Besides, in order to investigate the effect of inlet conditions, all experiments were repeated after installing a flow-straightening facility, and the results were briefly analyzed. To check the accuracy of the data, the experimental results were cross-checked against rotameter measurements as well as drift-flux model predictions; the averaged error is less than 15%. Current models for the two-group interfacial area transport equation were evaluated using these data. The results show that two-group interfacial area transport equations with current models can predict most flow conditions with an error of less than 20%, except for some bubbly-to-slug transition flow conditions and some churn-turbulent flow conditions. The disagreement between models and experiments could result from underestimation of inter-group void transfer.

  14. Comparison of base flows to selected streamflow statistics representative of 1930-2002 in West Virginia

    USGS Publications Warehouse

    Wiley, Jeffrey B.

    2012-01-01

    Base flows were compared with published streamflow statistics to assess climate variability and to determine the published statistics that can be substituted for annual and seasonal base flows of unregulated streams in West Virginia. The comparison study was done by the U.S. Geological Survey, in cooperation with the West Virginia Department of Environmental Protection, Division of Water and Waste Management. The seasons were defined as winter (January 1-March 31), spring (April 1-June 30), summer (July 1-September 30), and fall (October 1-December 31). Differences in mean annual base flows for five record sub-periods (1930-42, 1943-62, 1963-69, 1970-79, and 1980-2002) range from -14.9 to 14.6 percent when compared to the values for the period 1930-2002. Differences between mean seasonal base flows and values for the period 1930-2002 are less variable for winter and spring, -11.2 to 11.0 percent, than for summer and fall, -47.0 to 43.6 percent. Mean summer base flows (July-September) and mean monthly base flows for July, August, September, and October are approximately equal, within 7.4 percentage points of mean annual base flow. The mean of each of annual, spring, summer, fall, and winter base flows are approximately equal to the annual 50-percent (standard error of 10.3 percent), 45-percent (error of 14.6 percent), 75-percent (error of 11.8 percent), 55-percent (error of 11.2 percent), and 35-percent duration flows (error of 11.1 percent), respectively. The mean seasonal base flows for spring, summer, fall, and winter are approximately equal to the spring 50- to 55-percent (standard error of 6.8 percent), summer 45- to 50-percent (error of 6.7 percent), fall 45-percent (error of 15.2 percent), and winter 60-percent duration flows (error of 8.5 percent), respectively. Annual and seasonal base flows representative of the period 1930-2002 at unregulated streamflow-gaging stations and ungaged locations in West Virginia can be estimated using previously published values of statistics and procedures.

  15. The Study on Flow Velocity Measurement of Antarctic Krill Trawl Model Experiment in North Bay of South China Sea

    NASA Astrophysics Data System (ADS)

    Chen, Shuai; Wang, Lumin; Huang, Hongliang; Zhang, Xun

    2017-10-01

    From August 25 to 29, 2014, the project team carried out an Antarctic krill trawl model experiment in the Beihai Bay of the South China Sea. In order to understand the flow field around the net model in the course of the experiment, it was necessary to record the speed of the ship and to characterize the flow field of the ocean. Therefore, the ocean flow velocity was measured during the experiment using an acoustic Doppler flow meter (Vectoring Plus, Nortek, Norway). In order to compensate for the velocity error caused by ship drift, the drift of the ship was also measured by the positioning device (Snapdragon MSM8274AB, Qualcomm, USA) used in the flow measurement. The results show that the actual velocity of the target sea area is in the range of 0.06-0.49 m/s and the direction is 216.17-351.70 deg. The influencing factors were analysed and compared with previous research. This study shows that it is feasible to use a point Doppler flow meter for velocity measurement in trawl model experiments.

  16. Particle Streak Anemometry: A New Method for Proximal Flow Sensing from Aircraft

    NASA Astrophysics Data System (ADS)

    Nichols, T. W.

    Accurate sensing of relative air flow direction from fixed-wing small unmanned aircraft (sUAS) is challenging with existing multi-hole pitot-static and vane systems. Sub-degree direction accuracy is generally not available on such systems and disturbances to the local flow field, induced by the airframe, introduce an additional error source. An optical imaging approach to make a relative air velocity measurement with high directional accuracy is presented. Optical methods offer the capability to make a proximal measurement in undisturbed air outside of the local flow field without the need to place sensors on vulnerable probes extended ahead of the aircraft. Current imaging flow analysis techniques for laboratory use rely on relatively thin imaged volumes and sophisticated hardware and intensity thresholding in low-background conditions. A new method is derived and assessed using a particle streak imaging technique that can be implemented with low-cost commercial cameras and illumination systems, and can function in imaged volumes of arbitrary depth with complex background signal. The new technique, referred to as particle streak anemometry (PSA) (to differentiate from particle streak velocimetry, which makes a field measurement rather than a single bulk flow measurement), utilizes a modified Canny edge detection algorithm with a connected component analysis and principal component analysis to detect streak ends in complex imaging conditions. A linear solution for the air velocity direction is then implemented with a random sample consensus (RANSAC) solution approach. A single DOF non-linear, non-convex optimization problem is then solved for the air speed through an iterative approach. The technique was tested through simulation and wind tunnel tests yielding angular accuracies under 0.2 degrees, superior to the performance of existing commercial systems. Air speed error standard deviations varied from 1.6 to 2.2 m/s depending on the techniques of implementation. While air speed sensing is secondary to accurate flow direction measurement, the air speed results were in line with commercial pitot-static systems at low speeds.
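
    One piece of the pipeline, the principal-component estimate of a streak's orientation from its labeled pixels, can be sketched as follows; the synthetic streak and noise level are placeholders, and the Canny, connected-component, and RANSAC stages are not reproduced.

```python
import numpy as np

def streak_direction(pixel_rows, pixel_cols):
    """Principal-component estimate of a single streak's orientation.

    Given the (row, col) coordinates of pixels belonging to one detected streak
    (e.g. after edge detection and connected-component labeling), the leading
    eigenvector of the coordinate covariance gives the streak axis, whose angle
    approximates the local flow direction in the image plane.
    """
    coords = np.column_stack([pixel_cols, pixel_rows]).astype(float)
    coords -= coords.mean(axis=0)
    cov = np.cov(coords.T)
    eigvals, eigvecs = np.linalg.eigh(cov)
    axis = eigvecs[:, np.argmax(eigvals)]            # leading principal component
    if axis[0] < 0:                                  # fix the sign ambiguity
        axis = -axis
    return np.degrees(np.arctan2(axis[1], axis[0]))

# Hypothetical streak: pixels scattered tightly about a line at ~20 degrees
rng = np.random.default_rng(1)
t = rng.uniform(0.0, 40.0, 200)
cols = t * np.cos(np.radians(20.0)) + rng.normal(0.0, 0.3, 200)
rows = t * np.sin(np.radians(20.0)) + rng.normal(0.0, 0.3, 200)
print(streak_direction(rows, cols))   # close to 20 degrees
```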

  17. An alternative method to estimate zero flow temperature differences for Granier's thermal dissipation technique.

    PubMed

    Regalado, Carlos M; Ritter, Axel

    2007-08-01

    Calibration of the Granier thermal dissipation technique for measuring stem sap flow in trees requires determination of the temperature difference (DeltaT) between a heated and an unheated probe when sap flow is zero (DeltaT(max)). Classically, DeltaT(max) has been estimated from the maximum predawn DeltaT, assuming that sap flow is negligible at nighttime. However, because sap flow may continue during the night, the maximum predawn DeltaT value may underestimate the true DeltaT(max). No alternative method has yet been proposed to estimate DeltaT(max) when sap flow is non-zero at night. A sensitivity analysis is presented showing that errors in DeltaT(max) may amplify through sap flux density computations in Granier's approach, such that small amounts of undetected nighttime sap flow may lead to large diurnal sap flux density errors, hence the need for a correct estimate of DeltaT(max). By rearranging Granier's original formula, an optimization method to compute DeltaT(max) from simultaneous measurements of diurnal DeltaT and micrometeorological variables, without assuming that sap flow is negligible at night, is presented. Some illustrative examples are shown for sap flow measurements carried out on individuals of Erica arborea L., which has needle-like leaves, and Myrica faya Ait., a broadleaf species. We show that, although DeltaT(max) values obtained by the proposed method may be similar in some instances to the DeltaT(max) predicted at night, in general the values differ. The procedure presented has the potential of being applied not only to Granier's method, but to other heat-based sap flow systems that require a zero flow calibration, such as the Cermák et al. (1973) heat balance method and the T-max heat pulse system of Green et al. (2003).
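
    The error amplification referred to above can be illustrated with Granier's original calibration: a modest underestimate of DeltaT(max) caused by undetected nighttime flow propagates into a much larger error in sap flux density. The numbers below are illustrative, not measurements from the cited species.

```python
def granier_flux_density(delta_t, delta_t_max):
    """Granier's original calibration: sap flux density u (m^3 m^-2 s^-1)."""
    k = (delta_t_max - delta_t) / delta_t
    return 118.99e-6 * k ** 1.231

# Suppose the true zero-flow temperature difference is 10.0 K, but nighttime flow
# caused the predawn maximum to read only 9.5 K (a 5 % underestimate of DeltaT(max)).
delta_t = 8.0                                   # daytime measurement (K)
true_u = granier_flux_density(delta_t, 10.0)
biased_u = granier_flux_density(delta_t, 9.5)
print(100.0 * (biased_u - true_u) / true_u)     # relative error in flux density, roughly -30 %
```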

  18. Development and Assessment of a Medication Safety Measurement Program in a Long-Term Care Pharmacy.

    PubMed

    Hertig, John B; Hultgren, Kyle E; Parks, Scott; Rondinelli, Rick

    2016-02-01

    Medication errors continue to be a major issue in the health care system, including in long-term care facilities. While many hospitals and health systems have developed methods to identify, track, and prevent these errors, long-term care facilities historically have not invested in these error-prevention strategies. The objective of this study was two-fold: 1) to develop a set of medication-safety process measures for dispensing in a long-term care pharmacy, and 2) to analyze the data from those measures to determine the relative safety of the process. The study was conducted at In Touch Pharmaceuticals in Valparaiso, Indiana. To assess the safety of the medication-use system, each step was documented using a comprehensive flowchart (process flow map) tool. Once completed and validated, the flowchart was used to complete a "failure modes and effects analysis" (FMEA) identifying ways a process may fail. Operational gaps found during FMEA were used to identify points of measurement. The research identified a set of eight measures as potential areas of failure; data were then collected on each one of these. More than 133,000 medication doses (opportunities for errors) were included in the study during the research time frame (April 1, 2014, and ended on June 4, 2014). Overall, there was an approximate order-entry error rate of 15.26%, with intravenous errors at 0.37%. A total of 21 errors migrated through the entire medication-use system. These 21 errors in 133,000 opportunities resulted in a final check error rate of 0.015%. A comprehensive medication-safety measurement program was designed and assessed. This study demonstrated the ability to detect medication errors in a long-term pharmacy setting, thereby making process improvements measureable. Future, larger, multi-site studies should be completed to test this measurement program.

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Clifton, Andrew

    Currently, cup anemometers on meteorological towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability; however, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install meteorological towers at potential sites. As a result, remote-sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. Although lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence using these devices. Errors in lidar turbulence estimates are caused by a variety of factors, including instrument noise, volume averaging, and variance contamination, in which the magnitude of these factors is highly dependent on measurement height and atmospheric stability. As turbulence has a large impact on wind power production, errors in turbulence measurements will translate into errors in wind power prediction. The impact of using lidars rather than cup anemometers for wind power prediction must be understood if lidars are to be considered a viable alternative to cup anemometers. In this poster, the sensitivity of power prediction error to typical lidar turbulence measurement errors is assessed. Turbulence estimates from a vertically profiling WINDCUBE v2 lidar are compared to high-resolution sonic anemometer measurements at field sites in Oklahoma and Colorado to determine the degree of lidar turbulence error that can be expected under different atmospheric conditions. These errors are then incorporated into a power prediction model to estimate the sensitivity of power prediction error to turbulence measurement error. Power prediction models, including the standard binning method and a random forest method, were developed using data from the aeroelastic simulator FAST for a 1.5 MW turbine. The impact of lidar turbulence error on the predicted power from these different models is examined to determine the degree of turbulence measurement accuracy needed for accurate power prediction.

  20. Comparison of two surface temperature measurement using thermocouples and infrared camera

    NASA Astrophysics Data System (ADS)

    Michalski, Dariusz; Strąk, Kinga; Piasecka, Magdalena

    This paper compares two methods applied to measure surface temperatures at an experimental setup designed to analyse flow boiling heat transfer. The temperature measurements were performed in two parallel rectangular minichannels, both 1.7 mm deep, 16 mm wide and 180 mm long. The heating element for the fluid flowing in each minichannel was a thin foil made of Haynes-230. The two measurement methods employed to determine the surface temperature of the foil were: the contact method, which involved mounting thermocouples at several points in one minichannel, and the contactless method applied to the other minichannel, where the results were provided by an infrared camera. Calculations were necessary to compare the temperature results. Two sets of measurement data obtained for different values of the heat flux were analysed using basic statistical methods, the method error and the method accuracy. The experimental error and the method accuracy were taken into account. The comparative analysis showed that, although the values and distributions of the surface temperatures obtained with the two methods were similar, both methods had certain limitations.

  1. Nonintrusive Temperature and Velocity Measurements in a Hypersonic Nozzle Flow

    NASA Technical Reports Server (NTRS)

    OByrne, S.; Danehy, P. M.; Houwing, A. F. P.

    2002-01-01

    Distributions of nitric oxide vibrational temperature, rotational temperature and velocity have been measured in the hypersonic freestream at the exit of a conical nozzle, using planar laser-induced fluorescence. Particular attention has been devoted to reducing the major sources of systematic error that can affect fluorescence temperature measurements, including beam attenuation, transition saturation effects, laser mode fluctuations and transition choice. Visualization experiments have been performed to improve the uniformity of the nozzle flow. Comparisons of measured quantities with a simple one-dimensional computation are made, showing good agreement between measurements and theory given the uncertainty of the nozzle reservoir conditions and the vibrational relaxation rate.

  2. Measuring discharge with ADCPs: Inferences from synthetic velocity profiles

    USGS Publications Warehouse

    Rehmann, C.R.; Mueller, D.S.; Oberg, K.A.

    2009-01-01

    Synthetic velocity profiles are used to determine guidelines for sampling discharge with acoustic Doppler current profilers (ADCPs). The analysis allows the effects of instrument characteristics, sampling parameters, and properties of the flow to be studied systematically. For mid-section measurements, the averaging time required for a single profile measurement always exceeded the 40 s usually recommended for velocity measurements, and it increased with increasing sample interval and increasing time scale of the large eddies. Similarly, simulations of transect measurements show that discharge error decreases as the number of large eddies sampled increases. The simulations allow sampling criteria that account for the physics of the flow to be developed. © 2009 ASCE.

  3. Evaluation of mean velocity and turbulence measurements with ADCPs

    USGS Publications Warehouse

    Nystrom, E.A.; Rehmann, C.R.; Oberg, K.A.

    2007-01-01

    To test the ability of acoustic Doppler current profilers (ADCPs) to measure turbulence, profiles measured with two pulse-to-pulse coherent ADCPs in a laboratory flume were compared to profiles measured with an acoustic Doppler velocimeter, and time series measured in the acoustic beam of the ADCPs were examined. A four-beam ADCP was used at a downstream station, while a three-beam ADCP was used at both the downstream station and an upstream station. At the downstream station, where the turbulence intensity was low, both ADCPs reproduced the mean velocity profile well away from the flume boundaries; errors near the boundaries were due to transducer ringing, flow disturbance, and sidelobe interference. At the upstream station, where the turbulence intensity was higher, errors in the mean velocity were large. The four-beam ADCP measured the Reynolds stress profile accurately away from the bottom boundary, and these measurements can be used to estimate shear velocity. Estimates of Reynolds stress with a three-beam ADCP and turbulent kinetic energy with both ADCPs cannot be computed without further assumptions, and they are affected by flow inhomogeneity. Neither ADCP measured integral time scales to within 60%. © 2007 ASCE.

  4. A compact x-ray system for two-phase flow measurement

    NASA Astrophysics Data System (ADS)

    Song, Kyle; Liu, Yang

    2018-02-01

    In this paper, a compact x-ray densitometry system consisting of a 50 kV, 1 mA x-ray tube and several linear detector arrays is developed for two-phase flow measurement. The system is capable of measuring void fraction and velocity distributions with a spatial resolution of 0.4 mm per pixel and a frequency of 1000 Hz. A novel measurement model has been established for the system which takes account of the energy spectrum of x-ray photons and the beam hardening effect. An improved measurement accuracy has been achieved with this model compared with the conventional log model that has been widely used in the literature. Using this system, void fraction and velocity distributions are measured for a bubbly and a slug flow in a 25.4 mm I.D. air-water two-phase flow test loop. The measured superficial gas velocities show an error within  ±4% when compared with the gas flowmeter for both conditions.
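
    The "conventional log model" mentioned above is the standard single-energy (Beer-Lambert) relation; a minimal sketch is given below with hypothetical intensity values. The paper's improved model, which accounts for the photon energy spectrum and beam hardening, is not reproduced here.

```python
import numpy as np

def void_fraction_log_model(I, I_liquid, I_gas):
    """Chordal void fraction from the conventional log (Beer-Lambert) model.
    I_liquid and I_gas are reference intensities recorded with the channel
    full of liquid and full of gas along the same ray path."""
    return np.log(I / I_liquid) / np.log(I_gas / I_liquid)

# Hypothetical detector counts for one ray:
alpha = void_fraction_log_model(I=1800.0, I_liquid=1200.0, I_gas=2400.0)
print(f"void fraction ~ {alpha:.2f}")
# Beam hardening violates the monochromatic assumption behind this relation,
# which is why a spectrum-aware model can improve accuracy.
```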

  5. A simple method for the evaluation of microfluidic architecture using flow quantitation via a multiplexed fluidic resistance measurement.

    PubMed

    Leslie, Daniel C; Melnikoff, Brett A; Marchiarullo, Daniel J; Cash, Devin R; Ferrance, Jerome P; Landers, James P

    2010-08-07

    Quality control of microdevices adds significant costs, in time and money, to any fabrication process. A simple, rapid quantitative method for the post-fabrication characterization of microchannel architecture using the measurement of flow with volumes relevant to microfluidics is presented. By measuring the mass of a dye solution passed through the device, the method circumvents traditional gravimetric and interface-tracking methods that suffer from variable evaporation rates and the increased error associated with smaller volumes. The multiplexed fluidic resistance (MFR) measurement method measures flow via stable visible-wavelength dyes, a standard spectrophotometer and common laboratory glassware. Individual dyes are used as molecular markers of flow for individual channels, and in channel architectures where multiple channels terminate at a common reservoir, spectral deconvolution reveals the individual flow contributions. On-chip, this method was found to maintain accurate flow measurement at lower flow rates than the gravimetric approach. Multiple dyes are shown to allow for independent measurement of multiple flows on the same device simultaneously. We demonstrate that this technique is applicable for measuring the fluidic resistance, which is dependent on channel dimensions, in four fluidically connected channels simultaneously, ultimately determining that one chip was partially collapsed and, therefore, unusable for its intended purpose. This method is thus shown to be widely useful in troubleshooting microfluidic flow characteristics.
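
    A minimal sketch of the spectral deconvolution step, assuming Beer-Lambert additivity of the dye absorbances; the absorptivity values, wavelengths and concentrations below are hypothetical, not the dyes used in the paper.

```python
import numpy as np

# Hypothetical molar absorptivity matrix: rows are wavelengths, columns are dyes.
epsilon = np.array([[1.20, 0.10],
                    [0.30, 0.90],
                    [0.05, 1.40]])
path_length_cm = 1.0
A_measured = np.array([0.65, 0.84, 1.10])   # absorbance of the pooled outlet

# Least-squares deconvolution of A = (epsilon * l) @ c for the dye concentrations:
c, *_ = np.linalg.lstsq(epsilon * path_length_cm, A_measured, rcond=None)
print("recovered dye concentrations:", c)
# Each recovered concentration, relative to the stock concentration fed to its
# channel, indicates that channel's fractional contribution to the total flow.
```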

  6. Minimizing Artifacts and Biases in Chamber-Based Measurements of Soil Respiration

    NASA Astrophysics Data System (ADS)

    Davidson, E. A.; Savage, K.

    2001-05-01

    Soil respiration is one of the largest and most important fluxes of carbon in terrestrial ecosystems. The objectives of this paper are to review concerns about uncertainties of chamber-based measurements of CO2 emissions from soils, to evaluate the direction and magnitude of these potential errors, and to explain procedures that minimize these errors and biases. Disturbance of diffusion gradients causes underestimation of fluxes by less than 15% in most cases, and can be partially corrected for with curve fitting and/or minimized by using brief measurement periods. Under-pressurization or over-pressurization of the chamber caused by flow restrictions in air-circulating designs can cause large errors, but can also be avoided with properly sized chamber vents and unrestricted flows. Somewhat larger pressure differentials are observed under windy conditions, and the accuracy of measurements made under such conditions needs more research. Spatial and temporal heterogeneity can be addressed with appropriate chamber sizes and numbers and frequency of sampling. For example, means of 8 randomly chosen flux measurements from a population of 36 measurements made with 300 cm2 chambers in tropical forests and pastures were within 25% of the full population mean 98% of the time and were within 10% of the full population mean 70% of the time. Comparisons of chamber-based measurements with tower-based measurements of total ecosystem respiration require analysis of the scale of variation within the purported tower footprint. In a forest at Howland, Maine, the differences in soil respiration rates among very poorly drained and well drained soils were large, but they mostly were fortuitously cancelled when evaluated for purported tower footprints of 600-2100 m length. While all of these potential sources of measurement error and sampling biases must be carefully considered, properly designed and deployed chambers provide a reliable means of accurately measuring soil respiration in terrestrial ecosystems.

  7. Noninvasive calculation of the aortic blood pressure waveform from the flow velocity waveform: a proof of concept.

    PubMed

    Vennin, Samuel; Mayer, Alexia; Li, Ye; Fok, Henry; Clapp, Brian; Alastruey, Jordi; Chowienczyk, Phil

    2015-09-01

    Estimation of aortic and left ventricular (LV) pressure usually requires measurements that are difficult to acquire during the imaging required to obtain concurrent LV dimensions essential for determination of LV mechanical properties. We describe a novel method for deriving aortic pressure from the aortic flow velocity. The target pressure waveform is divided into an early systolic upstroke, determined by the water hammer equation, and a diastolic decay equal to that in the peripheral arterial tree, interposed by a late systolic portion described by a second-order polynomial constrained by conditions of continuity and conservation of mean arterial pressure. Pulse wave velocity (PWV, which can be obtained through imaging), mean arterial pressure, diastolic pressure, and diastolic decay are required inputs for the algorithm. The algorithm was tested using 1) pressure data derived theoretically from prespecified flow waveforms and properties of the arterial tree using a single-tube 1-D model of the arterial tree, and 2) experimental data acquired from a pressure/Doppler flow velocity transducer placed in the ascending aorta in 18 patients (mean ± SD: age 63 ± 11 yr, aortic BP 136 ± 23/73 ± 13 mmHg) at the time of cardiac catheterization. For experimental data, PWV was calculated from measured pressures/flows, and mean and diastolic pressures and diastolic decay were taken from measured pressure (i.e., were assumed to be known). Pressure reconstructed from measured flow agreed well with theoretical pressure: mean ± SD root mean square (RMS) error 0.7 ± 0.1 mmHg. Similarly, for experimental data, pressure reconstructed from measured flow agreed well with measured pressure (mean RMS error 2.4 ± 1.0 mmHg). First systolic shoulder and systolic peak pressures were also accurately rendered (mean ± SD difference 1.4 ± 2.0 mmHg for peak systolic pressure). This is the first noninvasive derivation of aortic pressure based on fluid dynamics (flow and wave speed) in the aorta itself. Copyright © 2015 the American Physiological Society.
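
    The early systolic upstroke in the algorithm rests on the water hammer relation dP = rho * c * dU; a minimal sketch follows, with assumed density, wave speed, diastolic pressure and flow velocity samples (the late-systolic polynomial and diastolic decay construction are not reproduced here).

```python
import numpy as np

RHO = 1060.0            # blood density, kg/m^3 (typical value)
PWV = 8.0               # aortic pulse wave velocity, m/s (assumed input)
P_DIASTOLIC = 73.0      # diastolic pressure, mmHg (assumed input)
MMHG_PER_PA = 1.0 / 133.322

# Hypothetical early-systolic flow velocity samples (m/s):
U = np.linspace(0.0, 0.8, 9)

# Water hammer relation for the forward-travelling upstroke: dP = rho * c * dU
P_upstroke = P_DIASTOLIC + RHO * PWV * U * MMHG_PER_PA
print(P_upstroke.round(1))
# The late-systolic portion and the diastolic decay would then be attached
# under the continuity and mean-pressure constraints described above.
```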

  8. Evaluation of transit-time and electromagnetic flow measurement in a chronically instrumented nonhuman primate model.

    PubMed

    Koenig, S C; Reister, C A; Schaub, J; Swope, R D; Ewert, D; Fanton, J W

    1996-01-01

    The Physiology Research Branch at Brooks AFB conducts both human and nonhuman primate experiments to determine the effects of microgravity and hypergravity on the cardiovascular system and to identify the particular mechanisms that invoke these responses. Primary investigative efforts in our nonhuman primate model require the determination of total peripheral resistance, systemic arterial compliance, and pressure-volume loop characteristics. These calculations require beat-to-beat measurement of aortic flow. This study evaluated accuracy, linearity, biocompatibility, and anatomical features of commercially available electromagnetic (EMF) and transit-time flow measurement techniques. Five rhesus monkeys were instrumented with either EMF (3 subjects) or transit-time (2 subjects) flow sensors encircling the proximal ascending aorta. Cardiac outputs computed from these transducers taken over ranges of 0.5 to 2.0 L/min were compared to values obtained using thermodilution. In vivo experiments demonstrated that the EMF probe produced an average error of 15% (r = .896) and 8.6% average linearity per reading, and the transit-time flow probe produced an average error of 6% (r = .955) and 5.3% average linearity per reading. Postoperative performance and biocompatibility of the probes were maintained throughout the study. The transit-time sensors provided the advantages of greater accuracy, smaller size, and lighter weight than the EMF probes. In conclusion, the characteristic features and performance of the transit-time sensors were superior to those of the EMF sensors in this study.

  9. Evaluation of transit-time and electromagnetic flow measurement in a chronically instrumented nonhuman primate model

    NASA Technical Reports Server (NTRS)

    Koenig, S. C.; Reister, C. A.; Schaub, J.; Swope, R. D.; Ewert, D.; Fanton, J. W.; Convertino, V. A. (Principal Investigator)

    1996-01-01

    The Physiology Research Branch at Brooks AFB conducts both human and nonhuman primate experiments to determine the effects of microgravity and hypergravity on the cardiovascular system and to identify the particular mechanisms that invoke these responses. Primary investigative efforts in our nonhuman primate model require the determination of total peripheral resistance, systemic arterial compliance, and pressure-volume loop characteristics. These calculations require beat-to-beat measurement of aortic flow. This study evaluated accuracy, linearity, biocompatibility, and anatomical features of commercially available electromagnetic (EMF) and transit-time flow measurement techniques. Five rhesus monkeys were instrumented with either EMF (3 subjects) or transit-time (2 subjects) flow sensors encircling the proximal ascending aorta. Cardiac outputs computed from these transducers taken over ranges of 0.5 to 2.0 L/min were compared to values obtained using thermodilution. In vivo experiments demonstrated that the EMF probe produced an average error of 15% (r = .896) and 8.6% average linearity per reading, and the transit-time flow probe produced an average error of 6% (r = .955) and 5.3% average linearity per reading. Postoperative performance and biocompatibility of the probes were maintained throughout the study. The transit-time sensors provided the advantages of greater accuracy, smaller size, and lighter weight than the EMF probes. In conclusion, the characteristic features and performance of the transit-time sensors were superior to those of the EMF sensors in this study.

  10. An a-posteriori finite element error estimator for adaptive grid computation of viscous incompressible flows

    NASA Astrophysics Data System (ADS)

    Wu, Heng

    2000-10-01

    In this thesis, an a-posteriori error estimator is presented and employed for solving viscous incompressible flow problems. In an effort to detect local flow features, such as vortices and separation, and to resolve flow details precisely, a velocity angle error estimator e_theta, which is based on the spatial derivative of the velocity direction field, is designed and constructed. The a-posteriori error estimator corresponds to the antisymmetric part of the deformation-rate tensor, and it is sensitive to the second derivative of the velocity angle field. Rationality discussions reveal that the velocity angle error estimator is a curvature error estimator, and its value reflects the accuracy of streamline curves. It is also found that the velocity angle error estimator contains the nonlinear convective term of the Navier-Stokes equations, and it identifies and computes the direction difference when the convective acceleration direction and the flow velocity direction have a disparity. Through benchmarking computed variables with the analytic solution of Kovasznay flow or the finest grid of cavity flow, it is demonstrated that the velocity angle error estimator has a better performance than the strain error estimator. The benchmarking work also shows that the computed profile obtained by using e_theta achieves the best match with the true theta field, and that it is asymptotic to the true theta variation field, with the promise of fewer unknowns. Unstructured grids are adapted by employing local cell division as well as unrefinement of transition cells. Using element classes and node classes can efficiently construct a hierarchical data structure which provides cell and node inter-reference at each adaptive level. Employing element pointers and node pointers can dynamically maintain the connection of adjacent elements and adjacent nodes, and thus avoids time-consuming search processes. The adaptive scheme is applied to viscous incompressible flow at different Reynolds numbers. It is found that the velocity angle error estimator can detect most flow characteristics and produce dense grids in the regions where flow velocity directions have abrupt changes. In addition, the e_theta estimator makes the derivative error dilutely distributed in the whole computational domain and also allows the refinement to be conducted at regions of high error. Through comparison of the velocity angle error across the interface with neighbouring cells, it is verified that the adaptive scheme using e_theta provides an optimum mesh which can clearly resolve local flow features in a precise way. The adaptive results justify the applicability of the e_theta estimator and prove that this error estimator is a valuable adaptive indicator for the automatic refinement of unstructured grids.

  11. Rain radar measurement error estimation using data assimilation in an advection-based nowcasting system

    NASA Astrophysics Data System (ADS)

    Merker, Claire; Ament, Felix; Clemens, Marco

    2017-04-01

    The quantification of measurement uncertainty for rain radar data remains challenging. Radar reflectivity measurements are affected, amongst other things, by calibration errors, noise, blocking and clutter, and attenuation. Their combined impact on measurement accuracy is difficult to quantify due to incomplete process understanding and complex interdependencies. An improved quality assessment of rain radar measurements is of interest for applications both in meteorology and hydrology, for example for precipitation ensemble generation, rainfall runoff simulations, or in data assimilation for numerical weather prediction. Especially a detailed description of the spatial and temporal structure of errors is beneficial in order to make best use of the areal precipitation information provided by radars. Radar precipitation ensembles are one promising approach to represent spatially variable radar measurement errors. We present a method combining ensemble radar precipitation nowcasting with data assimilation to estimate radar measurement uncertainty at each pixel. This combination of ensemble forecast and observation yields a consistent spatial and temporal evolution of the radar error field. We use an advection-based nowcasting method to generate an ensemble reflectivity forecast from initial data of a rain radar network. Subsequently, reflectivity data from single radars is assimilated into the forecast using the Local Ensemble Transform Kalman Filter. The spread of the resulting analysis ensemble provides a flow-dependent, spatially and temporally correlated reflectivity error estimate at each pixel. We will present first case studies that illustrate the method using data from a high-resolution X-band radar network.

  12. Simultaneous measurement of temperature and strain using four connecting wires

    NASA Technical Reports Server (NTRS)

    Parker, Allen R., Jr.

    1993-01-01

    This paper describes a new signal-conditioning technique for measuring strain and temperature which uses fewer connecting wires than conventional techniques. Simultaneous measurement of temperature and strain has been achieved by using thermocouple wire to connect strain gages to the signal conditioning. This signal conditioning uses a new method for demultiplexing sampled analog signals and the Anderson current loop circuit. Theory is presented along with data to confirm that strain gage resistance change is sensed without appreciable error from thermoelectric effects. Furthermore, temperature is sensed without appreciable error from the voltage drops caused by strain gage excitation current flowing through the gage resistance.

  13. Imaging dipole flow sources using an artificial lateral-line system made of biomimetic hair flow sensors

    PubMed Central

    Dagamseh, Ahmad; Wiegerink, Remco; Lammerink, Theo; Krijnen, Gijs

    2013-01-01

    In Nature, fish have the ability to localize prey, school, navigate, etc., using the lateral-line organ. Artificial hair flow sensors arranged in a linear array shape (inspired by the lateral-line system (LSS) in fish) have been applied to measure airflow patterns at the sensor positions. Here, we take advantage of both biomimetic artificial hair-based flow sensors arranged as LSS and beamforming techniques to demonstrate dipole-source localization in air. Modelling and measurement results show the artificial lateral-line ability to image the position of dipole sources accurately with estimation error of less than 0.14 times the array length. This opens up possibilities for flow-based, near-field environment mapping that can be beneficial to, for example, biologists and robot guidance applications. PMID:23594816

  14. Benchmarking observational uncertainties for hydrology (Invited)

    NASA Astrophysics Data System (ADS)

    McMillan, H. K.; Krueger, T.; Freer, J. E.; Westerberg, I.

    2013-12-01

    There is a pressing need for authoritative and concise information on the expected error distributions and magnitudes in hydrological data, to understand its information content. Many studies have discussed how to incorporate uncertainty information into model calibration and implementation, and shown how model results can be biased if uncertainty is not appropriately characterised. However, it is not always possible (for example due to financial or time constraints) to make detailed studies of uncertainty for every research study. Instead, we propose that the hydrological community could benefit greatly from sharing information on likely uncertainty characteristics and the main factors that control the resulting magnitude. In this presentation, we review the current knowledge of uncertainty for a number of key hydrological variables: rainfall, flow and water quality (suspended solids, nitrogen, phosphorus). We collated information on the specifics of the data measurement (data type, temporal and spatial resolution), error characteristics measured (e.g. standard error, confidence bounds) and error magnitude. Our results were primarily split by data type. Rainfall uncertainty was controlled most strongly by spatial scale, flow uncertainty was controlled by flow state (low, high) and gauging method. Water quality presented a more complex picture with many component errors. For all variables, it was easy to find examples where relative error magnitude exceeded 40%. We discuss some of the recent developments in hydrology which increase the need for guidance on typical error magnitudes, in particular when doing comparative/regionalisation and multi-objective analysis. Increased sharing of data, comparisons between multiple catchments, and storage in national/international databases can mean that data-users are far removed from data collection, but require good uncertainty information to reduce bias in comparisons or catchment regionalisation studies. Recently it has become more common for hydrologists to use multiple data types and sources within a single study. This may be driven by complex water management questions which integrate water quantity, quality and ecology; or by recognition of the value of auxiliary data to understand hydrological processes. We discuss briefly the impact of data uncertainty on the increasingly popular use of diagnostic signatures for hydrological process understanding and model development.

  15. Impact of Flow-Dependent Error Correlations and Tropospheric Chemistry on Assimilated Ozone

    NASA Technical Reports Server (NTRS)

    Wargan, K.; Stajner, I.; Hayashi, H.; Pawson, S.; Jones, D. B. A.

    2003-01-01

    The presentation compares different versions of a global three-dimensional ozone data assimilation system developed at NASA's Data Assimilation Office. The Solar Backscatter Ultraviolet/2 (SBUV/2) total and partial ozone column retrievals are the sole data assimilated in all of the experiments presented. We study the impact of changing the forecast error covariance model from a version assuming static correlations to one that captures a short-term Lagrangian evolution of those correlations. This is further combined with a study of the impact of neglecting the tropospheric ozone production, loss and dry deposition rates, which are obtained from the Harvard GEOS-CHEM model. We compare statistical characteristics of the assimilated data and the results of validation against independent observations, obtained from WMO balloon-borne sondes and the Polar Ozone and Aerosol Measurement (POAM) III instrument. Experiments show that allowing forecast error correlations to evolve with the flow results in a positive impact on assimilated ozone within the regions where data were not assimilated, particularly at high latitudes in both hemispheres. On the other hand, the main sensitivity to tropospheric chemistry is in the Tropics and sub-Tropics. The best agreement between the assimilated ozone and the in-situ sonde data is in the experiment using both flow-dependent error covariances and tropospheric chemistry.

  16. Peak-locking error reduction by birefringent optical diffusers

    NASA Astrophysics Data System (ADS)

    Kislaya, Ankur; Sciacchitano, Andrea

    2018-02-01

    The use of optical diffusers for the reduction of peak-locking errors in particle image velocimetry is investigated. The working principle of the optical diffusers is based on the concept of birefringence, where the incoming rays are subject to different deflections depending on the light direction and polarization. The performances of the diffusers are assessed via wind tunnel measurements in uniform flow and wall-bounded turbulence. Comparison with best-practice image defocusing is also conducted. It is found that the optical diffusers yield an increase of the particle image diameter up to 10 µm in the sensor plane. Comparison with reference measurements showed a reduction of both random and systematic errors by a factor of 3, even at low imaging signal-to-noise ratio.

  17. Linear error analysis of slope-area discharge determinations

    USGS Publications Warehouse

    Kirby, W.H.

    1987-01-01

    The slope-area method can be used to calculate peak flood discharges when current-meter measurements are not possible. This calculation depends on several quantities, such as water-surface fall, that are subject to large measurement errors. Other critical quantities, such as Manning's n, are not even amenable to direct measurement but can only be estimated. Finally, scour and fill may cause gross discrepancies between the observed condition of the channel and the hydraulic conditions during the flood peak. The effects of these potential errors on the accuracy of the computed discharge have been estimated by statistical error analysis using a Taylor-series approximation of the discharge formula and the well-known formula for the variance of a sum of correlated random variates. The resultant error variance of the computed discharge is a weighted sum of covariances of the various observational errors. The weights depend on the hydraulic and geometric configuration of the channel. The mathematical analysis confirms the rule of thumb that relative errors in computed discharge increase rapidly when velocity heads exceed the water-surface fall, when the flow field is expanding and when lateral velocity variation (alpha) is large. It also confirms the extreme importance of accurately assessing the presence of scour or fill. © 1987.
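
    The variance formula this analysis relies on is first-order error propagation, Var(Q) ≈ J·Cov·Jᵀ; the sketch below shows the computation with hypothetical sensitivities and error covariances, not the report's actual values.

```python
import numpy as np

def discharge_variance(J, cov):
    """First-order (Taylor-series) error propagation: Var(Q) ~ J @ Cov @ J,
    where J holds the partial derivatives of the discharge formula with
    respect to each measured or estimated quantity."""
    J = np.asarray(J, dtype=float)
    return J @ cov @ J

# Hypothetical sensitivities of Q to water-surface fall, Manning's n and area,
# and an (assumed independent) covariance matrix of their observational errors:
J = np.array([50.0, -800.0, 2.5])            # dQ/d(fall), dQ/dn, dQ/d(area)
Cov = np.diag([0.02 ** 2, 0.005 ** 2, 4.0 ** 2])

print(f"standard error of Q ~ {np.sqrt(discharge_variance(J, Cov)):.1f} m^3/s")
# Correlated errors would appear as off-diagonal terms in Cov, which is how the
# "weighted sum of covariances" in the abstract enters the result.
```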

  18. Aquatic habitat mapping with an acoustic doppler current profiler: Considerations for data quality

    USGS Publications Warehouse

    Gaeuman, David; Jacobson, Robert B.

    2005-01-01

    When mounted on a boat or other moving platform, acoustic Doppler current profilers (ADCPs) can be used to map a wide range of ecologically significant phenomena, including measures of fluid shear, turbulence, vorticity, and near-bed sediment transport. However, the instrument movement necessary for mapping applications can generate significant errors, many of which have not been adequately described. This report focuses on the mechanisms by which moving-platform errors are generated, and quantifies their magnitudes under typical habitat-mapping conditions. The potential for velocity errors caused by misalignment of the instrument's internal compass is widely recognized, but has not previously been quantified for moving instruments. Numerical analyses show that even relatively minor compass misalignments can produce significant velocity errors, depending on the ratio of absolute instrument velocity to the target velocity and on the relative directions of instrument and target motion. A maximum absolute instrument velocity of about 1 m/s is recommended for most mapping applications. Lower velocities are appropriate when making bed velocity measurements, an emerging application that makes use of ADCP bottom-tracking to measure the velocity of sediment particles at the bed. The mechanisms by which heterogeneities in the flow velocity field generate horizontal velocity errors are also quantified, and some basic limitations in the effectiveness of standard error-detection criteria for identifying these errors are described. Bed velocity measurements may be particularly vulnerable to errors caused by spatial variability in the sediment transport field.

  19. SVM-based multisensor data fusion for phase concentration measurement in biomass-coal co-combustion

    NASA Astrophysics Data System (ADS)

    Wang, Xiaoxin; Hu, Hongli; Jia, Huiqin; Tang, Kaihao

    2018-05-01

    In this paper, the electrical method combines an electrostatic sensor and a capacitance sensor to measure the phase concentration of pulverized coal/biomass/air three-phase flow through data fusion technology. In order to eliminate the effects of flow regimes and improve the accuracy of the phase concentration measurement, mel-frequency cepstral coefficient (MFCC) features extracted from the electrostatic signals are used to train a Continuous Gaussian Mixture Hidden Markov Model (CGHMM) for flow regime identification. A Support Vector Machine (SVM) is introduced to establish the concentration information fusion model under the identified flow regimes. The CGHMM models and SVM models are implemented on a digital signal processor (DSP) to realize on-line accurate measurement. The DSP flow regime identification time is 1.4 ms, and the concentration prediction time is 164 μs, which can fully meet the real-time requirement. The average absolute value of the relative error of the pulverized coal is about 1.5% and that of the biomass is about 2.2%.

  20. Design and setup of intermittent-flow respirometry system for aquatic organisms.

    PubMed

    Svendsen, M B S; Bushnell, P G; Steffensen, J F

    2016-01-01

    Intermittent-flow respirometry is an experimental protocol for measuring oxygen consumption in aquatic organisms that utilizes the best features of closed (stop-flow) and flow-through respirometry while eliminating (or at least reducing) some of their inherent problems. By interspersing short periods of closed-chamber oxygen consumption measurements with regular flush periods, accurate oxygen uptake rate measurements can be made without the accumulation of waste products, particularly carbon dioxide, which may confound results. Automating the procedure with easily available hardware and software further reduces error by allowing many measurements to be made over long periods thereby minimizing animal stress due to acclimation issues. This paper describes some of the fundamental principles that need to be considered when designing and carrying out automated intermittent-flow respirometry (e.g. chamber size, flush rate, flush time, chamber mixing, measurement periods and temperature control). Finally, recent advances in oxygen probe technology and open source automation software will be discussed in the context of assembling relatively low cost and reliable measurement systems. © 2015 The Fisheries Society of the British Isles.
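
    As a sketch of the core calculation in intermittent-flow respirometry (one common formulation, not necessarily the authors' exact implementation), oxygen uptake is taken from the slope of the O2 decline during each closed phase; the chamber volume, animal mass and synthetic trace below are hypothetical.

```python
import numpy as np

def oxygen_uptake_rate(time_h, o2_mg_per_l, effective_vol_l, animal_mass_kg):
    """Mass-specific oxygen uptake (mg O2 kg^-1 h^-1) for one closed phase:
    linear slope of the O2 decline times effective water volume, per unit mass."""
    slope = np.polyfit(time_h, o2_mg_per_l, 1)[0]   # mg O2 L^-1 h^-1
    return -slope * effective_vol_l / animal_mass_kg

# Hypothetical 10-minute closed phase sampled once per minute:
t = np.arange(10) / 60.0                            # hours
o2 = 8.0 - 6.0 * t                                  # idealized linear decline
mo2 = oxygen_uptake_rate(t, o2, effective_vol_l=2.0 - 0.1, animal_mass_kg=0.1)
print(f"MO2 ~ {mo2:.0f} mg O2 / (kg h)")
# Effective volume is chamber plus tubing volume minus animal volume; the flush
# phase that follows restores O2 and washes out CO2 before the next measurement.
```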

  1. Flow modeling and permeability estimation using borehole flow logs in heterogeneous fractured formations

    USGS Publications Warehouse

    Paillet, Frederick L.

    1998-01-01

    A numerical model of flow in the vicinity of a borehole is used to analyze flowmeter data obtained with high-resolution flowmeters. The model is designed to (1) precisely compute flow in a borehole, (2) approximate the effects of flow in surrounding aquifers on the measured borehole flow, (3) allow for an arbitrary number (N) of entry/exit points connected to M < N far-field aquifers, and (4) be consistent with the practical limitations of flowmeter measurements such as limits of resolution, typical measurement error, and finite measurement periods. The model is used in three modes: (1) a quasi-steady pumping mode where there is no ambient flow, (2) a steady flow mode where ambient differences in far-field water levels drive flow between fracture zones in the borehole, and (3) a cross-borehole test mode where pumping in an adjacent borehole drives flow in the observation borehole. The model gives estimates of transmissivity for any number of fractures in steady or quasi-steady flow experiments that agree with straddle-packer test data. Field examples show how these cross-borehole-type curves can be used to estimate the storage coefficient of fractures and bedding planes and to determine whether fractures intersecting a borehole at different locations are hydraulically connected in the surrounding rock mass.

  2. Development of an air flow thermal balance calorimeter

    NASA Technical Reports Server (NTRS)

    Sherfey, J. M.

    1972-01-01

    An air flow calorimeter, based on the idea of balancing an unknown rate of heat evolution with a known rate of heat evolution, was developed. Under restricted conditions, the prototype system is capable of measuring thermal wattages from 10 milliwatts to 1 watt, with an error no greater than 1 percent. Data were obtained which reveal system weaknesses and point to modifications which would effect significant improvements.

  3. Delivery of tidal volume from four anaesthesia ventilators during volume-controlled ventilation: a bench study.

    PubMed

    Wallon, G; Bonnet, A; Guérin, C

    2013-06-01

    Tidal volume (V(T)) must be accurately delivered by anaesthesia ventilators in the volume-controlled ventilation mode in order for lung protective ventilation to be effective. However, the impact of fresh gas flow (FGF) and lung mechanics on delivery of V(T) by the newest anaesthesia ventilators has not been reported. We measured delivered V(T) (V(TI)) from four anaesthesia ventilators (Aisys™, Flow-i™, Primus™, and Zeus™) on a pneumatic test lung set with three combinations of lung compliance (C, ml cm H2O(-1)) and resistance (R, cm H2O litre(-1) s(-2)): C60R5, C30R5, C60R20. For each CR, three FGF rates (0.5, 3, 10 litre min(-1)) were investigated at three set V(T)s (300, 500, 800 ml) and two values of PEEP (0 and 10 cm H2O). The volume error = [(V(TI) - V(Tset))/V(Tset)] ×100 was computed in body temperature and pressure-saturated conditions and compared using analysis of variance. For each CR and each set V(T), the absolute value of the volume error significantly declined from Aisys™ to Flow-i™, Zeus™, and Primus™. For C60R5, these values were 12.5% for Aisys™, 5% for Flow-i™ and Zeus™, and 0% for Primus™. With an increase in FGF, absolute values of the volume error increased only for Aisys™ and Zeus™. However, in C30R5, the volume error was minimal at mid-FGF for Aisys™. The results were similar at PEEP 10 cm H2O. Under experimental conditions, the volume error differed significantly between the four new anaesthesia ventilators tested and was influenced by FGF, although this effect may not be clinically relevant.
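
    The volume error defined in the abstract is straightforward to compute; a small sketch with hypothetical delivered volumes follows.

```python
def volume_error_percent(v_delivered_ml, v_set_ml):
    """Volume error as defined in the study: (V_TI - V_Tset) / V_Tset * 100."""
    return (v_delivered_ml - v_set_ml) / v_set_ml * 100.0

# Hypothetical delivered volumes for a set tidal volume of 500 ml;
# a negative value indicates under-delivery, a positive value over-delivery:
for v_delivered in (500.0, 475.0, 562.5):
    print(f"{volume_error_percent(v_delivered, 500.0):+.1f}%")
```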

  4. The MEMS process of a micro friction sensor

    NASA Astrophysics Data System (ADS)

    Yuan, Ming-Quan; Lei, Qiang; Wang, Xiong

    2018-02-01

    Research and testing techniques for skin friction sensors are an important support for hypersonic aircraft development. Compared with a conventional skin friction sensor, the MEMS skin friction sensor has the advantages of small size, high sensitivity, good stability and dynamic response. The MEMS skin friction sensor can be integrated with other flow-field sensors whose processes are compatible with it to achieve multi-physical measurement of the flow field, and the micro friction balance sensor array enables accurate, large-area measurement of the near-wall flow. A MEMS skin friction sensor structure is proposed in which the sensing element is not in direct contact with the flow field. The MEMS fabrication process of the sensing element is described in detail. Thermal silicon oxide is used as the mask to solve the etch selectivity problem of silicon DRIE. The optimized process parameters for silicon DRIE are: etching power 1600 W, LF power 100 W; SF6 flux 360 sccm; C4F8 flux 300 sccm; O2 flux 300 sccm. With a Cr/Au mask, the etch depth of the glass shallow groove can be controlled in a 30 °C low-concentration HF solution; spray etching and wafer rotation improve the etched surface quality of the glass shallow groove. MEMS skin friction sensor samples were fabricated by the above MEMS process, and the results show that the error in the length and width of the elastic cantilever is within 2 μm, the depth error of the shallow groove is less than 0.03 μm, and the static capacitance error is within 0.2 pF, which satisfies the design requirements.

  5. De-biasing the dynamic mode decomposition for applied Koopman spectral analysis of noisy datasets

    NASA Astrophysics Data System (ADS)

    Hemati, Maziar S.; Rowley, Clarence W.; Deem, Eric A.; Cattafesta, Louis N.

    2017-08-01

    The dynamic mode decomposition (DMD)—a popular method for performing data-driven Koopman spectral analysis—has gained increased popularity for extracting dynamically meaningful spatiotemporal descriptions of fluid flows from snapshot measurements. Often times, DMD descriptions can be used for predictive purposes as well, which enables informed decision-making based on DMD model forecasts. Despite its widespread use and utility, DMD can fail to yield accurate dynamical descriptions when the measured snapshot data are imprecise due to, e.g., sensor noise. Here, we express DMD as a two-stage algorithm in order to isolate a source of systematic error. We show that DMD's first stage, a subspace projection step, systematically introduces bias errors by processing snapshots asymmetrically. To remove this systematic error, we propose utilizing an augmented snapshot matrix in a subspace projection step, as in problems of total least-squares, in order to account for the error present in all snapshots. The resulting unbiased and noise-aware total DMD (TDMD) formulation reduces to standard DMD in the absence of snapshot errors, while the two-stage perspective generalizes the de-biasing framework to other related methods as well. TDMD's performance is demonstrated in numerical and experimental fluids examples. In particular, in the analysis of time-resolved particle image velocimetry data for a separated flow, TDMD outperforms standard DMD by providing dynamical interpretations that are consistent with alternative analysis techniques. Further, TDMD extracts modes that reveal detailed spatial structures missed by standard DMD.
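
    A minimal sketch of the de-biasing idea described above (not the authors' code): project both snapshot matrices onto the leading right-singular subspace of the augmented matrix before applying standard exact DMD. The random test data below exist only to exercise the shapes.

```python
import numpy as np

def tdmd(X, Y, rank):
    """Total-least-squares DMD sketch: noise in X and Y is treated symmetrically
    by projecting both onto the row space of the augmented matrix [X; Y]."""
    Z = np.vstack([X, Y])                          # augmented snapshot matrix
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    P = Vt[:rank].conj().T @ Vt[:rank]             # projector onto leading row space
    Xb, Yb = X @ P, Y @ P                          # de-biased snapshot pairs

    # Standard exact DMD on the projected pairs:
    U, s, Wt = np.linalg.svd(Xb, full_matrices=False)
    U, s, Wt = U[:, :rank], s[:rank], Wt[:rank]
    Atilde = U.conj().T @ Yb @ Wt.conj().T / s
    eigvals, eigvecs = np.linalg.eig(Atilde)
    modes = (Yb @ Wt.conj().T / s) @ eigvecs       # exact DMD modes
    return eigvals, modes

rng = np.random.default_rng(0)
X = rng.standard_normal((64, 40))                  # states x snapshots
Y = rng.standard_normal((64, 40))                  # time-shifted snapshots
eigvals, modes = tdmd(X, Y, rank=5)
print(eigvals)
```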

  6. Error Analysis and Performance Data from an Automated Azimuth Measuring System,

    DTIC Science & Technology

    1981-02-17

    A detailed error analysis of the system and methods to improve performance and accuracy are presented. Hardware discussed includes microprocessors, tape drives, input and output hardware, a dual-axis tiltmeter mounted on the azimuth gimbal of each ALS, six tiltmeters arranged on an optical ..., and ... velocity air flowing through tubes along the optical paths to each target; temperature sensors are located in each ...

  7. Air Force Operational Test and Evaluation Center, Volume 2, Number 2

    DTIC Science & Technology

    1988-01-01

    The special class of attributes are recorded, cost or benefit. In place of the normalization (1), we propose the following normalization ... a comprehensive set of modular test tools designed to provide flexible data reduction, basic data flow to meet requirements at test start, then building to ... possible. A combination of the two position error measurement techniques are used to accumulate a position error; SLR is a method of fitting a linear model ...

  8. An Integrated Instrumentation System for Velocity, Concentration and Mass Flow Rate Measurement of Solid Particles Based on Electrostatic and Capacitance Sensors.

    PubMed

    Li, Jian; Kong, Ming; Xu, Chuanlong; Wang, Shimin; Fan, Ying

    2015-12-10

    The online and continuous measurement of velocity, concentration and mass flow rate of pneumatically conveyed solid particles for the high-efficiency utilization of energy and raw materials has become increasingly significant. In this paper, an integrated instrumentation system for the velocity, concentration and mass flow rate measurement of dense phase pneumatically conveyed solid particles based on electrostatic and capacitance sensors is developed. The electrostatic sensors are used for particle mean velocity measurement in combination with the cross-correlation technique, while the capacitance sensor with helical surface-plate electrodes, which has a relatively homogeneous sensitivity distribution, is employed for the measurement of particle concentration, and its capacitance is measured by an electrostatic-immune AC-based circuit. The solid mass flow rate can be further calculated from the measured velocity and concentration. The developed instrumentation system for velocity and concentration measurement is verified and calibrated on a pulley rig and through static experiments, respectively. Finally the system is evaluated with glass beads on a gravity-fed rig. The experimental results demonstrate that the system is capable of accurate solid mass flow rate measurement, and the relative error is within −3% to 8% for glass bead mass flow rates ranging from 0.13 kg/s to 0.9 kg/s.
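
    The velocity measurement relies on cross-correlating the two electrostatic sensor signals to find the transit time over a known electrode spacing; a minimal sketch with synthetic signals and an assumed geometry follows.

```python
import numpy as np

def transit_time_velocity(upstream, downstream, spacing_m, fs_hz):
    """Mean particle velocity from two sensor signals: the lag maximizing their
    cross-correlation is the transit time over the known electrode spacing."""
    up = upstream - upstream.mean()
    down = downstream - downstream.mean()
    corr = np.correlate(down, up, mode="full")
    lag = np.argmax(corr) - (len(up) - 1)          # samples; >0 if downstream lags
    return spacing_m * fs_hz / lag

# Synthetic test: downstream sensor sees the upstream signal delayed by 25 samples
# (spacing 0.1 m, sampling 5 kHz -> expected velocity 20 m/s):
rng = np.random.default_rng(1)
up = rng.standard_normal(2000)
down = np.roll(up, 25) + 0.1 * rng.standard_normal(2000)
print(transit_time_velocity(up, down, spacing_m=0.1, fs_hz=5000))
```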

  9. Uncertainty based pressure reconstruction from velocity measurement with generalized least squares

    NASA Astrophysics Data System (ADS)

    Zhang, Jiacheng; Scalo, Carlo; Vlachos, Pavlos

    2017-11-01

    A method using generalized least squares to reconstruct the instantaneous pressure field from velocity measurements and velocity uncertainty is introduced and applied to both planar and volumetric flow data. Pressure gradients are computed on a staggered grid from the flow acceleration. The variance-covariance matrix of the pressure gradients is evaluated from the velocity uncertainty by approximating the pressure gradient error as a linear combination of velocity errors. An overdetermined system of linear equations which relates the pressure to the computed pressure gradients is formulated and then solved using generalized least squares with the variance-covariance matrix of the pressure gradients. By comparing the reconstructed pressure field against other methods, such as solving the pressure Poisson equation, omni-directional integration, and ordinary least squares reconstruction, the generalized least squares method is found to be more robust to noise in the velocity measurement. The improvement in the reconstructed pressure becomes more pronounced as the velocity measurement becomes less accurate and more heteroscedastic. The uncertainty of the reconstructed pressure field is also quantified and compared across the different methods.
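
    A toy illustration of the generalized least squares step described above: pressure unknowns are solved from an overdetermined set of gradient equations, with the gradient covariance used as the weighting. The 1-D grid, measured gradients and covariances below are hypothetical.

```python
import numpy as np

def gls_solve(G, b, cov_b):
    """Generalized least squares: minimize (G p - b)^T W (G p - b), W = inv(cov_b),
    so that noisier gradient equations are down-weighted."""
    W = np.linalg.inv(cov_b)
    return np.linalg.solve(G.T @ W @ G, G.T @ W @ b)

# Three pressure nodes, two difference (gradient) equations and one reference
# equation pinning p0 = 0:
G = np.array([[-1.0, 1.0, 0.0],
              [ 0.0, -1.0, 1.0],
              [ 1.0, 0.0, 0.0]])
b = np.array([2.0, 3.5, 0.0])                # measured gradients + reference value
cov_b = np.diag([0.1 ** 2, 0.5 ** 2, 1e-6])  # second gradient is far noisier
print(gls_solve(G, b, cov_b))                # -> approximately [0.0, 2.0, 5.5]
```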

  10. Simultaneous Moisture Content and Mass Flow Measurements in Wood Chip Flows Using Coupled Dielectric and Impact Sensors.

    PubMed

    Pan, Pengmin; McDonald, Timothy; Fulton, John; Via, Brian; Hung, John

    2016-12-23

    An 8-electrode capacitance tomography (ECT) sensor was built and used to measure moisture content (MC) and mass flow of pine chip flows. The device was capable of directly measuring total water quantity in a sample but was sensitive to both dry matter and moisture, and therefore required a second measurement of mass flow to calculate MC. Two means of calculating the mass flow were used: the first being an impact sensor to measure total mass flow, and the second a volumetric approach based on measuring total area occupied by wood in images generated using the capacitance sensor's tomographic mode. Tests were made on 109 groups of wood chips ranging in moisture content from 14% to 120% (dry basis) and wet weight of 280 to 1100 g. Sixty groups were randomly selected as a calibration set, and the remaining were used for validation of the sensor's performance. For the combined capacitance/force transducer system, root mean square errors of prediction (RMSEP) for wet mass flow and moisture content were 13.42% and 16.61%, respectively. RMSEP using the combined volumetric mass flow/capacitance sensor for dry mass flow and moisture content were 22.89% and 24.16%, respectively. Either of the approaches was concluded to be feasible for prediction of moisture content in pine chip flows, but combining the impact and capacitance sensors was easier to implement. In situations where flows could not be impeded, however, the tomographic approach would likely be more useful.
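
    The RMSEP figures above can be reproduced from a validation set with a few lines; the sketch below uses one common relative normalization (by the mean of the reference values) and hypothetical data, since the paper's exact convention is not restated here.

```python
import numpy as np

def rmsep_percent(predicted, reference):
    """Root mean square error of prediction, relative to the mean reference value."""
    predicted = np.asarray(predicted, dtype=float)
    reference = np.asarray(reference, dtype=float)
    rmse = np.sqrt(np.mean((predicted - reference) ** 2))
    return 100.0 * rmse / reference.mean()

# Hypothetical validation pairs of moisture content (% dry basis):
print(f"RMSEP ~ {rmsep_percent([40, 55, 90, 110], [38, 60, 85, 118]):.1f}%")
```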

  11. Micro-Viscometer for Measuring Shear-Varying Blood Viscosity over a Wide-Ranging Shear Rate.

    PubMed

    Kim, Byung Jun; Lee, Seung Yeob; Jee, Solkeun; Atajanov, Arslan; Yang, Sung

    2017-06-20

    In this study, a micro-viscometer is developed for measuring shear-varying blood viscosity over a wide-ranging shear rate. The micro-viscometer consists of 10 microfluidic channel arrays, each of which has a different micro-channel width. The proposed design enables the retrieval of 10 different shear rates from a single flow rate, thereby enabling the measurement of shear-varying blood viscosity under a fixed flow rate condition. For this purpose, an optimal design that guarantees accurate viscosity measurement is selected from a parametric study. The functionality of the micro-viscometer is verified by both numerical and experimental studies. The proposed micro-viscometer shows 6.8% (numerical) and 5.3% (experimental) relative error when compared to the result from a standard rotational viscometer. Moreover, a reliability test is performed by repeated measurement (N = 7), and the result shows 2.69 ± 2.19% for the mean relative error. Accurate viscosity measurements are performed on blood samples with variations in hematocrit (35%, 45%, and 55%), which significantly influences blood viscosity. Since blood viscosity correlates with various physical parameters of the blood, the micro-viscometer is anticipated to be a significant advancement toward the realization of blood-on-a-chip.

  12. Suspended sediment fluxes in a tidal wetland: Measurement, controlling factors, and error analysis

    USGS Publications Warehouse

    Ganju, N.K.; Schoellhamer, D.H.; Bergamaschi, B.A.

    2005-01-01

    Suspended sediment fluxes to and from tidal wetlands are of increasing concern because of habitat restoration efforts, wetland sustainability as sea level rises, and potential contaminant accumulation. We measured water and sediment fluxes through two channels on Browns Island, at the landward end of San Francisco Bay, United States, to determine the factors that control sediment fluxes on and off the island. In situ instrumentation was deployed between October 10 and November 13, 2003. Acoustic Doppler current profilers and the index velocity method were employed to calculate water fluxes. Suspended sediment concentrations (SSC) were determined with optical sensors and cross-sectional water sampling. All procedures were analyzed for their contribution to total error in the flux measurement. The inability to close the water balance and determination of constituent concentration were identified as the main sources of error; total error was 27% for net sediment flux. The water budget for the island was computed with an unaccounted input of 0.20 m3 s-1 (22% of mean inflow), after considering channel flow, change in water storage, evapotranspiration, and precipitation. The net imbalance may be a combination of groundwater seepage, overland flow, and flow through minor channels. Change of island water storage, caused by local variations in water surface elevation, dominated the tidally averaged water flux. These variations were mainly caused by wind and barometric pressure change, which alter regional water levels throughout the Sacramento-San Joaquin River Delta. Peak instantaneous ebb flow was 35% greater than peak flood flow, indicating an ebb-dominant system, though dominance varied with the spring-neap cycle. SSC were controlled by wind-wave resuspension adjacent to the island and local tidal currents that mobilized sediment from the channel bed. During neap tides sediment was imported onto the island, but during spring tides sediment was exported because the main channel became ebb dominant. Over the 34-d monitoring period, 14,000 kg of suspended sediment were imported through the two channels. The water imbalance may affect the sediment balance if the unmeasured water transport pathways are capable of transporting large amounts of sediment. We estimate that a maximum of 2,800 kg of sediment may have been exported through unmeasured pathways, giving a minimum net import of 11,200 kg. Sediment flux measurements provide insight into tidal to fortnightly marsh sedimentation processes, especially in complex systems where sedimentation is spatially and temporally variable. © 2005 Estuarine Research Federation.

  13. Analysis of uncertainties and convergence of the statistical quantities in turbulent wall-bounded flows by means of a physically based criterion

    NASA Astrophysics Data System (ADS)

    Andrade, João Rodrigo; Martins, Ramon Silva; Thompson, Roney Leon; Mompean, Gilmar; da Silveira Neto, Aristeu

    2018-04-01

    The present paper provides an analysis of the statistical uncertainties associated with direct numerical simulation (DNS) results and experimental data for turbulent channel and pipe flows, showing a new physically based quantification of these errors, to improve the determination of the statistical deviations between DNSs and experiments. The analysis is carried out using a recently proposed criterion by Thompson et al. ["A methodology to evaluate statistical errors in DNS data of plane channel flows," Comput. Fluids 130, 1-7 (2016)] for fully turbulent plane channel flows, where the mean velocity error is estimated by considering the Reynolds stress tensor, and using the balance of the mean force equation. It also presents how the residual error evolves in time for a DNS of a plane channel flow, and the influence of the Reynolds number on its convergence rate. The root mean square of the residual error is shown in order to capture a single quantitative value of the error associated with the dimensionless averaging time. The evolution in time of the error norm is compared with the final error provided by DNS data of similar Reynolds numbers available in the literature. A direct consequence of this approach is that it was possible to compare different numerical results and experimental data, providing an improved understanding of the convergence of the statistical quantities in turbulent wall-bounded flows.

  14. Overcoming spatio-temporal limitations using dynamically scaled in vitro PC-MRI - A flow field comparison to true-scale computer simulations of idealized, stented and patient-specific left main bifurcations.

    PubMed

    Beier, Susann; Ormiston, John; Webster, Mark; Cater, John; Norris, Stuart; Medrano-Gracia, Pau; Young, Alistair; Gilbert, Kathleen; Cowan, Brett

    2016-08-01

    The majority of patients with angina or heart failure have coronary artery disease. Left main bifurcations are particularly susceptible to pathological narrowing. Flow is a major factor in atheroma development, but limitations in imaging technology such as spatio-temporal resolution, signal-to-noise ratio (SNRv), and imaging artefacts prevent in vivo investigations. Computational fluid dynamics (CFD) modelling is a common numerical approach to study flow, but it requires a cautious and rigorous application for meaningful results. Left main bifurcation angles of 40°, 80° and 110° were found to represent the spread of an atlas based on 100 computed tomography angiograms. Three left mains with these bifurcation angles were reconstructed with 1) idealized, 2) stented, and 3) patient-specific geometry. These were then scaled up by a factor of approximately 7 and 3D printed as large phantoms. Their flow was reproduced using a blood-analogous, dynamically scaled steady flow circuit, enabling in vitro phase-contrast magnetic resonance (PC-MRI) measurements. After threshold segmentation the image data were registered to true-scale CFD of the same coronary geometry using a coherent point drift algorithm, yielding a small covariance error (σ² < 5.8×10⁻⁴). Natural-neighbour interpolation of the CFD data onto the PC-MRI grid enabled direct flow field comparison, showing very good agreement in magnitude (error 2-12%) and directional changes (r² = 0.87-0.91), and stent-induced flow alterations were measurable for the first time. PC-MRI over-estimated velocities close to the wall, possibly due to partial voluming. Bifurcation shape determined the development of slow flow regions, which created lower SNRv regions and increased discrepancies. These can likely be minimised in future by testing different similarity parameters to reduce acquisition error and improve correlation further. It was demonstrated that in vitro large-phantom acquisition correlates with true-scale coronary flow simulations when dynamically scaled, and thus can overcome current PC-MRI spatio-temporal limitations. This novel method enables experimental assessment of stent-induced flow alterations, and in future may elevate CFD coronary flow simulations by providing sophisticated boundary conditions, and enable investigations of stenosis phantoms.

  15. Empirical models to predict the volumes of debris flows generated by recently burned basins in the western U.S.

    USGS Publications Warehouse

    Gartner, J.E.; Cannon, S.H.; Santi, P.M.; deWolfe, V.G.

    2008-01-01

    Recently burned basins frequently produce debris flows in response to moderate-to-severe rainfall. Post-fire hazard assessments of debris flows are most useful when they predict the volume of material that may flow out of a burned basin. This study develops a set of empirically-based models that predict potential volumes of wildfire-related debris flows in different regions and geologic settings. The models were developed using data from 53 recently burned basins in Colorado, Utah and California. The volumes of debris flows in these basins were determined by either measuring the volume of material eroded from the channels, or by estimating the amount of material removed from debris retention basins. For each basin, independent variables thought to affect the volume of the debris flow were determined. These variables include measures of basin morphology, basin areas burned at different severities, soil material properties, rock type, and rainfall amounts and intensities for storms triggering debris flows. Using these data, multiple regression analyses were used to create separate predictive models for volumes of debris flows generated by burned basins in six separate regions or settings, including the western U.S., southern California, the Rocky Mountain region, and basins underlain by sedimentary, metamorphic and granitic rocks. An evaluation of these models indicated that the best model (the Western U.S. model) explains 83% of the variability in the volumes of the debris flows, and includes variables that describe the basin area with slopes greater than or equal to 30%, the basin area burned at moderate and high severity, and total storm rainfall. This model was independently validated by comparing volumes of debris flows reported in the literature to volumes estimated using the model. Eighty-seven percent of the reported volumes were within two residual standard errors of the volumes predicted using the model. This model is an improvement over previous models in that it includes a measure of burn severity and an estimate of modeling errors. The application of this model, in conjunction with models for the probability of debris flows, will enable more complete and rapid assessments of debris flow hazards following wildfire.

  16. [Application of three heat pulse technique-based methods to determine the stem sap flow].

    PubMed

    Wang, Sheng; Fan, Jun

    2015-08-01

    It is of critical importance to acquire tree transpiration characteristics through sap flow methodology in order to understand tree water physiology, forest ecology and ecosystem water exchange. Tri-probe heat pulse (TPHP) sensors, which are widely utilized in measurements of soil thermal parameters and soil evaporation, were applied to measure Salix matsudana sap flow density (Vs) via the heat-ratio method (HRM), the T-Max method and the single-probe heat pulse probe (SHPP) method, and a comparative analysis was conducted against results measured with additional Granier thermal dissipation probes (TDP). The results showed that it took about five weeks to reach a stable measurement stage after TPHP installation; Vs measured with the three methods in the early stage after installation was 135%-220% higher than Vs in the stable measurement stage; and Vs estimated via the HRM, T-Max and SHPP methods was significantly linearly correlated with Vs estimated via the TDP method, with R² of 0.93, 0.73 and 0.91, respectively, while R² between Vs measured by SHPP and HRM reached 0.94. HRM had relatively higher precision in measuring low rates and reverse sap flow. The SHPP method appeared very promising for sap flow measurement because of its configuration simplicity and high measuring accuracy, but it could not distinguish the direction of flow. The T-Max method had relatively higher error in sap flow measurement and could not measure sap flow below 5 cm³·cm⁻²·h⁻¹, so it should not be used alone; however, it can measure the thermal diffusivity needed to calculate sap flow when other methods are applied. It was recommended to choose a proper method, or a combination of several methods, to measure stem sap flow based on the specific research purpose.
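    For orientation, the core of the heat-ratio method (HRM) mentioned above reduces to a logarithmic ratio of the temperature rises measured symmetrically up- and downstream of the heater. The sketch below is a generic textbook form with illustrative constants, not the calibration used in the study.

    ```python
    import numpy as np

    def hrm_heat_pulse_velocity(dT_down, dT_up, k=2.5e-3, x=0.6):
        """Heat-ratio method: heat-pulse velocity in cm/h from the ratio of
        temperature rises downstream (dT_down) and upstream (dT_up) of the
        heater.  k = wood thermal diffusivity (cm^2/s), x = probe spacing (cm);
        the default values are placeholders, not calibrated constants."""
        return (k / x) * np.log(np.asarray(dT_down) / np.asarray(dT_up)) * 3600.0
    ```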

  17. Mid-infrared laser-absorption diagnostic for vapor-phase measurements in an evaporating n-decane aerosol

    NASA Astrophysics Data System (ADS)

    Porter, J. M.; Jeffries, J. B.; Hanson, R. K.

    2009-09-01

    A novel three-wavelength mid-infrared laser-based absorption/extinction diagnostic has been developed for simultaneous measurement of temperature and vapor-phase mole fraction in an evaporating hydrocarbon fuel aerosol (vapor and liquid droplets). The measurement technique was demonstrated for an n-decane aerosol with D₅₀ ≈ 3 μm in steady and shock-heated flows with a measurement bandwidth of 125 kHz. Laser wavelengths were selected from FTIR measurements of the C-H stretching band of vapor and liquid n-decane near 3.4 μm (3000 cm⁻¹), and from modeled light scattering from droplets. Measurements were made for vapor mole fractions below 2.3 percent with errors less than 10 percent, and simultaneous temperature measurements over the range 300 K < T < 900 K were made with errors less than 3 percent. The measurement technique is designed to provide accurate values of temperature and vapor mole fraction in evaporating polydispersed aerosols with small mean diameters (D₅₀ < 10 μm), where near-infrared laser-based scattering corrections are prone to error.

  18. NASA airborne Doppler lidar program: Data characteristics of 1981

    NASA Technical Reports Server (NTRS)

    Lee, R. W.

    1982-01-01

    The first flights of the NASA/Marshall airborne CO2 Doppler lidar wind measuring system were made during the summer of 1981. Successful measurements of two-dimensional flow fields were made to ranges of 15 km from the aircraft track. The characteristics of the data obtained are examined. A study of various artifacts introduced into the data set by incomplete compensation for aircraft dynamics is summarized. Most of these artifacts can be corrected by post processing, which reduces velocity errors in the reconstructed flow field to remarkably low levels.

  19. Temporal analysis of the frequency and duration of low and high streamflow: Years of record needed to characterize streamflow variability

    USGS Publications Warehouse

    Huh, S.; Dickey, D.A.; Meador, M.R.; Ruhl, K.E.

    2005-01-01

    A temporal analysis of the number and duration of exceedences of high- and low-flow thresholds was conducted to determine the number of years required to detect a level shift using data from Virginia, North Carolina, and South Carolina. Two methods were used - ordinary least squares assuming a known error variance and generalized least squares without a known error variance. Using ordinary least squares, the mean number of years required to detect a one standard deviation level shift in measures of low-flow variability was 57.2 (28.6 on either side of the break), compared to 40.0 years for measures of high-flow variability. These means become 57.6 and 41.6 when generalized least squares is used. No significant relations between years and elevation or drainage area were detected (P > 0.05). Cluster analysis did not suggest geographic patterns in years related to physiography or major hydrologic regions. Referring to the number of observations required to detect a one standard deviation shift as 'characterizing' the variability, it appears that at least 20 years of record on either side of a shift may be necessary to adequately characterize high-flow variability. A longer streamflow record (about 30 years on either side) may be required to characterize low-flow variability. © 2005 Elsevier B.V. All rights reserved.
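    For context, the classical two-sample power calculation for detecting a one-standard-deviation level shift under independent errors is sketched below; it is not the ordinary or generalized least-squares procedure of the study, and serial correlation in annual flow statistics would inflate the record lengths it returns.

    ```python
    from scipy.stats import norm

    def years_per_side(shift_in_sd=1.0, alpha=0.05, power=0.8):
        """Approximate record length needed on each side of a break to detect
        a level shift of 'shift_in_sd' standard deviations, assuming
        independent, normally distributed annual values."""
        z_a = norm.ppf(1.0 - alpha / 2.0)   # two-sided significance
        z_b = norm.ppf(power)               # desired power
        return 2.0 * ((z_a + z_b) / shift_in_sd) ** 2
    ```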

  20. Evaluation of hydrocarbon flow standard facility equipped with double-wing diverter using four types of working liquids

    NASA Astrophysics Data System (ADS)

    Doihara, R.; Shimada, T.; Cheong, K. H.; Terao, Y.

    2017-06-01

    A flow calibration facility based on the gravimetric method using a double-wing diverter for hydrocarbon flows from 0.1 m³ h⁻¹ to 15 m³ h⁻¹ was constructed as a national measurement standard in Japan. The original working liquids were kerosene and light oil. The calibration facility was modified to calibrate flowmeters with two additional working liquids, industrial gasoline (flash point > 40 °C) and spindle oil, to achieve calibration over a wide viscosity range at the same calibration test rig. The kinematic viscosity range is 1.2 mm² s⁻¹ to 24 mm² s⁻¹. The contributions to the measurement uncertainty due to different types of working liquids were evaluated experimentally in this study. The evaporation error was reduced by using a seal system at the weighing tank inlet. The uncertainty due to droplets from the diverter wings was reduced by a modified diverter operation. The diverter timing errors for all types of working liquids were estimated. The expanded uncertainties for the calibration facility were estimated to be 0.020% for mass flow and 0.030% for volumetric flow for all considered types of liquids. Internal comparisons with other calibration facilities were also conducted, and the agreement was confirmed to be within the claimed expanded uncertainties.

  1. Bayesian Modeling of Perceived Surface Slant from Actively-Generated and Passively-Observed Optic Flow

    PubMed Central

    Caudek, Corrado; Fantoni, Carlo; Domini, Fulvio

    2011-01-01

    We measured perceived depth from the optic flow (a) when showing a stationary physical or virtual object to observers who moved their head at a normal or slower speed, and (b) when simulating the same optic flow on a computer and presenting it to stationary observers. Our results show that perceived surface slant is systematically distorted, for both the active and the passive viewing of physical or virtual surfaces. These distortions are modulated by head translation speed, with perceived slant increasing directly with the local velocity gradient of the optic flow. This empirical result allows us to determine the relative merits of two alternative approaches aimed at explaining perceived surface slant in active vision: an “inverse optics” model that takes head motion information into account, and a probabilistic model that ignores extra-retinal signals. We compare these two approaches within the framework of the Bayesian theory. The “inverse optics” Bayesian model produces veridical slant estimates if the optic flow and the head translation velocity are measured with no error; because of the influence of a “prior” for flatness, the slant estimates become systematically biased as the measurement errors increase. The Bayesian model, which ignores the observer's motion, always produces distorted estimates of surface slant. Interestingly, the predictions of this second model, not those of the first one, are consistent with our empirical findings. The present results suggest that (a) in active vision perceived surface slant may be the product of probabilistic processes which do not guarantee the correct solution, and (b) extra-retinal signals may be mainly used for a better measurement of retinal information. PMID:21533197

  2. Background-Oriented Schlieren (BOS) for Scramjet Inlet-isolator Investigation

    NASA Astrophysics Data System (ADS)

    Che Idris, Azam; Rashdan Saad, Mohd; Hing Lo, Kin; Kontis, Konstantinos

    2018-05-01

    The background-oriented schlieren (BOS) technique is a relatively recently developed non-intrusive flow diagnostic method whose capabilities have yet to be fully explored. In this paper, the BOS technique is applied to investigate the general flow field characteristics inside a generic scramjet inlet-isolator at Mach 5. The difficulty of finding the delicate balance between measurement sensitivity and image focusing over the measurement area is demonstrated, as are the differences between direct cross-correlation (DCC) and fast Fourier transform (FFT) raw data processing algorithms. As an exploratory study of BOS capability, this paper finds that BOS is simple yet robust enough to visualize complex flow in a scramjet inlet in hypersonic flow. However, in this case its quantitative data can be strongly affected by three-dimensionality, which obscures the density values and introduces significant errors.

  3. Least Median of Squares Filtering of Locally Optimal Point Matches for Compressible Flow Image Registration

    PubMed Central

    Castillo, Edward; Castillo, Richard; White, Benjamin; Rojo, Javier; Guerrero, Thomas

    2012-01-01

    Compressible flow based image registration operates under the assumption that the mass of the imaged material is conserved from one image to the next. Depending on how the mass conservation assumption is modeled, the performance of existing compressible flow methods is limited by factors such as image quality, noise, large magnitude voxel displacements, and computational requirements. The Least Median of Squares Filtered Compressible Flow (LFC) method introduced here is based on a localized, nonlinear least squares, compressible flow model that describes the displacement of a single voxel that lends itself to a simple grid search (block matching) optimization strategy. Spatially inaccurate grid search point matches, corresponding to erroneous local minimizers of the nonlinear compressible flow model, are removed by a novel filtering approach based on least median of squares fitting and the forward search outlier detection method. The spatial accuracy of the method is measured using ten thoracic CT image sets and large samples of expert determined landmarks (available at www.dir-lab.com). The LFC method produces an average error within the intra-observer error on eight of the ten cases, indicating that the method is capable of achieving a high spatial accuracy for thoracic CT registration. PMID:22797602
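    A generic least-median-of-squares style outlier filter is sketched below to illustrate the idea of discarding grid-search matches whose residuals are inconsistent with the bulk of the data; it is a simplified stand-in, not the LFC filtering and forward-search procedure itself, and all names are assumptions.

    ```python
    import numpy as np

    def lmeds_filter(displacements, n_trials=500, rng=None):
        """Least-median-of-squares style filtering of candidate point matches.
        displacements: (N, 3) array of voxel displacement estimates.
        Repeatedly fits a robust 'typical' displacement from small random
        subsets, keeps the fit with the smallest median squared residual,
        and flags matches far from it as outliers."""
        displacements = np.asarray(displacements, dtype=float)
        rng = np.random.default_rng(rng)
        best_med, best_center = np.inf, None
        for _ in range(n_trials):
            sample = displacements[rng.choice(len(displacements), size=3, replace=False)]
            center = sample.mean(axis=0)
            med = np.median(np.sum((displacements - center) ** 2, axis=1))
            if med < best_med:
                best_med, best_center = med, center
        scale = 1.4826 * np.sqrt(best_med)  # robust scale estimate
        inliers = np.linalg.norm(displacements - best_center, axis=1) < 2.5 * scale
        return inliers
    ```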

  4. The Clinical Research Tool: a high-performance microdialysis-based system for reliably measuring interstitial fluid glucose concentration.

    PubMed

    Ocvirk, Gregor; Hajnsek, Martin; Gillen, Ralph; Guenther, Arnfried; Hochmuth, Gernot; Kamecke, Ulrike; Koelker, Karl-Heinz; Kraemer, Peter; Obermaier, Karin; Reinheimer, Cornelia; Jendrike, Nina; Freckmann, Guido

    2009-05-01

    A novel microdialysis-based continuous glucose monitoring system, the so-called Clinical Research Tool (CRT), is presented. The CRT was designed exclusively for investigational use to offer high analytical accuracy and reliability. The CRT was built to avoid signal artifacts due to catheter clogging, flow obstruction by air bubbles, and flow variation caused by inconstant pumping. For differentiation between physiological events and system artifacts, the sensor current, counter electrode and polarization voltage, battery voltage, sensor temperature, and flow rate are recorded at a rate of 1 Hz. In vitro characterization with buffered glucose solutions (c_glucose = 0-26 × 10⁻³ mol liter⁻¹) over 120 h yielded a mean absolute relative error (MARE) of 2.9 ± 0.9% and a recorded mean flow rate of 330 ± 48 nl/min with periodic flow rate variation amounting to 24 ± 7%. The first 120 h of in vivo testing was conducted with five type 1 diabetes subjects wearing two systems each. A mean flow rate of 350 ± 59 nl/min and a periodic variation of 22 ± 6% were recorded. Utilizing 3 blood glucose measurements per day and a physical lag time of 1980 s, retrospective calibration of the 10 in vivo experiments yielded a MARE value of 12.4 ± 5.7%. Clarke error grid analysis resulted in 81.0%, 16.6%, 0.8%, 1.6%, and 0% in regions A, B, C, D, and E, respectively. The CRT demonstrates exceptional reliability of system operation and very good measurement performance. The ability to differentiate between artifacts and physiological effects suggests the use of the CRT as a reference tool in clinical investigations. © 2009 Diabetes Technology Society.
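    The mean absolute relative error (MARE) quoted above is, under its usual definition (an assumption here, since the exact formula is not reproduced in the abstract), simply the average of the absolute relative deviations from the reference values, as sketched below.

    ```python
    import numpy as np

    def mare_percent(measured, reference):
        """Mean absolute relative error in percent, relative to the reference
        (e.g., laboratory or blood glucose) values."""
        measured = np.asarray(measured, dtype=float)
        reference = np.asarray(reference, dtype=float)
        return 100.0 * np.mean(np.abs(measured - reference) / reference)
    ```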

  5. The Clinical Research Tool: A High-Performance Microdialysis-Based System for Reliably Measuring Interstitial Fluid Glucose Concentration

    PubMed Central

    Ocvirk, Gregor; Hajnsek, Martin; Gillen, Ralph; Guenther, Arnfried; Hochmuth, Gernot; Kamecke, Ulrike; Koelker, Karl-Heinz; Kraemer, Peter; Obermaier, Karin; Reinheimer, Cornelia; Jendrike, Nina; Freckmann, Guido

    2009-01-01

    Background A novel microdialysis-based continuous glucose monitoring system, the so-called Clinical Research Tool (CRT), is presented. The CRT was designed exclusively for investigational use to offer high analytical accuracy and reliability. The CRT was built to avoid signal artifacts due to catheter clogging, flow obstruction by air bubbles, and flow variation caused by inconstant pumping. For differentiation between physiological events and system artifacts, the sensor current, counter electrode and polarization voltage, battery voltage, sensor temperature, and flow rate are recorded at a rate of 1 Hz. Method In vitro characterization with buffered glucose solutions (c_glucose = 0-26 × 10⁻³ mol liter⁻¹) over 120 h yielded a mean absolute relative error (MARE) of 2.9 ± 0.9% and a recorded mean flow rate of 330 ± 48 nl/min with periodic flow rate variation amounting to 24 ± 7%. The first 120 h of in vivo testing was conducted with five type 1 diabetes subjects wearing two systems each. A mean flow rate of 350 ± 59 nl/min and a periodic variation of 22 ± 6% were recorded. Results Utilizing 3 blood glucose measurements per day and a physical lag time of 1980 s, retrospective calibration of the 10 in vivo experiments yielded a MARE value of 12.4 ± 5.7%. Clarke error grid analysis resulted in 81.0%, 16.6%, 0.8%, 1.6%, and 0% in regions A, B, C, D, and E, respectively. Conclusion The CRT demonstrates exceptional reliability of system operation and very good measurement performance. The ability to differentiate between artifacts and physiological effects suggests the use of the CRT as a reference tool in clinical investigations. PMID:20144284

  6. Specific Impulse and Mass Flow Rate Error

    NASA Technical Reports Server (NTRS)

    Gregory, Don A.

    2005-01-01

    Specific impulse is defined in words in many ways. Very early in any text on rocket propulsion, a phrase similar to "specific impulse is the thrust force per unit propellant weight flow per second" will be found (2). It is only after seeing the mathematics written down that the definition means something physically to the scientists and engineers responsible for either measuring it or using someone's value for it.

  7. A novel data reduction technique for single slanted hot-wire measurements used to study incompressible compressor tip leakage flows

    NASA Astrophysics Data System (ADS)

    Berdanier, Reid A.; Key, Nicole L.

    2016-03-01

    The single slanted hot-wire technique has been used extensively as a method for measuring three velocity components in turbomachinery applications. The cross-flow orientation of probes with respect to the mean flow in rotating machinery results in detrimental prong interference effects when using multi-wire probes. As a result, the single slanted hot-wire technique is often preferred. Typical data reduction techniques solve a set of nonlinear equations determined by curve fits to calibration data. A new method is proposed which utilizes a look-up table method applied to a simulated triple-wire sensor with application to turbomachinery environments having subsonic, incompressible flows. Specific discussion regarding corrections for temperature and density changes present in a multistage compressor application is included, and additional consideration is given to the experimental error which accompanies each data reduction process. Hot-wire data collected from a three-stage research compressor with two rotor tip clearances are used to compare the look-up table technique with the traditional nonlinear equation method. The look-up table approach yields velocity errors of less than 5 % for test conditions deviating by more than 20 °C from calibration conditions (on par with the nonlinear solver method), while requiring less than 10 % of the computational processing time.

  8. Bulk flow in the combined 2MTF and 6dFGSv surveys

    NASA Astrophysics Data System (ADS)

    Qin, Fei; Howlett, Cullan; Staveley-Smith, Lister; Hong, Tao

    2018-07-01

    We create a combined sample of 10 904 late- and early-type galaxies from the 2MTF and 6dFGSv surveys in order to accurately measure bulk flow in the local Universe. Galaxies and groups of galaxies common between the two surveys are used to verify that the difference in zero-points is <0.02 dex. We introduce a maximum likelihood estimator (ηMLE) for bulk flow measurements that allows for more accurate measurement in the presence of non-Gaussian measurement errors. To calibrate out residual biases due to the subtle interaction of selection effects, Malmquist bias and anisotropic sky distribution, the estimator is tested on mock catalogues generated from 16 independent large-scale GiggleZ and SURFS simulations. The bulk flow of the local Universe using the combined data set, corresponding to a scale size of 40 h-1 Mpc, is 288 ± 24 km s-1 in the direction (l, b) = (296 ± 6°, 21 ± 5°). This is the most accurate bulk flow measurement to date, and the amplitude of the flow is consistent with the Λ cold dark matter expectation for similar size scales.
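    As a schematic baseline (not the ηMLE estimator of the paper, which is constructed to handle non-Gaussian, log-distance errors), a maximum-likelihood bulk flow under simple Gaussian line-of-sight errors reduces to the weighted least-squares solve sketched below; all names are assumptions.

    ```python
    import numpy as np

    def bulk_flow_wls(v_pec, sigma, n_hat):
        """Simplified bulk-flow estimate: each galaxy contributes a line-of-sight
        peculiar velocity v_pec with uncertainty sigma along unit vector n_hat.
        Under Gaussian errors, the maximum-likelihood bulk flow solves a 3x3
        weighted least-squares system."""
        v_pec = np.asarray(v_pec, dtype=float)
        sigma = np.asarray(sigma, dtype=float)
        n_hat = np.asarray(n_hat, dtype=float)          # shape (N, 3)
        w = 1.0 / sigma**2
        A = np.einsum('i,ij,ik->jk', w, n_hat, n_hat)   # 3x3 normal matrix
        b = np.einsum('i,i,ij->j', w, v_pec, n_hat)     # weighted projections
        return np.linalg.solve(A, b)                    # bulk flow vector (km/s)
    ```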

  9. Bulk flow in the combined 2MTF and 6dFGSv surveys

    NASA Astrophysics Data System (ADS)

    Qin, Fei; Howlett, Cullan; Staveley-Smith, Lister; Hong, Tao

    2018-04-01

    We create a combined sample of 10,904 late- and early-type galaxies from the 2MTF and 6dFGSv surveys in order to accurately measure bulk flow in the local Universe. Galaxies and groups of galaxies common between the two surveys are used to verify that the difference in zero-points is <0.02 dex. We introduce a new maximum likelihood estimator (ηMLE) for bulk flow measurements which allows for more accurate measurement in the presence of non-Gaussian measurement errors. To calibrate out residual biases due to the subtle interaction of selection effects, Malmquist bias and anisotropic sky distribution, the estimator is tested on mock catalogues generated from 16 independent large-scale GiggleZ and SURFS simulations. The bulk flow of the local Universe using the combined data set, corresponding to a scale size of 40 h-1 Mpc, is 288 ± 24 km s-1 in the direction (l, b) = (296 ± 6°, 21 ± 5°). This is the most accurate bulk flow measurement to date, and the amplitude of the flow is consistent with the ΛCDM expectation for similar size scales.

  10. Cross sections for H(-) and Cl(-) production from HCl by dissociative electron attachment

    NASA Technical Reports Server (NTRS)

    Orient, O. J.; Srivastava, S. K.

    1985-01-01

    A crossed target beam-electron beam collision geometry and a quadrupole mass spectrometer have been used to conduct dissociative electron attachment cross section measurements for the case of H(-) and Cl(-) production from HCl. The relative flow technique is used to determine the absolute values of cross sections. A tabulation is given of the attachment energies corresponding to various cross section maxima. Error sources contributing to total errors are also estimated.

  11. Uncertainty in sap flow-based transpiration due to xylem properties

    NASA Astrophysics Data System (ADS)

    Looker, N. T.; Hu, J.; Martin, J. T.; Jencso, K. G.

    2014-12-01

    Transpiration, the evaporative loss of water from plants through their stomata, is a key component of the terrestrial water balance, influencing streamflow as well as regional convective systems. From a plant physiological perspective, transpiration is both a means of avoiding destructive leaf temperatures through evaporative cooling and a consequence of water loss through stomatal uptake of carbon dioxide. Despite its hydrologic and ecological significance, transpiration remains a notoriously challenging process to measure in heterogeneous landscapes. Sap flow methods, which estimate transpiration by tracking the velocity of a heat pulse emitted into the tree sap stream, have proven effective for relating transpiration dynamics to climatic variables. To scale sap flow-based transpiration from the measured domain (often <5 cm of tree cross-sectional area) to the whole-tree level, researchers generally assume constancy of scale factors (e.g., wood thermal diffusivity (k), radial and azimuthal distributions of sap velocity, and conducting sapwood area (As)) through time, across space, and within species. For the widely used heat-ratio sap flow method (HRM), we assessed the sensitivity of transpiration estimates to uncertainty in k (a function of wood moisture content and density) and As. A sensitivity analysis informed by distributions of wood moisture content, wood density and As sampled across a gradient of water availability indicates that uncertainty in these variables can impart substantial error when scaling sap flow measurements to the whole tree. For species with variable wood properties, the application of the HRM assuming a spatially constant k or As may systematically over- or underestimate whole-tree transpiration rates, resulting in compounded error in ecosystem-scale estimates of transpiration.
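    A first-order error-propagation sketch for the scaling step discussed above is given below, assuming independent uncertainties in sap velocity and conducting sapwood area; it is illustrative only and ignores the radial and azimuthal velocity variation also mentioned in the abstract.

    ```python
    import numpy as np

    def whole_tree_flow_with_uncertainty(Vs, dVs, As, dAs):
        """Whole-tree sap flow Q = Vs * As with first-order (independent-error)
        uncertainty propagation: dQ/Q = sqrt((dVs/Vs)**2 + (dAs/As)**2)."""
        Q = Vs * As
        dQ = Q * np.sqrt((dVs / Vs) ** 2 + (dAs / As) ** 2)
        return Q, dQ
    ```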

  12. Rayleigh Scattering Diagnostic for Measurement of Temperature and Velocity in Harsh Environments

    NASA Technical Reports Server (NTRS)

    Seasholtz, Richard G.; Greer, Lawrence C., III

    1998-01-01

    A molecular Rayleigh scattering system for temperature and velocity measurements in unseeded flows is described. The system is capable of making measurements in the harsh environments commonly found in aerospace test facilities, which may have high acoustic sound levels, varying temperatures, and high vibration levels. Light from an argon-ion laser is transmitted via an optical fiber to a remote location where two flow experiments were located. One was a subsonic free air jet; the second was a low-speed heated air jet. Rayleigh scattered light from the probe volume was transmitted through another optical fiber from the remote location to a controlled environment where a Fabry-Perot interferometer and cooled CCD camera were used to analyze the Rayleigh scattered light. Good agreement between the measured velocity and the velocity calculated from isentropic flow relations was demonstrated (less than 5 m/sec). The temperature measurements, however, exhibited systematic errors on the order of 10-15%.

  13. Measurements of Reynolds stress profiles in unstratified tidal flow

    USGS Publications Warehouse

    Stacey, M.T.; Monismith, Stephen G.; Burau, J.R.

    1999-01-01

    In this paper we present a method for measuring profiles of turbulence quantities using a broadband acoustic Doppler current profiler (ADCP). The method follows previous work on the continental shelf and extends the analysis to develop estimates of the errors associated with the estimation methods. ADCP data were collected in an unstratified channel and the results of the analysis are compared to theory. This comparison shows that the method provides an estimate of the Reynolds stresses, which is unbiased by Doppler noise, and an estimate of the turbulent kinetic energy (TKE) which is biased by an amount proportional to the Doppler noise. The noise in each of these quantities, as well as the bias in the TKE, matches well with the theoretical values produced by the error analysis. The quantification of profiles of Reynolds stresses simultaneous with the measurement of mean velocity profiles allows for extensive analysis of the turbulence of the flow. In this paper, we examine the relation between the turbulence and the mean flow through the calculation of u*, the friction velocity, and Cd, the coefficient of drag. Finally, we calculate quantities of particular interest in turbulence modeling and analysis, the characteristic lengthscales, including a lengthscale which represents the stream-wise scale of the eddies which dominate the Reynolds stresses. Copyright 1999 by the American Geophysical Union.
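    The along-beam variance technique underlying such ADCP Reynolds stress estimates can be summarised as below for a Janus-configured instrument; the beam numbering and sign convention are assumptions, and the key point is that the Doppler-noise variance, common to both beams, cancels in the difference.

    ```python
    import numpy as np

    def reynolds_stress_from_beam_variances(var_beam1, var_beam2, theta_deg=20.0):
        """Estimate the Reynolds shear stress component from the variances of
        along-beam velocities of two opposing ADCP beams inclined theta degrees
        from the vertical.  The sign depends on the beam orientation convention;
        the Doppler noise contribution cancels in the variance difference."""
        theta = np.radians(theta_deg)
        return (var_beam1 - var_beam2) / (4.0 * np.sin(theta) * np.cos(theta))
    ```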

  14. Numerical Error Estimation with UQ

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Korn, Peter; Marotzke, Jochem

    2014-05-01

    Ocean models are still in need of means to quantify model errors, which are inevitably made when running numerical experiments. The total model error can formally be decomposed into two parts, the formulation error and the discretization error. The formulation error arises from the continuous formulation of the model not fully describing the studied physical process. The discretization error arises from having to solve a discretized model instead of the continuously formulated model. Our work on error estimation is concerned with the discretization error. Given a solution of a discretized model, our general problem statement is to find a way to quantify the uncertainties due to discretization in physical quantities of interest (diagnostics), which are frequently used in Geophysical Fluid Dynamics. The approach we use to tackle this problem is called the "Goal Error Ensemble method". The basic idea of the Goal Error Ensemble method is that errors in diagnostics can be translated into a weighted sum of local model errors, which makes it conceptually based on the Dual Weighted Residual method from Computational Fluid Dynamics. In contrast to the Dual Weighted Residual method, these local model errors are not considered deterministically but are interpreted as local model uncertainty and described stochastically by a random process. The parameters for the random process are tuned with high-resolution near-initial model information. However, the original Goal Error Ensemble method, introduced in [1], was successfully evaluated only in the case of inviscid flows without lateral boundaries in a shallow-water framework and is hence of only limited use in a numerical ocean model. Our work consists in extending the method to bounded, viscous flows in a shallow-water framework. As our numerical model, we use the ICON-Shallow-Water model. In viscous flows our high-resolution information depends on the viscosity parameter, making our uncertainty measures viscosity-dependent. We will show that a sensible parameter can be chosen by using the Reynolds number as a criterion. Another topic we will discuss is the choice of the underlying distribution of the random process, which is especially important in the presence of lateral boundaries. We will present resulting error estimates for different height- and velocity-based diagnostics applied to the Munk gyre experiment. References: [1] F. Rauser: Error Estimation in Geophysical Fluid Dynamics through Learning; PhD Thesis, IMPRS-ESM, Hamburg, 2010. [2] F. Rauser, J. Marotzke, P. Korn: Ensemble-type numerical uncertainty quantification from single model integrations; SIAM/ASA Journal on Uncertainty Quantification, submitted.

  15. Accuracy improvement of the ice flow rate measurements on Antarctic ice sheet by DInSAR method

    NASA Astrophysics Data System (ADS)

    Shiramizu, Kaoru; Doi, Koichiro; Aoyama, Yuichi

    2015-04-01

    DInSAR (Differential Interferometric Synthetic Aperture Radar) is an effective tool for measuring the flow rate of slowly flowing ice streams on the Antarctic ice sheet with high resolution. In flow rate measurement by the DInSAR method, a Digital Elevation Model (DEM) is used twice in the estimation process: first to remove topographic fringes from the InSAR images, and then to project the obtained displacements along the Line-Of-Sight (LOS) direction onto the actual flow direction. ASTER-GDEM, which is widely used for InSAR processing of polar-region data, contains many errors, especially in the inland ice sheet area, and these errors yield irregular flow rates and directions. Therefore, the quality of the DEM has a substantial influence on the ice flow rate measurement. In this study, we created a new DEM (resolution 10 m; hereinafter referred to as PRISM-DEM) based on ALOS/PRISM images, and compared PRISM-DEM with ASTER-GDEM. The study area is around Skallen, 90 km south of Syowa Station, in the southern part of the Sôya Coast, East Antarctica. For making DInSAR images, we used 13 pairs of ALOS/PALSAR data (Path 633, Row 571-572) observed during the period from November 23, 2007 through January 16, 2011. PRISM-DEM covering the PALSAR scene was created from nadir and backward view images of ALOS/PRISM (observation date: 2009/1/18) by applying stereo processing with digital mapping equipment, and the automatically created primary DEM was then corrected manually to produce the final DEM. The number of irregular values of the actual ice flow rate was reduced by applying PRISM-DEM compared with ASTER-GDEM. Additionally, an averaged displacement of approximately 0.5 cm was obtained with PRISM-DEM over the outcrop area, where no crustal displacement is considered to have occurred during the ALOS/PALSAR recurrence period (46 days), whereas an averaged displacement of approximately 1.65 cm was observed with ASTER-GDEM. Since displacements over the outcrop area are considered to be apparent ones, this average can serve as a measure of the accuracy of flow rate estimation by DInSAR. Therefore, it is concluded that the accuracy of ice flow rate measurement can be improved by using PRISM-DEM. In this presentation, we will show the estimated flow rates of ice streams in the region of interest and discuss further accuracy improvements of this method.

  16. Impact of mismatched and misaligned laser light sheet profiles on PIV performance

    NASA Astrophysics Data System (ADS)

    Grayson, K.; de Silva, C. M.; Hutchins, N.; Marusic, I.

    2018-01-01

    The effect of mismatched or misaligned laser light sheet profiles on the quality of particle image velocimetry (PIV) results is considered in this study. Light sheet profiles with differing widths, shapes, or alignment can reduce the correlation between PIV images and increase experimental errors. Systematic PIV simulations isolate these behaviours to assess the sensitivity and implications of light sheet mismatch on measurements. The simulations in this work use flow fields from a turbulent boundary layer; however, the behaviours and impacts of laser profile mismatch are highly relevant to any fluid flow or PIV application. Experimental measurements from a turbulent boundary layer facility are incorporated, as well as additional simulations matched to experimental image characteristics, to validate the synthetic image analysis. Experimental laser profiles are captured using a modular laser profiling camera, designed to quantify the distribution of laser light sheet intensities and inform any corrective adjustments to an experimental configuration. Results suggest that an offset of just 1.35 standard deviations in the Gaussian light sheet intensity distributions can cause a 40% reduction in the average correlation coefficient and a 45% increase in spurious vectors. Errors in measured flow statistics are also amplified when two successive laser profiles are no longer well matched in alignment or intensity distribution. Consequently, an awareness of how laser light sheet overlap influences PIV results can guide faster setup of an experiment, as well as achieve superior experimental measurements.

  17. Droplet Sizing Research.

    DTIC Science & Technology

    1985-04-15

    Applications include aerosol studies, flue gas desulfurization, and spray drying. The measurement volume is defined by the intersection of apertures in front of two... A method to measure the size and velocity of individual particles in a flow is discussed. Results are presented for controlled monodisperse sprays and compared to flash photographs. Typical errors between predicted and measured sizes are less than 5%. Experimental...

  18. Flow Rates Measurement and Uncertainty Analysis in Multiple-Zone Water-Injection Wells from Fluid Temperature Profiles

    PubMed Central

    Reges, José E. O.; Salazar, A. O.; Maitelli, Carla W. S. P.; Carvalho, Lucas G.; Britto, Ursula J. B.

    2016-01-01

    This work is a contribution to the development of flow sensors in the oil and gas industry. It presents a methodology to measure the flow rates into multiple-zone water-injection wells from fluid temperature profiles and estimate the measurement uncertainty. First, a method to iteratively calculate the zonal flow rates using the Ramey (exponential) model was described. Next, this model was linearized to perform an uncertainty analysis. Then, a computer program to calculate the injected flow rates from experimental temperature profiles was developed. In the experimental part, a fluid temperature profile from a dual-zone water-injection well located in the Northeast Brazilian region was collected. Thus, calculated and measured flow rates were compared. The results proved that linearization error is negligible for practical purposes and the relative uncertainty increases as the flow rate decreases. The calculated values from both the Ramey and linear models were very close to the measured flow rates, presenting a difference of only 4.58 m³/d and 2.38 m³/d, respectively. Finally, the measurement uncertainties from the Ramey and linear models were equal to 1.22% and 1.40% (for injection zone 1); 10.47% and 9.88% (for injection zone 2). Therefore, the methodology was successfully validated and all objectives of this work were achieved. PMID:27420068
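    Purely for illustration (the exponential form, the constant formation temperature, and the omission of geothermal-gradient terms below are assumptions, not the formulation used in the paper), a Ramey-type profile treats the injected fluid temperature as relaxing toward the formation temperature with a relaxation length that grows with the injection rate; fitting that length per zone is what links the temperature profile to the zonal flow rate.

    ```python
    import numpy as np

    def ramey_like_profile(z, T_inj, T_formation, relax_length):
        """Simplified exponential (Ramey-type) temperature profile below an
        injection point: the fluid relaxes toward the surrounding formation
        temperature over a length scale that increases with the injected
        flow rate.  Geothermal-gradient terms are omitted for brevity."""
        return T_formation + (T_inj - T_formation) * np.exp(-np.asarray(z) / relax_length)

    def fit_relaxation_length(z, T_measured, T_inj, T_formation,
                              lengths=np.linspace(10.0, 5000.0, 500)):
        """Brute-force least-squares fit of the relaxation length to a measured
        profile; mapping this length to a flow rate requires the well's thermal
        constants, which are not reproduced here."""
        errs = [np.sum((ramey_like_profile(z, T_inj, T_formation, L) - T_measured) ** 2)
                for L in lengths]
        return lengths[int(np.argmin(errs))]
    ```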

  19. Study of accuracy of precipitation measurements using simulation method

    NASA Astrophysics Data System (ADS)

    Nagy, Zoltán; Lajos, Tamás; Morvai, Krisztián

    2013-04-01

    Precipitation is one of the most important meteorological parameters describing the state of the climate, and accurate measurement of precipitation is essential for deriving correct information from trends. The problem is that precipitation measurements are affected by systematic errors leading to an underestimation of actual precipitation, and these errors vary by precipitation type and gauge type. It is well known that wind speed is the most important environmental factor contributing to the underestimation of actual precipitation, especially for solid precipitation. To study and correct the errors of precipitation measurements there are two basic possibilities: · use of the results and conclusions of international precipitation measurement intercomparisons; · building standard reference gauges (DFIR, pit gauge) and making our own investigation. In 1999 the HMS attempted its own investigation and built standard reference gauges, but the cost-benefit ratio in the case of snow (use of the DFIR) was very poor (we had several winters without a significant amount of snow, while the condition of the DFIR was continuously deteriorating). Because of this problem, a new approach was needed: modelling performed by the Budapest University of Technology and Economics, Department of Fluid Mechanics, using the FLUENT 6.2 model. The ANSYS Fluent package is a full-featured fluid dynamics solution for modelling flow and other related physical phenomena. It provides the tools needed to describe atmospheric processes and to design and optimize new equipment. The CFD package includes solvers that accurately simulate the behaviour of a broad range of flows, from single-phase to multi-phase. The questions we wanted to answer are as follows: · How do the different types of gauges deform the airflow around themselves? · Can a quantitative estimate of the wind-induced error be given? · How does the use of a wind shield improve the accuracy of precipitation measurements? · What is the source of the error that can be detected at a tipping-bucket rain gauge in winter because of the heating power used? On our poster we would like to present the answers to the questions listed above.

  20. Simultaneous moisture content and mass flow measurements in wood chip flows using coupled dielectric and impact sensors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pan, Pengmin; McDonald, Timothy; Fulton, John

    An 8-electrode capacitance tomography (ECT) sensor was built and used to measure moisture content (MC) and mass flow of pine chip flows. The device was capable of directly measuring total water quantity in a sample but was sensitive to both dry matter and moisture, and therefore required a second measurement of mass flow to calculate MC. Two means of calculating the mass flow were used: the first being an impact sensor to measure total mass flow, and the second a volumetric approach based on measuring total area occupied by wood in images generated using the capacitance sensor’s tomographic mode. Tests were made on 109 groups of wood chips ranging in moisture content from 14% to 120% (dry basis) and wet weight of 280 to 1100 g. Sixty groups were randomly selected as a calibration set, and the remaining were used for validation of the sensor’s performance. For the combined capacitance/force transducer system, root mean square errors of prediction (RMSEP) for wet mass flow and moisture content were 13.42% and 16.61%, respectively. RMSEP using the combined volumetric mass flow/capacitance sensor for dry mass flow and moisture content were 22.89% and 24.16%, respectively. Either of the approaches was concluded to be feasible for prediction of moisture content in pine chip flows, but combining the impact and capacitance sensors was easier to implement. In situations where flows could not be impeded, however, the tomographic approach would likely be more useful.
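    The RMSEP figures quoted for the calibration/validation split follow the usual definition of root mean square error of prediction (an assumption about the exact formula used), as sketched below.

    ```python
    import numpy as np

    def rmsep(predicted, observed):
        """Root mean square error of prediction over a validation set."""
        predicted = np.asarray(predicted, dtype=float)
        observed = np.asarray(observed, dtype=float)
        return np.sqrt(np.mean((predicted - observed) ** 2))
    ```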

  1. Simultaneous moisture content and mass flow measurements in wood chip flows using coupled dielectric and impact sensors

    DOE PAGES

    Pan, Pengmin; McDonald, Timothy; Fulton, John; ...

    2016-12-23

    An 8-electrode capacitance tomography (ECT) sensor was built and used to measure moisture content (MC) and mass flow of pine chip flows. The device was capable of directly measuring total water quantity in a sample but was sensitive to both dry matter and moisture, and therefore required a second measurement of mass flow to calculate MC. Two means of calculating the mass flow were used: the first being an impact sensor to measure total mass flow, and the second a volumetric approach based on measuring total area occupied by wood in images generated using the capacitance sensor’s tomographic mode. Tests were made on 109 groups of wood chips ranging in moisture content from 14% to 120% (dry basis) and wet weight of 280 to 1100 g. Sixty groups were randomly selected as a calibration set, and the remaining were used for validation of the sensor’s performance. For the combined capacitance/force transducer system, root mean square errors of prediction (RMSEP) for wet mass flow and moisture content were 13.42% and 16.61%, respectively. RMSEP using the combined volumetric mass flow/capacitance sensor for dry mass flow and moisture content were 22.89% and 24.16%, respectively. Either of the approaches was concluded to be feasible for prediction of moisture content in pine chip flows, but combining the impact and capacitance sensors was easier to implement. In situations where flows could not be impeded, however, the tomographic approach would likely be more useful.

  2. Simultaneous Moisture Content and Mass Flow Measurements in Wood Chip Flows Using Coupled Dielectric and Impact Sensors

    PubMed Central

    Pan, Pengmin; McDonald, Timothy; Fulton, John; Via, Brian; Hung, John

    2016-01-01

    An 8-electrode capacitance tomography (ECT) sensor was built and used to measure moisture content (MC) and mass flow of pine chip flows. The device was capable of directly measuring total water quantity in a sample but was sensitive to both dry matter and moisture, and therefore required a second measurement of mass flow to calculate MC. Two means of calculating the mass flow were used: the first being an impact sensor to measure total mass flow, and the second a volumetric approach based on measuring total area occupied by wood in images generated using the capacitance sensor’s tomographic mode. Tests were made on 109 groups of wood chips ranging in moisture content from 14% to 120% (dry basis) and wet weight of 280 to 1100 g. Sixty groups were randomly selected as a calibration set, and the remaining were used for validation of the sensor’s performance. For the combined capacitance/force transducer system, root mean square errors of prediction (RMSEP) for wet mass flow and moisture content were 13.42% and 16.61%, respectively. RMSEP using the combined volumetric mass flow/capacitance sensor for dry mass flow and moisture content were 22.89% and 24.16%, respectively. Either of the approaches was concluded to be feasible for prediction of moisture content in pine chip flows, but combining the impact and capacitance sensors was easier to implement. In situations where flows could not be impeded, however, the tomographic approach would likely be more useful. PMID:28025536

  3. Thermal and heat flow instrumentation for the space shuttle Thermal Protection System

    NASA Technical Reports Server (NTRS)

    Hartman, G. J.; Neuner, G. J.; Pavlosky, J.

    1974-01-01

    The 100 mission lifetime requirement for the space shuttle orbiter vehicle dictates a unique set of requirements for the Thermal Protection System (TPS) thermal and heat flow instrumentation. This paper describes the design and development of such instrumentation with emphasis on assessment of the accuracy of the measurements when the instrumentation is an integral part of the TPS. The temperature and heat flow sensors considered for this application are described and the optimum choices discussed. Installation techniques are explored and the resulting impact on the system error defined.

  4. Low pressure gas flow analysis through an effusive inlet using mass spectrometry

    NASA Technical Reports Server (NTRS)

    Brown, David R.; Brown, Kenneth G.

    1988-01-01

    A mass spectrometric method for analyzing flow past and through an effusive inlet designed for use on the tethered satellite and other entering vehicles is discussed. Source stream concentrations of species in a gaseous mixture are determined using a calibration of measured mass spectral intensities versus source stream pressure for standard gas mixtures and pure gases. Concentrations are shown to be accurate within experimental error. Theoretical explanations for observed mass discrimination effects as they relate to the various flow situations in the effusive inlet and the experimental apparatus are discussed.

  5. A comparison of methods for deriving solute flux rates using long-term data from streams in the mirror lake watershed

    USGS Publications Warehouse

    Bukaveckas, P.A.; Likens, G.E.; Winter, T.C.; Buso, D.C.

    1998-01-01

    Calculation of chemical flux rates for streams requires integration of continuous measurements of discharge with discrete measurements of solute concentrations. We compared two commonly used methods for interpolating chemistry data (time-averaging and flow-weighting) to determine whether discrepancies between the two methods were large relative to other sources of error in estimating flux rates. Flux rates of dissolved Si and SO₄²⁻ were calculated from 10 years of data (1981-1990) for the NW inlet and Outlet of Mirror Lake and for a 40-day period (March 22 to April 30, 1993) during which we augmented our routine (weekly) chemical monitoring with collection of daily samples. The time-averaging method yielded higher estimates of solute flux during high-flow periods if no chemistry samples were collected corresponding to peak discharge. Concentration-discharge relationships should be used to interpolate stream chemistry during changing flow conditions if chemical changes are large. Caution should be used in choosing the appropriate time-scale over which data are pooled to derive the concentration-discharge regressions because the model parameters (slope and intercept) were found to be sensitive to seasonal and inter-annual variation. Both methods approximated solute flux to within 2-10% for a range of solutes that were monitored during the intensive sampling period. Our results suggest that errors arising from interpolation of stream chemistry data are small compared with other sources of error in developing watershed mass balances.
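    The two interpolation schemes being compared can be caricatured as below (Python; the linear interpolation used for the time-averaging variant, the daily time step, and the units are assumptions, not the authors' exact implementation).

    ```python
    import numpy as np

    def solute_flux_two_ways(Q_daily, C_sampled, sample_days):
        """Compare a time-averaging and a flow-weighting estimate of solute flux.
        Q_daily: daily discharge series; C_sampled: concentrations measured on
        the (integer) days listed in sample_days.  Units are assumed consistent
        (e.g., discharge in m^3/d and concentration in g/m^3)."""
        Q_daily = np.asarray(Q_daily, dtype=float)
        C_sampled = np.asarray(C_sampled, dtype=float)
        sample_days = np.asarray(sample_days, dtype=int)
        days = np.arange(len(Q_daily))
        # Time-averaging: interpolate concentration in time, then sum C*Q daily.
        C_interp = np.interp(days, sample_days, C_sampled)
        flux_time_avg = np.sum(C_interp * Q_daily)
        # Flow-weighting: discharge-weighted mean concentration applied to total flow.
        C_fw = np.sum(C_sampled * Q_daily[sample_days]) / np.sum(Q_daily[sample_days])
        flux_flow_weighted = C_fw * np.sum(Q_daily)
        return flux_time_avg, flux_flow_weighted
    ```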

  6. Evaluating Snow Data Assimilation Framework for Streamflow Forecasting Applications Using Hindcast Verification

    NASA Astrophysics Data System (ADS)

    Barik, M. G.; Hogue, T. S.; Franz, K. J.; He, M.

    2012-12-01

    Snow water equivalent (SWE) estimation is a key factor in producing reliable streamflow simulations and forecasts in snow dominated areas. However, measuring or predicting SWE has significant uncertainty. Sequential data assimilation, which updates states using both observed and modeled data based on error estimation, has been shown to reduce streamflow simulation errors but has had limited testing for forecasting applications. In the current study, a snow data assimilation framework integrated with the National Weather Service River Forecast System (NWSRFS) is evaluated for use in ensemble streamflow prediction (ESP). Seasonal water supply ESP hindcasts are generated for the North Fork of the American River Basin (NFARB) in northern California. Parameter sets from the California Nevada River Forecast Center (CNRFC), the Differential Evolution Adaptive Metropolis (DREAM) algorithm and the Multistep Automated Calibration Scheme (MACS) are tested both with and without sequential data assimilation. The traditional ESP method considers uncertainty in future climate conditions using historical temperature and precipitation time series to generate future streamflow scenarios conditioned on the current basin state. We include data uncertainty analysis in the forecasting framework through the DREAM-based parameter set which is part of a recently developed Integrated Uncertainty and Ensemble-based data Assimilation framework (ICEA). Extensive verification of all tested approaches is undertaken using traditional forecast verification measures, including root mean square error (RMSE), Nash-Sutcliffe efficiency coefficient (NSE), volumetric bias, joint distribution, rank probability score (RPS), and discrimination and reliability plots. In comparison to the RFC parameters, the DREAM and MACS sets show significant improvement in volumetric bias in flow. Use of assimilation improves hindcasts of higher flows but does not significantly improve performance in the mid flow and low flow categories.

  7. Error and Uncertainty Quantification in the Numerical Simulation of Complex Fluid Flows

    NASA Technical Reports Server (NTRS)

    Barth, Timothy J.

    2010-01-01

    The failure of numerical simulation to predict physical reality is often a direct consequence of the compounding effects of numerical error arising from finite-dimensional approximation and physical model uncertainty resulting from inexact knowledge and/or statistical representation. In this topical lecture, we briefly review systematic theories for quantifying numerical errors and restricted forms of model uncertainty occurring in simulations of fluid flow. A goal of this lecture is to elucidate both positive and negative aspects of applying these theories to practical fluid flow problems. Finite-element and finite-volume calculations of subsonic and hypersonic fluid flow are presented to contrast the differing roles of numerical error and model uncertainty for these problems.

  8. Optimizations for optical velocity measurements in narrow gaps

    NASA Astrophysics Data System (ADS)

    Schlüßler, Raimund; Blechschmidt, Christian; Czarske, Jürgen; Fischer, Andreas

    2013-09-01

    Measuring the flow velocity in small gaps or near a surface with a nonintrusive optical measurement technique is a challenging measurement task, as disturbing light reflections from the surface appear. However, these measurements are important, e.g., in order to understand and to design the leakage flow in the tip gap between the rotor blade end face and the housing of a turbomachine. Hence, methods to reduce the interfering light power and to correct measurement errors caused by it need to be developed and verified. Different alternatives of minimizing the interfering light power for optical flow measurements in small gaps are presented. By optimizing the beam shape of the applied illumination beam using a numerical diffraction simulation, the interfering light power is reduced by up to a factor of 100. In combination with a decrease of the reflection coefficient of the rotor blade surface, an additional reduction of the interfering light power below the used scattered light power is possible. Furthermore, a correction algorithm to decrease the measurement uncertainty of disturbed measurements is derived. These improvements enable optical three-dimensional three-component flow velocity measurements in submillimeter gaps or near a surface.

  9. 40 CFR 1065.602 - Statistics.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING... number of degrees of freedom, ν, as follows, noting that the εi are the errors (e.g., differences... a gas concentration is measured continuously from the raw exhaust of an engine, its flow-weighted...

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Harding, Samuel F.; Romero-Gomez, Pedro D. J.; Richmond, Marshall C.

    Standards provide recommendations for the best practices in the installation of current meters for measuring fluid flow in closed conduits. These include PTC-18 and IEC-41. Both of these standards refer to the requirements of the ISO Standard 3354 for cases where the velocity distribution is assumed to be regular and the flow steady. Due to the nature of the short converging intakes of Kaplan hydroturbines, these assumptions may be invalid if current meters are intended to be used to characterize turbine flows. In this study, we examine a combination of measurement guidelines from both standards by means of virtual current meters (VCM) set up over a simulated hydroturbine flow field. To this purpose, a computational fluid dynamics (CFD) model was developed to model the velocity field of a short converging intake of the Ice Harbor Dam on the Snake River, in the State of Washington. The detailed geometry and resulting wake of the submersible traveling screen (STS) at the first gate slot was of particular interest in the development of the CFD model using a detached eddy simulation (DES) turbulence solution. An array of virtual point velocity measurements was extracted from the resulting velocity field to simulate VCM at two virtual measurement (VM) locations at different distances downstream of the STS. The discharge through each bay was calculated from the VM using the graphical integration solution to the velocity-area method. This method of representing practical velocimetry techniques in a numerical flow field has been successfully used in a range of marine and conventional hydropower applications. A sensitivity analysis was performed to observe the effect of the VCM array resolution on the discharge error. The downstream VM section required 11-33% fewer VCM in the array than the upstream VM location to achieve a given discharge error. In general, more instruments were required to quantify the discharge at high levels of accuracy when the STS was introduced because of the increased spatial variability of the flow velocity.
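    The velocity-area principle that the virtual current meters feed into is, at its simplest, a weighted sum of point velocities over the sub-areas they represent; the rectangular version below is a deliberately crude stand-in for the graphical integration prescribed by the standards.

    ```python
    import numpy as np

    def discharge_velocity_area(point_velocities, sub_areas):
        """Discharge through a measurement section as the sum of point
        velocities times the sub-areas assigned to each (virtual) current
        meter.  The standards use graphical/curve-fit integration rather
        than this rectangular rule."""
        v = np.asarray(point_velocities, dtype=float)
        a = np.asarray(sub_areas, dtype=float)
        return float(np.sum(v * a))
    ```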

  11. Towards designing an optical-flow based colonoscopy tracking algorithm: a comparative study

    NASA Astrophysics Data System (ADS)

    Liu, Jianfei; Subramanian, Kalpathi R.; Yoo, Terry S.

    2013-03-01

    Automatic co-alignment of optical and virtual colonoscopy images can supplement traditional endoscopic procedures by providing more complete information of clinical value to the gastroenterologist. In this work, we present a comparative analysis of our optical flow based technique for colonoscopy tracking, in relation to current state of the art methods, in terms of tracking accuracy, system stability, and computational efficiency. Our optical-flow based colonoscopy tracking algorithm starts with computing multi-scale dense and sparse optical flow fields to measure image displacements. Camera motion parameters are then determined from optical flow fields by employing a Focus of Expansion (FOE) constrained egomotion estimation scheme. We analyze the design choices involved in the three major components of our algorithm: dense optical flow, sparse optical flow, and egomotion estimation. Brox's optical flow method [1], due to its high accuracy, was used to compare and evaluate our multi-scale dense optical flow scheme. SIFT [6] and Harris-affine features [7] were used to assess the accuracy of the multi-scale sparse optical flow, because of their wide use in tracking applications; the FOE-constrained egomotion estimation was compared with collinear [2], image deformation [10] and image derivative [4] based egomotion estimation methods, to understand the stability of our tracking system. Two virtual colonoscopy (VC) image sequences were used in the study, since the exact camera parameters (for each frame) were known. Dense optical flow results indicated that Brox's method was superior to multi-scale dense optical flow in estimating camera rotational velocities, but the final tracking errors were comparable, viz., 6 mm vs. 8 mm after the VC camera traveled 110 mm. Our approach was computationally more efficient, averaging 7.2 sec vs. 38 sec per frame. SIFT and Harris-affine features resulted in tracking errors of up to 70 mm, while our sparse optical flow error was 6 mm. The comparison among egomotion estimation algorithms showed that our FOE-constrained egomotion estimation method achieved the optimal balance between tracking accuracy and robustness. The comparative study demonstrated that our optical-flow based colonoscopy tracking algorithm maintains good accuracy and stability for routine use in clinical practice.

  12. Laser Velocimeter Measurements and Analysis in Turbulent Flows with Combustion. Part 2.

    DTIC Science & Technology

    1983-07-01

    ...sampling error for this sample size. Mean velocities and turbulence intensities were found to be statistically accurate to ±1% and 13%, respectively... Although the statistical error was found to be rather small (±1% for mean velocities and 13% for turbulence intensities), there can be additional...

  13. Error analysis for reducing noisy wide-gap concentric cylinder rheometric data for nonlinear fluids - Theory and applications

    NASA Technical Reports Server (NTRS)

    Borgia, Andrea; Spera, Frank J.

    1990-01-01

    This work discusses the propagation of errors for the recovery of the shear rate from wide-gap concentric cylinder viscometric measurements of non-Newtonian fluids. A least-square regression of stress on angular velocity data to a system of arbitrary functions is used to propagate the errors for the series solution to the viscometric flow developed by Krieger and Elrod (1953) and Pawlowski (1953) ('power-law' approximation) and for the first term of the series developed by Krieger (1968). A numerical experiment shows that, for measurements affected by significant errors, the first term of the Krieger-Elrod-Pawlowski series ('infinite radius' approximation) and the power-law approximation may recover the shear rate with equal accuracy as the full Krieger-Elrod-Pawlowski solution. An experiment on a clay slurry indicates that the clay has a larger yield stress at rest than during shearing, and that, for the range of shear rates investigated, a four-parameter constitutive equation approximates reasonably well its rheology. The error analysis presented is useful for studying the rheology of fluids such as particle suspensions, slurries, foams, and magma.
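
    As a hedged sketch of the "power-law" approximation referred to above (the full Krieger-Elrod-Pawlowski series is not reproduced), the snippet below takes the local flow index from the slope of log torque versus log angular velocity and evaluates the standard power-law Couette expression for the shear rate at the inner cylinder; the radii and data are hypothetical.

```python
# Minimal sketch, assuming the standard power-law approximation for a
# wide-gap concentric-cylinder (Couette) geometry. kappa = R_inner/R_outer.
import numpy as np

def shear_rate_power_law(omega, n, kappa):
    """Shear rate at the inner cylinder for a power-law fluid of index n."""
    return 2.0 * omega / (n * (1.0 - kappa ** (2.0 / n)))

omega = np.array([0.5, 1.0, 2.0, 4.0, 8.0])          # angular velocity, rad/s
torque = 0.02 * omega ** 0.6                          # synthetic power-law data
n = np.polyfit(np.log(omega), np.log(torque), 1)[0]   # local slope d lnT / d lnOmega
gamma_dot = shear_rate_power_law(omega, n, kappa=0.5)
```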

  14. Investigating the error sources of the online state of charge estimation methods for lithium-ion batteries in electric vehicles

    NASA Astrophysics Data System (ADS)

    Zheng, Yuejiu; Ouyang, Minggao; Han, Xuebing; Lu, Languang; Li, Jianqiu

    2018-02-01

    State of charge (SOC) estimation is generally acknowledged as one of the most important functions in the battery management system for lithium-ion batteries in new energy vehicles. Though every effort is made for various online SOC estimation methods to reliably increase the estimation accuracy as much as possible within the limited on-chip resources, little literature discusses the error sources for those SOC estimation methods. This paper firstly reviews the commonly studied SOC estimation methods from a conventional classification. A novel perspective focusing on the error analysis of the SOC estimation methods is proposed. SOC estimation methods are analyzed from the views of the measured values, models, algorithms and state parameters. Subsequently, the error flow charts are proposed to analyze the error sources from the signal measurement to the models and algorithms for the widely used online SOC estimation methods in new energy vehicles. Finally, with the consideration of the working conditions, choosing more reliable and applicable SOC estimation methods is discussed, and the future development of the promising online SOC estimation methods is suggested.
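
    As a simple illustration of one measurement-related error source of the kind discussed above (not taken from the paper), the sketch below shows how a constant current-sensor bias accumulates into SOC error under plain Coulomb counting; the capacity, bias, and current profile are hypothetical.

```python
# Illustrative sketch: Coulomb-counting SOC estimation, showing how a
# constant current-sensor bias accumulates into SOC error over time.
import numpy as np

def coulomb_count(soc0, current_a, dt_s, capacity_ah):
    """Integrate current (positive = discharge) into a SOC trajectory."""
    dsoc = -current_a * dt_s / 3600.0 / capacity_ah
    return soc0 + np.cumsum(dsoc)

dt, cap = 1.0, 50.0                       # 1 s sampling, 50 Ah cell (hypothetical)
i_true = np.full(3600, 25.0)              # 25 A discharge for one hour
i_meas = i_true + 0.5                     # 0.5 A sensor bias
soc_true = coulomb_count(1.0, i_true, dt, cap)
soc_est  = coulomb_count(1.0, i_meas, dt, cap)
print(soc_true[-1] - soc_est[-1])         # ~0.01, i.e. 1% SOC error per hour
```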

  15. Data assimilation with soil water content sensors and pedotransfer functions in soil water flow modeling

    USDA-ARS?s Scientific Manuscript database

    Soil water flow models are based on a set of simplified assumptions about the mechanisms, processes, and parameters of water retention and flow. That causes errors in soil water flow model predictions. Soil water content monitoring data can be used to reduce the errors in models. Data assimilation (...

  16. Cone-Probe Rake Design and Calibration for Supersonic Wind Tunnel Models

    NASA Technical Reports Server (NTRS)

    Won, Mark J.

    1999-01-01

    A series of experimental investigations were conducted at the NASA Langley Unitary Plan Wind Tunnel (UPWT) to calibrate cone-probe rakes designed to measure the flow field on 1-2% scale, high-speed wind tunnel models from Mach 2.15 to 2.4. The rakes were developed from a previous design that exhibited unfavorable measurement characteristics caused by a high probe spatial density and flow blockage from the rake body. Calibration parameters included Mach number, total pressure recovery, and flow angularity. Reference conditions were determined from a localized UPWT test section flow survey using a 10deg supersonic wedge probe. Test section Mach number and total pressure were determined using a novel iterative technique that accounted for boundary layer effects on the wedge surface. Cone-probe measurements were correlated to the surveyed flow conditions using analytical functions and recursive algorithms that resolved Mach number, pressure recovery, and flow angle to within +/-0.01, +/-1% and +/-0.1deg , respectively, for angles of attack and sideslip between +/-8deg. Uncertainty estimates indicated the overall cone-probe calibration accuracy was strongly influenced by the propagation of measurement error into the calculated results.

  17. Guide to Flow Measurement for Electric Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Frieman, Jason D.; Walker, Mitchell L. R.; Snyder, Steve

    2013-01-01

    In electric propulsion (EP) systems, accurate measurement of the propellant mass flow rate of gas or liquid to the thruster and external cathode is a key input in the calculation of thruster efficiency and specific impulse. Although such measurements are often achieved with commercial mass flow controllers and meters integrated into propellant feed systems, the variability in potential propellant options and flow requirements amongst the spectrum of EP power regimes and devices complicates meter selection, integration, and operation. At the direction of the Committee on Standards for Electric Propulsion Testing, a guide was jointly developed by members of the electric propulsion community to establish a unified document that contains the working principles, methods of implementation and analysis, and calibration techniques and recommendations on the use of mass flow meters in laboratory and spacecraft electric propulsion systems. The guide is applicable to EP devices of all types and power levels ranging from microthrusters to high-power ion engines and Hall effect thrusters. The establishment of a community standard on mass flow metering will help ensure the selection of the proper meter for each application. It will also improve the quality of system performance estimates by providing comprehensive information on the physical phenomena and systematic errors that must be accounted for during the analysis of flow measurement data. This paper will outline the standard methods and recommended practices described in the guide titled "Flow Measurement for Electric Propulsion Systems."

  18. Measuring skewness of red blood cell deformability distribution by laser ektacytometry

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nikitin, S Yu; Priezzhev, A V; Lugovtsov, A E

    An algorithm is proposed for measuring the parameters of red blood cell deformability distribution based on laser diffractometry of red blood cells in shear flow (ektacytometry). The algorithm is tested on specially prepared samples of rat blood. In these experiments we succeeded in measuring the mean deformability, deformability variance and skewness of red blood cell deformability distribution with errors of 10%, 15% and 35%, respectively. (laser biophotonics)

  19. Micro-Viscometer for Measuring Shear-Varying Blood Viscosity over a Wide-Ranging Shear Rate

    PubMed Central

    Kim, Byung Jun; Lee, Seung Yeob; Jee, Solkeun; Atajanov, Arslan; Yang, Sung

    2017-01-01

    In this study, a micro-viscometer is developed for measuring shear-varying blood viscosity over a wide-ranging shear rate. The micro-viscometer consists of 10 microfluidic channel arrays, each of which has a different micro-channel width. The proposed design enables the retrieval of 10 different shear rates from a single flow rate, thereby enabling the measurement of shear-varying blood viscosity with a fixed flow rate condition. For this purpose, an optimal design that guarantees accurate viscosity measurement is selected from a parametric study. The functionality of the micro-viscometer is verified by both numerical and experimental studies. The proposed micro-viscometer shows 6.8% (numerical) and 5.3% (experimental) in relative error when compared to the result from a standard rotational viscometer. Moreover, a reliability test is performed by repeated measurement (N = 7), and the result shows 2.69 ± 2.19% for the mean relative error. Accurate viscosity measurements are performed on blood samples with variations in the hematocrit (35%, 45%, and 55%), which significantly influences blood viscosity. Since blood viscosity is correlated with various physical parameters of the blood, the micro-viscometer is anticipated to be a significant advancement toward the realization of blood on a chip. PMID:28632151

  20. The accuracy of portable peak flow meters.

    PubMed

    Miller, M R; Dickinson, S A; Hitchings, D J

    1992-11-01

    The variability of peak expiratory flow (PEF) is now commonly used in the diagnosis and management of asthma. It is essential for PEF meters to have a linear response in order to obtain an unbiased measurement of PEF variability. As the accuracy and linearity of portable PEF meters have not been rigorously tested in recent years this aspect of their performance has been investigated. The response of several portable PEF meters was tested with absolute standards of flow generated by a computer driven, servo controlled pump and their response was compared with that of a pneumotachograph. For each device tested the readings were highly repeatable to within the limits of accuracy with which the pointer position can be assessed by eye. The between instrument variation in reading for six identical devices expressed as a 95% confidence limit was, on average across the range of flows, +/- 8.5 l/min for the Mini-Wright, +/- 7.9 l/min for the Vitalograph, and +/- 6.4 l/min for the Ferraris. PEF meters based on the Wright meter all had similar error profiles with overreading of up to 80 l/min in the mid flow range from 300 to 500 l/min. This overreading was greatest for the Mini-Wright and Ferraris devices, and less so for the original Wright and Vitalograph meters. A Micro-Medical Turbine meter was accurate up to 400 l/min and then began to underread by up to 60 l/min at 720 l/min. For the low range devices the Vitalograph device was accurate to within 10 l/min up to 200 l/min, with the Mini-Wright overreading by up to 30 l/min above 150 l/min. Although the Mini-Wright, Ferraris, and Vitalograph meters gave remarkably repeatable results their error profiles for the full range meters will lead to important errors in recording PEF variability. This may lead to incorrect diagnosis and bias in implementing strategies of asthma treatment based on PEF measurement.

  1. The accuracy of portable peak flow meters.

    PubMed Central

    Miller, M R; Dickinson, S A; Hitchings, D J

    1992-01-01

    BACKGROUND: The variability of peak expiratory flow (PEF) is now commonly used in the diagnosis and management of asthma. It is essential for PEF meters to have a linear response in order to obtain an unbiased measurement of PEF variability. As the accuracy and linearity of portable PEF meters have not been rigorously tested in recent years this aspect of their performance has been investigated. METHODS: The response of several portable PEF meters was tested with absolute standards of flow generated by a computer driven, servo controlled pump and their response was compared with that of a pneumotachograph. RESULTS: For each device tested the readings were highly repeatable to within the limits of accuracy with which the pointer position can be assessed by eye. The between instrument variation in reading for six identical devices expressed as a 95% confidence limit was, on average across the range of flows, +/- 8.5 l/min for the Mini-Wright, +/- 7.9 l/min for the Vitalograph, and +/- 6.4 l/min for the Ferraris. PEF meters based on the Wright meter all had similar error profiles with overreading of up to 80 l/min in the mid flow range from 300 to 500 l/min. This overreading was greatest for the Mini-Wright and Ferraris devices, and less so for the original Wright and Vitalograph meters. A Micro-Medical Turbine meter was accurate up to 400 l/min and then began to underread by up to 60 l/min at 720 l/min. For the low range devices the Vitalograph device was accurate to within 10 l/min up to 200 l/min, with the Mini-Wright overreading by up to 30 l/min above 150 l/min. CONCLUSION: Although the Mini-Wright, Ferraris, and Vitalograph meters gave remarkably repeatable results their error profiles for the full range meters will lead to important errors in recording PEF variability. This may lead to incorrect diagnosis and bias in implementing strategies of asthma treatment based on PEF measurement. PMID:1465746

  2. Axisymmetric Flow Properties for Magnetic Elements of Differing Strength

    NASA Technical Reports Server (NTRS)

    Rightmire-Upton, Lisa; Hathaway, David H.

    2012-01-01

    Aspects of the structure and dynamics of the flows in the Sun's surface shear layer remain uncertain and yet are critically important for understanding the observed magnetic behavior. In our previous studies of the axisymmetric transport of magnetic elements we found systematic changes in both the differential rotation and the meridional flow over the course of Solar Cycle 23. Here we examine how those flows depend upon the strength (and presumably anchoring depth) of the magnetic elements. Line of sight magnetograms obtained by the HMI instrument aboard SDO over the course of Carrington Rotation 2097 were mapped to heliographic coordinates and averaged over 12 minutes to remove the 5-min oscillations. Data masks were constructed based on the field strength of each mapped pixel to isolate magnetic elements of differing field strength. We used Local Correlation Tracking of the unmasked data (separated in time by 1- to 8-hours) to determine the longitudinal and latitudinal motions of the magnetic elements. We then calculated average flow velocities as functions of latitude and longitude from the central meridian for approx 600 image pairs over the 27-day rotation. Variations with longitude indicate and characterize systematic errors in the flow measurements associated with changes in the signal from disk center to limb. Removing these systematic errors reveals changes in the axisymmetric flow properties that reflect changes in flow properties with depth in the surface shear layer.

  3. Estimating 1970-99 average annual groundwater recharge in Wisconsin using streamflow data

    USGS Publications Warehouse

    Gebert, Warren A.; Walker, John F.; Kennedy, James L.

    2011-01-01

    Average annual recharge in Wisconsin for the period 1970-99 was estimated using streamflow data from U.S. Geological Survey continuous-record streamflow-gaging stations and partial-record sites. Partial-record sites have discharge measurements collected during low-flow conditions. The average annual base flow of a stream divided by the drainage area is a good approximation of the recharge rate; therefore, once average annual base flow is determined recharge can be calculated. Estimates of recharge for nearly 72 percent of the surface area of the State are provided. The results illustrate substantial spatial variability of recharge across the State, ranging from less than 1 inch to more than 12 inches per year. The average basin size for partial-record sites (50 square miles) was less than the average basin size for the gaging stations (305 square miles). Including results for smaller basins reveals a spatial variability that otherwise would be smoothed out using only estimates for larger basins. An error analysis indicates that the techniques used provide base flow estimates with standard errors ranging from 5.4 to 14 percent.
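
    The unit conversion behind the recharge estimate described above is simple arithmetic; the sketch below converts an average annual base flow per unit drainage area into a recharge depth in inches per year, using hypothetical values.

```python
# Minimal sketch: recharge depth from base flow divided by drainage area,
# with unit conversion from (cfs, mi^2) to inches per year. Values are made up.
SECONDS_PER_YEAR = 365.25 * 86400.0
FT2_PER_MI2 = 5280.0 ** 2

def recharge_inches_per_year(baseflow_cfs, drainage_area_mi2):
    depth_ft = baseflow_cfs * SECONDS_PER_YEAR / (drainage_area_mi2 * FT2_PER_MI2)
    return depth_ft * 12.0

print(recharge_inches_per_year(30.0, 50.0))   # ~8.1 inches per year
```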

  4. Effects of Time-Dependent Inflow Perturbations on Turbulent Flow in a Street Canyon

    NASA Astrophysics Data System (ADS)

    Duan, G.; Ngan, K.

    2017-12-01

    Urban flow and turbulence are driven by atmospheric flows with larger horizontal scales. Since building-resolving computational fluid dynamics models typically employ steady Dirichlet boundary conditions or forcing, the accuracy of numerical simulations may be limited by the neglect of perturbations. We investigate the sensitivity of flow within a unit-aspect-ratio street canyon to time-dependent perturbations near the inflow boundary. Using large-eddy simulation, time-periodic perturbations to the streamwise velocity component are incorporated via the nudging technique. Spatial averages of pointwise differences between unperturbed and perturbed velocity fields (i.e., the error kinetic energy) show a clear dependence on the perturbation period, though spatial structures are largely insensitive to the time-dependent forcing. The response of the error kinetic energy is maximized for perturbation periods comparable to the time scale of the mean canyon circulation. Frequency spectra indicate that this behaviour arises from a resonance between the inflow forcing and the mean motion around closed streamlines. The robustness of the results is confirmed using perturbations derived from measurements of roof-level wind speed.
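
    A minimal sketch of the error-kinetic-energy diagnostic referred to above: the spatial average of half the squared pointwise difference between perturbed and unperturbed velocity fields. The array shapes and values are assumptions for illustration.

```python
# Illustrative sketch: spatially averaged error kinetic energy between two
# velocity fields stored with the three components in the last axis.
import numpy as np

def error_kinetic_energy(u_pert, u_ref):
    diff = u_pert - u_ref
    return 0.5 * np.mean(np.sum(diff ** 2, axis=-1))

u_ref  = np.zeros((16, 16, 16, 3))
u_pert = u_ref + np.array([0.1, 0.0, 0.0])    # uniform 0.1 m/s perturbation
print(error_kinetic_energy(u_pert, u_ref))    # 0.005 m^2/s^2
```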

  5. 40 CFR 1065.602 - Statistics.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING... number of degrees of freedom, ν, as follows, noting that the εi are the errors (e.g., differences... measured continuously from the raw exhaust of an engine, its flow-weighted mean concentration is the sum of...

  6. 40 CFR 1065.602 - Statistics.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... Protection of Environment ENVIRONMENTAL PROTECTION AGENCY (CONTINUED) AIR POLLUTION CONTROLS ENGINE-TESTING... number of degrees of freedom, ν, as follows, noting that the εi are the errors (e.g., differences... measured continuously from the raw exhaust of an engine, its flow-weighted mean concentration is the sum of...

  7. Numerical performance analysis of acoustic Doppler velocity profilers in the wake of an axial-flow marine hydrokinetic turbine

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richmond, Marshall C.; Harding, Samuel F.; Romero Gomez, Pedro DJ

    The use of acoustic Doppler current profilers (ADCPs) for the characterization of flow conditions in the vicinity of both experimental and full scale marine hydrokinetic (MHK) turbines is becoming increasingly prevalent. The computation of a three dimensional velocity measurement from divergent acoustic beams requires the assumption that the flow conditions are homogeneous between all beams at a particular axial distance from the instrument. In the near wake of MHK devices, the mean fluid motion is observed to be highly spatially dependent as a result of torque generation and energy extraction. This paper examines the performance of ADCP measurements in such scenarios through the modelling of a virtual ADCP (VADCP) instrument in the velocity field in the wake of an MHK turbine resolved using unsteady computational fluid dynamics (CFD). This is achieved by sampling the CFD velocity field at equivalent locations to the sample bins of an ADCP and performing the coordinate transformation from beam coordinates to instrument coordinates and finally to global coordinates. The error in the mean velocity calculated by the VADCP relative to the reference velocity along the instrument axis is calculated for a range of instrument locations and orientations. The stream-wise velocity deficit and tangential swirl velocity caused by the rotor rotation lead to significant misrepresentation of the true flow velocity profiles by the VADCP, with the most significant errors in the transverse (cross-flow) velocity direction.
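
    For readers unfamiliar with the beam-to-instrument transformation mentioned above, the sketch below shows an idealized version for a four-beam Janus configuration; sign conventions and beam numbering vary between instruments, so this is illustrative only and is not necessarily the configuration used in the study.

```python
# Hedged sketch, assuming an idealized 4-beam Janus ADCP with beam angle
# theta from the instrument axis and flow homogeneity across the beams.
import numpy as np

def beam_to_instrument(b, theta_deg=20.0):
    """b: along-beam velocities (b1..b4). Returns (u, v, w) in instrument axes."""
    th = np.radians(theta_deg)
    b1, b2, b3, b4 = b
    u = (b1 - b2) / (2.0 * np.sin(th))
    v = (b3 - b4) / (2.0 * np.sin(th))
    w = (b1 + b2 + b3 + b4) / (4.0 * np.cos(th))
    return u, v, w

# Example with hypothetical along-beam velocities (m/s)
print(beam_to_instrument((0.5, -0.5, 0.2, -0.2)))
```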

  8. Simulation of flow and water quality of the Arroyo Colorado, Texas, 1989-99

    USGS Publications Warehouse

    Raines, Timothy H.; Miranda, Roger M.

    2002-01-01

    A model parameter set for use with the Hydrological Simulation Program—FORTRAN watershed model was developed to simulate flow and water quality for selected properties and constituents for the Arroyo Colorado from the city of Mission to the Laguna Madre, Texas. The model simulates flow, selected water-quality properties, and constituent concentrations. The model can be used to estimate a total maximum daily load for selected properties and constituents in the Arroyo Colorado. The model was calibrated and tested for flow with data measured during 1989–99 at three streamflow-gaging stations. The errors for total flow volume ranged from -0.1 to 29.0 percent, and the errors for total storm volume ranged from -15.6 to 8.4 percent. The model was calibrated and tested for water quality for seven properties and constituents with 1989–99 data. The model was calibrated sequentially for suspended sediment, water temperature, biochemical oxygen demand, dissolved oxygen, nitrate nitrogen, ammonia nitrogen, and orthophosphate. The simulated concentrations of the selected properties and constituents generally matched the measured concentrations available for the calibration and testing periods. The model was used to simulate total point- and nonpoint-source loads for selected properties and constituents for 1989–99 for urban, natural, and agricultural land-use types. About one-third to one-half of the biochemical oxygen demand and nutrient loads are from urban point and nonpoint sources, although only 13 percent of the total land use in the basin is urban.

  9. A comparative experimental evaluation of uncertainty estimation methods for two-component PIV

    NASA Astrophysics Data System (ADS)

    Boomsma, Aaron; Bhattacharya, Sayantan; Troolin, Dan; Pothos, Stamatios; Vlachos, Pavlos

    2016-09-01

    Uncertainty quantification in planar particle image velocimetry (PIV) measurement is critical for proper assessment of the quality and significance of reported results. New uncertainty estimation methods have been recently introduced, generating interest about their applicability and utility. The present study compares and contrasts current methods, across two separate experiments and three software packages in order to provide a diversified assessment of the methods. We evaluated the performance of four uncertainty estimation methods: primary peak ratio (PPR), mutual information (MI), image matching (IM) and correlation statistics (CS). The PPR method was implemented and tested in two processing codes, using in-house open source PIV processing software (PRANA, Purdue University) and Insight4G (TSI, Inc.). The MI method was evaluated in PRANA, as was the IM method. The CS method was evaluated using DaVis (LaVision, GmbH). Utilizing two PIV systems for high and low-resolution measurements and a laser Doppler velocimetry (LDV) system, data were acquired in a total of three cases: a jet flow and a cylinder in cross flow at two Reynolds numbers. LDV measurements were used to establish a point validation against which the high-resolution PIV measurements were validated. Subsequently, the high-resolution PIV measurements were used as a reference against which the low-resolution PIV data were assessed for error and uncertainty. We compared error and uncertainty distributions, spatially varying RMS error and RMS uncertainty, and standard uncertainty coverages. We observed that qualitatively, each method responded to spatially varying error (i.e. higher error regions resulted in higher uncertainty predictions in that region). However, the PPR and MI methods demonstrated reduced uncertainty dynamic range response. In contrast, the IM and CS methods showed better response, but under-predicted the uncertainty ranges. The standard coverages (68% confidence interval) ranged from approximately 65%-77% for PPR and MI methods, 40%-50% for IM and near 50% for CS. These observations illustrate some of the strengths and weaknesses of the methods considered herein and identify future directions for development and improvement.
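
    As a minimal illustration of the "standard coverage" metric used above, the sketch below computes the fraction of vectors whose error magnitude falls within the predicted one-sigma uncertainty; the synthetic data are assumptions.

```python
# Illustrative sketch: standard coverage = fraction of samples whose true
# error magnitude is bounded by the predicted (1-sigma) uncertainty.
import numpy as np

def standard_coverage(measured, reference, predicted_uncertainty):
    error = np.abs(measured - reference)
    return float(np.mean(error <= predicted_uncertainty))

# Synthetic check: Gaussian errors with unit sigma, unit predicted uncertainty
rng = np.random.default_rng(1)
meas = rng.normal(0.0, 1.0, 10000)
print(standard_coverage(meas, np.zeros_like(meas), np.ones_like(meas)))  # ~0.68
```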

  10. Disturbance torque rejection properties of the NASA/JPL 70-meter antenna axis servos

    NASA Technical Reports Server (NTRS)

    Hill, R. E.

    1989-01-01

    Analytic methods for evaluating pointing errors caused by external disturbance torques are developed and applied to determine the effects of representative values of wind and friction torque. The expressions relating pointing errors to disturbance torques are shown to be strongly dependent upon the state estimator parameters, as well as upon the state feedback gain and the flow versus pressure characteristics of the hydraulic system. Under certain conditions, when control is derived from an uncorrected estimate of integral position error, the desired type 2 servo properties are not realized and finite steady-state position errors result. Methods for reducing these errors to negligible proportions through the proper selection of control gain and estimator correction parameters are demonstrated. The steady-state error produced by a disturbance torque is found to be directly proportional to the hydraulic internal leakage. This property can be exploited to provide a convenient method of determining system leakage from field measurements of estimator error, axis rate, and hydraulic differential pressure.

  11. Simulation of ground-water flow in glaciofluvial aquifers in the Grand Rapids area, Minnesota

    USGS Publications Warehouse

    Jones, Perry M.

    2004-01-01

    A calibrated steady-state, finite-difference, ground-water-flow model was constructed to simulate ground-water flow in three glaciofluvial aquifers, defined in this report as the upper, middle, and lower aquifers, in an area of about 114 mi2 surrounding the city of Grand Rapids in north-central Minnesota. The calibrated model will be used by the Minnesota Department of Health and communities in the Grand Rapids area in the development of wellhead protection plans for their water supplies. The model was calibrated through comparison of simulated ground-water levels to measured static water levels in 351 wells, and comparison of simulated base-flow rates to estimated base-flow rates for reaches of the Mississippi and Prairie Rivers. Model statistics indicate that the model tends to overestimate ground-water levels. The root mean square errors ranged from +12.83 ft in wells completed in the upper aquifer to +19.10 ft in wells completed in the middle aquifer. Mean absolute differences between simulated and measured water levels ranged from +4.43 ft for wells completed in the upper aquifer to +9.25 ft for wells completed in the middle aquifer. Mean algebraic differences ranged from +9.35 ft for wells completed in the upper aquifer to +14.44 ft for wells completed in the middle aquifer, with the positive differences indicating that the simulated water levels were higher than the measured water levels. Percentage errors between simulated and estimated base-flow rates for the three monitored reaches all were less than 10 percent, indicating good agreement. Simulated ground-water levels were most sensitive to changes in general-head boundary conductance, indicating that this characteristic is the predominant model input variable controlling steady-state water-level conditions. Simulated ground-water flow to stream reaches was most sensitive to changes in horizontal hydraulic conductivity, indicating that this characteristic is the predominant model input variable controlling steady-state flow conditions.
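
    The calibration statistics quoted above (root mean square error, mean absolute difference, mean algebraic difference) reduce to simple residual summaries; a minimal sketch with hypothetical water levels follows.

```python
# Illustrative sketch: residual summaries between simulated and measured
# water levels. A positive mean algebraic difference means the simulated
# levels are, on average, higher than the measured levels.
import numpy as np

def calibration_stats(simulated, measured):
    resid = np.asarray(simulated) - np.asarray(measured)
    return {
        "rmse": float(np.sqrt(np.mean(resid ** 2))),
        "mean_absolute": float(np.mean(np.abs(resid))),
        "mean_algebraic": float(np.mean(resid)),
    }

print(calibration_stats([101.2, 98.7, 103.4], [100.0, 99.5, 101.0]))
```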

  12. Evaluation of Flow Biosensor Technology in a Chronically-Instrumented Non-Human Primate Model

    NASA Technical Reports Server (NTRS)

    Koenig, S. C.; Reister, C.; Schaub, J.; Muniz, G.; Ferguson, T.; Fanton, J. W.

    1995-01-01

    The Physiology Research Branch of Brooks AFB conducts both human and non-human primate experiments to determine the effects of microgravity and hypergravity on the cardiovascular system and to identify the particular mechanisms that invoke these responses. Primary investigative research efforts in a non-human primate model require the calculation of total peripheral resistance (TPR), systemic arterial compliance (SAC), and pressure-volume loop characteristics. These calculations require beat-to-beat measurement of aortic flow. We have evaluated commercially available electromagnetic (EMF) and transit-time flow measurement techniques. In vivo and in vitro experiments demonstrated that the average error of these techniques is less than 25 percent for EMF and less than 10 percent for transit-time.

  13. Inference of effective river properties from remotely sensed observations of water surface

    NASA Astrophysics Data System (ADS)

    Garambois, Pierre-André; Monnier, Jérôme

    2015-05-01

    The future SWOT mission (Surface Water and Ocean Topography) will provide cartographic measurements of inland water surfaces (elevation, widths and slope) at an unprecedented spatial and temporal resolution. Given synthetic SWOT-like data, forward flow models of hierarchical complexity are revisited and a few inverse formulations are derived and assessed for retrieving the river low flow bathymetry, roughness and discharge (A0, K, Q). The concept of an effective low flow bathymetry A0 (the real one being never observed) and roughness K, hence an effective river dynamics description, is introduced. The few inverse models elaborated for inferring (A0, K, Q) are analyzed in two contexts: (1) only remotely sensed observations of the water surface (surface elevation, width and slope) are available; (2) one additional water depth measurement (or estimate) is available. The inverse models elaborated are independent of data acquisition dynamics; they are assessed on 91 synthetic test cases sampling a wide range of steady-state river flows (the Froude number varying between 0.05 and 0.5 for 1 km reaches) and in the case of a flood on the Garonne River (France) characterized by large spatio-temporal variabilities. It is demonstrated that the most complete shallow-water-like model allowing separation of the roughness and bathymetry terms is the so-called low Froude model. In Case (1), the resulting RMSE on inferred discharges is on the order of 15% for first guess errors larger than 50%. An important feature of the present inverse methods is the fairly good accuracy of the discharge Q obtained, while the identified roughness coefficient K includes the measurement errors and the misfit of physics between the real flow and the hypothesis on which the inverse models rely; the latter neglecting the unobserved temporal variations of the flow and the inertia effects. A compensation phenomenon between the identified values of K and the unobserved bathymetry A0 is highlighted, while the present inverse models lead to an effective river dynamics model that is accurate in the range of the discharge variability observed. In Case (2), the effective bathymetry profile for 80 km of the Garonne River is retrieved with 1% relative error only. Next, accurate effective topography-friction pairs and also discharge can be inferred. Finally, defining river reaches from the observation grid tends to average the river properties in each reach, hence tends to smooth the hydraulic variability.

  14. Ultrasonic flow measurements for irrigation process monitoring

    NASA Astrophysics Data System (ADS)

    Ziani, Elmostafa; Bennouna, Mustapha; Boissier, Raymond

    2004-02-01

    This paper presents the state of the art of the general principle of liquid flow measurement by ultrasonic methods, together with the associated measurement problems. We present an ultrasonic flowmeter designed according to the smart-sensor concept, for the measurement of irrigation water flowing through pipelines or open channels, using the ultrasonic transit-time approach. The new flowmeter works on the principle of measuring time-delay differences between sound pulses transmitted upstream and downstream in the flowing liquid. The speed of sound in the flowing medium is eliminated as a variable because the flowrate calculations are based on the reciprocals of the transmission times. The transit-time difference is digitally measured by means of suitable, microprocessor-controlled logic. This type of ultrasonic flowmeter will be widely used in industry and water management; it is studied in detail in this work, and some experimental results are presented. For pressurized channels, we use one pair of ultrasonic transducers arranged in proper positions and directions on the pipe; in this case, to determine the liquid velocity, a real-time on-line analysis taking into account the geometry of the hydraulic system is applied to the obtained ultrasonic data. In open channels, we use one or two pairs of ultrasonic emitter-receivers, according to the desired performance. Finally, the goals of this work are, on the one hand, to integrate the smart sensor into irrigation system monitoring in order to evaluate its potential advantages and demonstrate its performance, and on the other hand, to understand and use the ultrasonic approach for determining flow characteristics and improving flow measurements by reducing errors caused by disturbances of the flow profiles.
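
    A minimal sketch of the transit-time principle described above: because the calculation uses the reciprocals of the upstream and downstream transit times, the speed of sound cancels. The path length, angle, and the synthetic check values are hypothetical.

```python
# Illustrative sketch of the transit-time relation:
#   v = L / (2*cos(theta)) * (1/t_down - 1/t_up), independent of sound speed.
import numpy as np

def transit_time_velocity(t_down, t_up, path_length_m, path_angle_deg):
    cos_th = np.cos(np.radians(path_angle_deg))
    return path_length_m / (2.0 * cos_th) * (1.0 / t_down - 1.0 / t_up)

# Synthetic check: c = 1480 m/s, v = 1.2 m/s, L = 0.2 m, 45 degree path
c, v, L, th = 1480.0, 1.2, 0.2, np.radians(45.0)
t_dn = L / (c + v * np.cos(th))
t_up = L / (c - v * np.cos(th))
print(transit_time_velocity(t_dn, t_up, L, 45.0))   # ~1.2 m/s
```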

  15. Support of gas flowmeter upgrade

    NASA Technical Reports Server (NTRS)

    Waugaman, Dennis

    1996-01-01

    A project history review, literature review, and vendor search were conducted to identify a flowmeter that would improve the accuracy of gaseous flow measurements in the White Sands Test Facility (WSTF) Calibration Laboratory and the Hydrogen High Flow Facility. Both facilities currently use sonic flow nozzles to measure flowrates. The flow nozzle pressure drops combined with corresponding pressure and temperature measurements have been estimated to produce uncertainties in flowrate measurements of 2 to 5 percent. This study investigated the state of flowmeter technology to make recommendations that would reduce those uncertainties. Most flowmeters measure velocity and volume, therefore mass flow measurement must be calculated based on additional pressures and temperature measurement which contribute to the error. The two exceptions are thermal dispersion meters and Coriolis mass flowmeters. The thermal dispersion meters are accurate to 1 to 5 percent. The Coriolis meters are significantly more accurate, at least for liquids. For gases, there is evidence they may be accurate to within 0.5 percent or better of the flowrate, but there may be limitations due to inappropriate velocity, pressure, Mach number and vibration disturbances. In this report, a comparison of flowmeters is presented. Candidate Coriolis meters and a methodology to qualify the meter with tests both at WSTF and Southwest Research Institute are recommended and outlined.

  16. Numerical Optimization Strategy for Determining 3D Flow Fields in Microfluidics

    NASA Astrophysics Data System (ADS)

    Eden, Alex; Sigurdson, Marin; Mezic, Igor; Meinhart, Carl

    2015-11-01

    We present a hybrid experimental-numerical method for generating 3D flow fields from 2D PIV experimental data. An optimization algorithm is applied to a theory-based simulation of an alternating current electrothermal (ACET) micromixer in conjunction with 2D PIV data to generate an improved representation of 3D steady state flow conditions. These results can be used to investigate mixing phenomena. Experimental conditions were simulated using COMSOL Multiphysics to solve the temperature and velocity fields, as well as the quasi-static electric fields. The governing equations were based on a theoretical model for ac electrothermal flows. A Nelder-Mead optimization algorithm was used to achieve a better fit by minimizing the error between 2D PIV experimental velocity data and numerical simulation results at the measurement plane. By applying this hybrid method, the normalized RMS velocity error between the simulation and experimental results was reduced by more than an order of magnitude. The optimization algorithm altered 3D fluid circulation patterns considerably, providing a more accurate representation of the 3D experimental flow field. This method can be generalized to a wide variety of flow problems. This research was supported by the Institute for Collaborative Biotechnologies through grant W911NF-09-0001 from the U.S. Army Research Office.
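
    A hedged sketch of the optimization loop described above: a Nelder-Mead search minimizing the RMS difference between a simulated and a "measured" velocity field at the PIV plane. The toy forward model below merely stands in for the COMSOL simulation and is purely illustrative.

```python
# Illustrative sketch: Nelder-Mead minimization of the RMS velocity error
# between a toy forward model and a synthetic measured profile.
import numpy as np
from scipy.optimize import minimize

x = np.linspace(0.0, 1.0, 50)                  # positions on the PIV plane
v_meas = 2.5 * x * (1.0 - x) + 0.1             # synthetic "PIV" profile

def forward_model(params):
    a, b = params
    return a * x * (1.0 - x) + b               # toy stand-in for the CFD model

def rms_error(params):
    return np.sqrt(np.mean((forward_model(params) - v_meas) ** 2))

res = minimize(rms_error, x0=[1.0, 0.0], method="Nelder-Mead")
print(res.x)                                   # ~[2.5, 0.1]
```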

  17. The Development of Point Doppler Velocimeter Data Acquisition and Processing Software

    NASA Technical Reports Server (NTRS)

    Cavone, Angelo A.

    2008-01-01

    In order to develop efficient and quiet aircraft and validate Computational Fluid Dynamics predictions, aerodynamic researchers require flow parameter measurements to characterize flow fields about wind tunnel models and jet flows. A one-component Point Doppler Velocimeter (pDv), a non-intrusive, laser-based instrument, was constructed using a design/develop/test/validate/deploy approach. A primary component of the instrument is the software required for system control/management and data collection/reduction. This software, along with evaluation algorithms, advanced pDv from a laboratory curiosity to a production-level instrument. Simultaneous pDv and pitot-probe velocity measurements obtained at the centerline of a flow exiting a two-inch jet matched within 0.4%. Flow turbulence spectra obtained with pDv and a hot-wire detected, with equal dynamic range, the primary and secondary harmonics produced by the fan driving the flow. Novel hardware and software methods were developed, tested and incorporated into the system to eliminate and/or minimize error sources and improve system reliability.

  18. Deduction of two-dimensional blood flow vector by dual angle diverging waves from a cardiac sector probe

    NASA Astrophysics Data System (ADS)

    Maeda, Moe; Nagaoka, Ryo; Ikeda, Hayato; Yaegashi, So; Saijo, Yoshifumi

    2018-07-01

    Color Doppler method is widely used for noninvasive diagnosis of heart diseases. However, the method can measure one-dimensional (1D) blood flow velocity only along an ultrasonic beam. In this study, diverging waves with two different angles were irradiated from a cardiac sector probe to estimate a two-dimensional (2D) blood flow vector from each velocity measured with the angles. The feasibility of the proposed method was evaluated in experiments using flow poly(vinyl alcohol) (PVA) gel phantoms. The 2D velocity vectors obtained with the proposed method were compared with the flow vectors obtained with the particle image velocimetry (PIV) method. Root mean square errors of the axial and lateral components were 11.3 and 29.5 mm/s, respectively. The proposed method was also applied to echo data from the left ventricle of the heart. The inflow from the mitral valve in diastole and the ejection flow concentrating in the aorta in systole were visualized.
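
    As an illustrative sketch (not the authors' implementation), the snippet below recovers a two-dimensional velocity vector from two Doppler velocities measured along beams steered at different angles, by solving a 2x2 linear system; the angles and velocities are hypothetical.

```python
# Illustrative sketch: solve v_beam_i = vx*sin(theta_i) + vz*cos(theta_i)
# for the lateral (vx) and axial (vz) velocity components.
import numpy as np

def velocity_from_two_angles(v_beam1, v_beam2, angle1_deg, angle2_deg):
    t1, t2 = np.radians([angle1_deg, angle2_deg])
    A = np.array([[np.sin(t1), np.cos(t1)],
                  [np.sin(t2), np.cos(t2)]])
    return np.linalg.solve(A, np.array([v_beam1, v_beam2]))

# Synthetic check: vx = 0.3 m/s, vz = -0.1 m/s observed at +/-10 degrees
vx, vz = 0.3, -0.1
b1 = vx * np.sin(np.radians(+10)) + vz * np.cos(np.radians(+10))
b2 = vx * np.sin(np.radians(-10)) + vz * np.cos(np.radians(-10))
print(velocity_from_two_angles(b1, b2, +10, -10))   # ~[0.3, -0.1]
```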

  19. Methods for estimating selected low-flow frequency statistics for unregulated streams in Kentucky

    USGS Publications Warehouse

    Martin, Gary R.; Arihood, Leslie D.

    2010-01-01

    This report provides estimates of, and presents methods for estimating, selected low-flow frequency statistics for unregulated streams in Kentucky including the 30-day mean low flows for recurrence intervals of 2 and 5 years (30Q2 and 30Q5) and the 7-day mean low flows for recurrence intervals of 5, 10, and 20 years (7Q2, 7Q10, and 7Q20). Estimates of these statistics are provided for 121 U.S. Geological Survey streamflow-gaging stations with data through the 2006 climate year, which is the 12-month period ending March 31 of each year. Data were screened to identify the periods of homogeneous, unregulated flows for use in the analyses. Logistic-regression equations are presented for estimating the annual probability of the selected low-flow frequency statistics being equal to zero. Weighted-least-squares regression equations were developed for estimating the magnitude of the nonzero 30Q2, 30Q5, 7Q2, 7Q10, and 7Q20 low flows. Three low-flow regions were defined for estimating the 7-day low-flow frequency statistics. The explicit explanatory variables in the regression equations include total drainage area and the mapped streamflow-variability index measured from a revised statewide coverage of this characteristic. The percentage of the station low-flow statistics correctly classified as zero or nonzero by use of the logistic-regression equations ranged from 87.5 to 93.8 percent. The average standard errors of prediction of the weighted-least-squares regression equations ranged from 108 to 226 percent. The 30Q2 regression equations have the smallest standard errors of prediction, and the 7Q20 regression equations have the largest standard errors of prediction. The regression equations are applicable only to stream sites with low flows unaffected by regulation from reservoirs and local diversions of flow and to drainage basins in specified ranges of basin characteristics. Caution is advised when applying the equations for basins with characteristics near the applicable limits and for basins with karst drainage features.

  20. Hydrodynamic boundary condition of water on hydrophobic surfaces.

    PubMed

    Schaeffel, David; Yordanov, Stoyan; Schmelzeisen, Marcus; Yamamoto, Tetsuya; Kappl, Michael; Schmitz, Roman; Dünweg, Burkhard; Butt, Hans-Jürgen; Koynov, Kaloian

    2013-05-01

    By combining total internal reflection fluorescence cross-correlation spectroscopy with Brownian dynamics simulations, we were able to measure the hydrodynamic boundary condition of water flowing over a smooth solid surface with exceptional accuracy. We analyzed the flow of aqueous electrolytes over glass coated with a layer of poly(dimethylsiloxane) (advancing contact angle Θ = 108°) or perfluorosilane (Θ = 113°). Within an error of better than 10 nm the slip length was indistinguishable from zero on all surfaces.

  1. Simulation of the ground-water flow system at Naval Submarine Base Bangor and vicinity, Kitsap County, Washington

    USGS Publications Warehouse

    Heeswijk, Marijke van; Smith, Daniel T.

    2002-01-01

    An evaluation of the interaction between ground-water flow on Naval Submarine Base Bangor and the regional-flow system shows that for selected alternatives of future ground-water pumping on and near the base, the risk is low that significant concentrations of on-base ground-water contamination will reach off-base public-supply wells and hypothetical wells southwest of the base. The risk is low even if worst-case conditions are considered ? no containment and remediation of on-base contamination. The evaluation also shows that future saltwater encroachment of aquifers below sea level may be possible, but this determination has considerable uncertainty associated with it. The potential effects on the ground-water flow system resulting from four hypothetical ground-water pumping alternatives were considered, including no change in 1995 pumping rates, doubling the rates, and 2020 rates estimated from population projections with two different pumping distributions. All but a continuation of 1995 pumping rates demonstrate the possibility of future saltwater encroachment in the Sea-level aquifer on Naval Submarine Base Bangor. The amount of time it would take for encroachment to occur is unknown. For all pumping alternatives, future saltwater encroachment in the Sea-level aquifer also may be possible along Puget Sound east and southeast of the base. Future saltwater encroachment in the Deep aquifer also may be possible throughout large parts of the study area. Projections of saltwater encroachment are least certain outside the boundaries of Naval Submarine Base Bangor. The potential effects of the ground-water pumping alternatives were evaluated by simulating the ground-water flow system with a three-dimensional uniform-density ground-water flow model. The model was calibrated by trial-and-error by minimizing differences between simulated and measured or estimated variables. These included water levels from prior to January 17, 1977 (termed 'predevelopment'), water-level drawdowns since predevelopment until April 15, 1995, ground-water discharge to streams in water year 1995, and residence times of ground water in different parts of the flow system that were estimated in a separate but related study. Large amounts of ground water were pumped from 1977 through 1980 from the Sea-level aquifer on Naval Submarine Base Bangor to enable the construction of an off-shore drydock. Records of the flow-system responses to the applied stresses were used to help calibrate the model. Errors in the calibrated model were significant. The poor agreement between simulated and measured values could be improved by making many local changes to hydraulic parameters but these changes were not supported by other data. Model errors may have resulted in errors in the simulated effects of ground-water pumping alternatives.

  2. Mesh refinement and numerical sensitivity analysis for parameter calibration of partial differential equations

    NASA Astrophysics Data System (ADS)

    Becker, Roland; Vexler, Boris

    2005-06-01

    We consider the calibration of parameters in physical models described by partial differential equations. This task is formulated as a constrained optimization problem with a cost functional of least squares type using information obtained from measurements. An important issue in the numerical solution of this type of problem is the control of the errors introduced, first, by discretization of the equations describing the physical model, and second, by measurement errors or other perturbations. Our strategy is as follows: we suppose that the user defines an interest functional I, which might depend on both the state variable and the parameters and which represents the goal of the computation. First, we propose an a posteriori error estimator which measures the error with respect to this functional. This error estimator is used in an adaptive algorithm to construct economic meshes by local mesh refinement. The proposed estimator requires the solution of an auxiliary linear equation. Second, we address the question of sensitivity. Applying similar techniques as before, we derive quantities which describe the influence of small changes in the measurements on the value of the interest functional. These numbers, which we call relative condition numbers, give additional information on the problem under consideration. They can be computed by means of the solution of the auxiliary problem determined before. Finally, we demonstrate our approach on a parameter calibration problem for a model flow problem.

  3. Measuring Viscosities of Gases at Atmospheric Pressure

    NASA Technical Reports Server (NTRS)

    Singh, Jag J.; Mall, Gerald H.; Hoshang, Chegini

    1987-01-01

    Variant of general capillary method for measuring viscosities of unknown gases based on use of thermal mass-flowmeter section for direct measurement of pressure drops. In technique, flowmeter serves dual role, providing data for determining volume flow rates and serving as well-characterized capillary-tube section for measurement of differential pressures across it. New method simple, sensitive, and adaptable for absolute or relative viscosity measurements of low-pressure gases. Suited for very complex hydrocarbon mixtures where limitations of classical theory and compositional errors make theoretical calculations less reliable.
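
    A minimal sketch of the capillary principle behind the method, assuming laminar flow so that the Hagen-Poiseuille relation applies: the ratio of pressure drop to volume flow rate for the unknown gas, compared with the same ratio for a reference gas, gives the relative viscosity. All dimensions and readings are hypothetical.

```python
# Illustrative sketch: absolute viscosity from Hagen-Poiseuille, and relative
# viscosity from the (dP/Q) ratio of an unknown gas to a reference gas.
import math

def viscosity_hagen_poiseuille(dp_pa, q_m3s, radius_m, length_m):
    """Dynamic viscosity (Pa.s) for laminar flow in a straight capillary."""
    return math.pi * radius_m ** 4 * dp_pa / (8.0 * length_m * q_m3s)

def relative_viscosity(dp_unknown, q_unknown, dp_ref, q_ref):
    return (dp_unknown / q_unknown) / (dp_ref / q_ref)

# Example with hypothetical readings at equal flow rates
print(relative_viscosity(120.0, 2.0e-6, 100.0, 2.0e-6))   # 1.2
```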

  4. Low-flow traveltime, longitudinal-dispersion, and reaeration characteristics of the Souris River from Lake Darling Dam to J Clark Salyer National Wildlife Refuge, North Dakota

    USGS Publications Warehouse

    Wesolowski, E.A.; Nelson, R.A.

    1987-01-01

    As part of the Souris River water-quality assessment, traveltime, longitudinal-dispersion, and reaeration measurements were made during September 1983 on segments of the 186-mile reach of the Souris River from Lake Darling Dam to the J. Clark Salyer National Wildlife Refuge. The primary objective was to determine traveltime, longitudinal-dispersion, and reaeration coefficients during low flow. Streamflow in the reach ranged from 10.5 to 47.0 cubic feet per second during the measurement period. On the basis of channel and hydraulic characteristics, the 186-mile reach was subdivided into five subreaches that ranged from 18 to 55 river miles in length. Within each subreach, representative test reaches that ranged from 5.0 to 9.1 river miles in length were selected for tracer injection and sample collection. Standard fluorometric techniques were used to measure traveltime and longitudinal dispersion, and a modified tracer technique that used ethylene and propane gas was used to measure reaeration. Mean test-reach velocities ranged from 0.05 to 0.30 foot per second, longitudinal-dispersion coefficients ranged from 4.2 to 61 square feet per second, and reaeration coefficients based on propane ranged from 0.39 to 1.66 per day. Predictive reaeration coefficients obtained from 18 equations (8 semiempirical and 10 empirical) were compared with each measured reaeration coefficient by use of an error-of-estimate analysis. The predictive reaeration coefficients ranged from 0.0008 to 3.4 per day. A semiempirical equation that produced coefficients most similar to the measured coefficients had the smallest absolute error of estimate (0.35). The smallest absolute error of estimate for the empirical equations was 0.41.

  5. Insights into the use of time-lapse GPR data as observations for inverse multiphase flow simulations of DNAPL migration

    USGS Publications Warehouse

    Johnson, R.H.; Poeter, E.P.

    2007-01-01

    Perchloroethylene (PCE) saturations determined from GPR surveys were used as observations for inversion of multiphase flow simulations of a PCE injection experiment (Borden 9 m cell), allowing for the estimation of optimal bulk intrinsic permeability values. The resulting fit statistics and analysis of residuals (observed minus simulated PCE saturations) were used to improve the conceptual model. These improvements included adjustment of the elevation of a permeability contrast, use of the van Genuchten versus Brooks-Corey capillary pressure-saturation curve, and a weighting scheme to account for greater measurement error with larger saturation values. A limitation in determining PCE saturations through one-dimensional GPR modeling is non-uniqueness when multiple GPR parameters are unknown (i.e., permittivity, depth, and gain function). Site knowledge, fixing the gain function, and multiphase flow simulations assisted in evaluating non-unique conceptual models of PCE saturation, where depth and layering were reinterpreted to provide alternate conceptual models. Remaining bias in the residuals is attributed to the violation of assumptions in the one-dimensional GPR interpretation (which assumes flat, infinite, horizontal layering) resulting from multidimensional influences that were not included in the conceptual model. While the limitations and errors in using GPR data as observations for inverse multiphase flow simulations are frustrating and difficult to quantify, simulation results indicate that the error and bias in the PCE saturation values are small enough to still provide reasonable optimal permeability values. The effort to improve model fit and reduce residual bias decreases simulation error even for an inversion based on biased observations and provides insight into alternate GPR data interpretations. Thus, this effort is warranted and provides information on bias in the observation data when this bias is otherwise difficult to assess. © 2006 Elsevier B.V. All rights reserved.

  6. A Monte-Carlo Bayesian framework for urban rainfall error modelling

    NASA Astrophysics Data System (ADS)

    Ochoa Rodriguez, Susana; Wang, Li-Pen; Willems, Patrick; Onof, Christian

    2016-04-01

    Rainfall estimates of the highest possible accuracy and resolution are required for urban hydrological applications, given the small size and fast response which characterise urban catchments. While significant progress has been made in recent years towards meeting rainfall input requirements for urban hydrology -including increasing use of high spatial resolution radar rainfall estimates in combination with point rain gauge records- rainfall estimates will never be perfect and the true rainfall field is, by definition, unknown [1]. Quantifying the residual errors in rainfall estimates is crucial in order to understand their reliability, as well as the impact that their uncertainty may have in subsequent runoff estimates. The quantification of errors in rainfall estimates has been an active topic of research for decades. However, existing rainfall error models have several shortcomings, including the fact that they are limited to describing errors associated with a single data source (i.e. errors associated with rain gauge measurements or radar QPEs alone) and to a single representative error source (e.g. radar-rain gauge differences, spatial temporal resolution). Moreover, rainfall error models have been mostly developed for and tested at large scales. Studies at urban scales are mostly limited to analyses of propagation of errors in rain gauge records-only through urban drainage models and to tests of model sensitivity to uncertainty arising from unmeasured rainfall variability. Only a few radar rainfall error models -originally developed for large scales- have been tested at urban scales [2] and have been shown to fail to capture well the small-scale storm dynamics, including storm peaks, which are of utmost importance for urban runoff simulations. In this work a Monte-Carlo Bayesian framework for rainfall error modelling at urban scales is introduced, which explicitly accounts for relevant errors (arising from insufficient accuracy and/or resolution) in multiple data sources (in this case radar and rain gauge estimates typically available at present), while at the same time enabling dynamic combination of these data sources (thus not only quantifying uncertainty, but also reducing it). This model generates an ensemble of merged rainfall estimates, which can then be used as input to urban drainage models in order to examine how uncertainties in rainfall estimates propagate to urban runoff estimates. The proposed model is tested using as case study a detailed rainfall and flow dataset, and a carefully verified urban drainage model of a small (~9 km2) pilot catchment in North-East London. The model has been shown to characterise well the residual errors in rainfall data at urban scales (which remain after the merging), leading to improved runoff estimates. In fact, the majority of measured flow peaks are bounded within the uncertainty area produced by the runoff ensembles generated with the ensemble rainfall inputs. REFERENCES: [1] Ciach, G. J. & Krajewski, W. F. (1999). On the estimation of radar rainfall error variance. Advances in Water Resources, 22 (6), 585-595. [2] Rico-Ramirez, M. A., Liguori, S. & Schellart, A. N. A. (2015). Quantifying radar-rainfall uncertainties in urban drainage flow modelling. Journal of Hydrology, 528, 17-28.

  7. Space-Derived Sewer Monitor

    NASA Technical Reports Server (NTRS)

    1982-01-01

    The QuadraScan Longterm Flow Monitoring System is a second generation sewer monitor developed by American Digital Systems, Inc.'s founder Peter Petroff. Petroff, a former spacecraft instrumentation designer at Marshall Space Flight Center, used expertise based on principles acquired in Apollo and other NASA programs. QuadraScan borrows even more heavily from space technology, for example in its data acquisition and memory system derived from NASA satellites. "One-time" measurements are often plagued with substantial errors due to the flow of groundwater absorbed into the system. These system sizing errors stem from a basic informational deficiency: accurate, reliable data on how much water flows through a sewer system over a long period of time is very difficult to obtain. City officials are turning to "permanent," or long-term sewer monitoring systems. QuadraScan offers many advantages to city officials such as the early warning capability to effectively plan for city growth in order to avoid the crippling economic impact of bans on new sewer connections in effect in many cities today.

  8. Direct measurements of local bed shear stress in the presence of pressure gradients

    NASA Astrophysics Data System (ADS)

    Pujara, Nimish; Liu, Philip L.-F.

    2014-07-01

    This paper describes the development of a shear plate sensor capable of directly measuring the local mean bed shear stress in small-scale and large-scale laboratory flumes. The sensor is capable of measuring bed shear stress in the range 200 Pa with an accuracy up to 1 %. Its size, 43 mm in the flow direction, is designed to be small enough to give spatially local measurements, and its bandwidth, 75 Hz, is high enough to resolve time-varying forcing. Typically, shear plate sensors are restricted to use in zero pressure gradient flows because secondary forces on the edge of the shear plate caused by pressure gradients can introduce large errors. However, by analysis of the pressure distribution at the edges of the shear plate in mild pressure gradients, we introduce a new methodology for correcting for the pressure gradient force. The developed sensor includes pressure tappings to measure the pressure gradient in the flow, and the methodology for correction is applied to obtain accurate measurements of bed shear stress under solitary waves in a small-scale wave flume. The sensor is also validated by measurements in a turbulent flat plate boundary layer in open channel flow.
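
    As a rough illustration of the pressure-gradient correction described above, the sketch below subtracts a first-order estimate of the edge-pressure force (the streamwise pressure difference across the plate acting on the plate's edge area) from the measured force before dividing by the plate area. The geometry and numbers are illustrative, and the edge-pressure analysis in the paper is more detailed than this first-order estimate.

    ```python
    # First-order correction of a shear-plate reading for a mild pressure gradient.
    F_measured = 0.012              # total streamwise force on the plate [N]
    dpdx = -40.0                    # measured streamwise pressure gradient [Pa/m]
    L, w, t = 0.043, 0.040, 0.002   # plate length, width, edge thickness [m]

    F_pressure = -dpdx * L * (t * w)                 # net force from pressure on the edges
    tau_bed = (F_measured - F_pressure) / (L * w)    # corrected bed shear stress [Pa]
    print(round(tau_bed, 2))
    ```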

  9. A two-angle far-field microscope imaging technique for spray flows

    NASA Astrophysics Data System (ADS)

    Kourmatzis, Agisilaos; Pham, Phuong X.; Masri, Assaad R.

    2017-03-01

    Backlight imaging is frequently used for the visualization of multiphase flows, where with appropriate microscope lenses, quantitative information on the spray structure can be attained. However, a key issue resides in the nature of the measurement which relies on a single viewing angle, hence preventing imaging of all liquid structures and features, such as those located behind other fragments. This paper presents results from an extensive experimental study aimed as a step forward towards resolving this problem by using a pair of high speed cameras oriented at 90 degrees to each other, and synchronized to two high-speed diode lasers. Both cameras are used with long distance microscope lenses. The images are processed as pairs allowing for identification and classification of the same liquid structure from two perspectives at high temporal (5 kHz) and spatial resolution (∼3 μm). Using a controlled mono-disperse spray, simultaneous, time-resolved visualization of the same spherical object being focused on one plane while de-focused on the other plane 90 degrees to the first has allowed for a quantification of shot-to-shot defocused size measurement error. An extensive error analysis is performed for spheroidal structures imaged from two angles and the dual angle technique is extended to measure the volume of non-spherical fragments for the first time, by ‘discretising’ a fragment into a number of constituent ellipses. Error analysis is performed based on measuring the known volumes of solid arbitrary shapes, and volume estimates were found to be within  ∼11% of the real volume for representative ‘ligament-like’ shapes. The contribution concludes by applying the ellipsoidal method to a real spray consisting of multiple non-spherical fragments. This extended approach clearly demonstrates potential to yield novel volume weighted quantities of non-spherical objects in turbulent multiphase flow applications.
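
    A minimal sketch of the 'discretising into ellipses' volume estimate described above: the fragment is sliced along its axis, each slice is treated as an ellipse whose two diameters are the widths seen by the two orthogonal cameras, and the slice volumes are summed. The widths and slice thickness below are synthetic placeholders, not measured data.

    ```python
    import numpy as np

    dz = 3e-6                                  # slice thickness ~ pixel size [m]
    width_view1 = np.array([0.0, 20e-6, 35e-6, 40e-6, 30e-6, 10e-6, 0.0])  # camera 1
    width_view2 = np.array([0.0, 15e-6, 30e-6, 38e-6, 28e-6, 12e-6, 0.0])  # camera 2

    # Each slice: ellipse area = pi/4 * d1 * d2, with d1, d2 from the two views
    slice_areas = np.pi / 4.0 * width_view1 * width_view2
    volume = np.sum(slice_areas * dz)          # fragment volume [m^3]
    print(volume)
    ```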

  10. The temperature measurement research for high-speed flow based on tunable diode laser absorption spectroscopy

    NASA Astrophysics Data System (ADS)

    Di, Yue; Jin, Yi; Jiang, Hong-liang; Zhai, Chao

    2013-09-01

    Due to the particular nature of high-speed flow, a measurement system must not interfere with the flow, must be non-contact, and must offer high time resolution in order to obtain the flow temperature accurately. Traditional measurement methods cannot meet these requirements, whereas a measurement method based on tunable diode laser absorption spectroscopy (TDLAS) can meet the requirements for high-speed flow temperature measurement. When near-infrared light of a specific frequency passes through the medium to be measured, it is absorbed by water vapor molecules, and the transmitted light intensity is detected by a detector. The temperature of the water vapor, which is also the temperature of the high-speed flow, can then be obtained accurately from the Beer-Lambert law. This paper focuses on an absorption-spectroscopy method for high-speed flow temperature measurement over the range 250-500 K. Firstly, the spectral line selection method for low-temperature measurement of high-speed flow is discussed. Selected absorption lines should be isolated and have a high peak absorption within the range of 250-500 K, while interference from other lines should be avoided, so that high measurement accuracy can be obtained. According to the near-infrared absorption spectrum characteristics of water vapor, four absorption lines near 1395 nm and 1409 nm are selected. Secondly, a system for measuring the temperature of the water vapor in the high-speed flow is established. Room temperature was measured by two methods, direct absorption spectroscopy (DAS) and wavelength modulation spectroscopy (WMS); the results show that this system can realize on-line measurement of the temperature, with a measurement error of about 3%. Finally, the system will be used for temperature measurement of the high-speed flow in a shock tunnel, and the feasibility of the measurement is analyzed.
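
    For illustration, the sketch below shows a simplified two-line thermometry step of the kind used with TDLAS: the temperature is inverted from the ratio of the integrated absorbances of two water-vapour lines, neglecting the partition-function ratio and stimulated emission. The line parameters are hypothetical placeholders, not the lines actually selected in the paper.

    ```python
    import numpy as np

    C2 = 1.4388  # second radiation constant hc/k in cm*K

    def temperature_from_ratio(R, R0, dE, T0=296.0):
        """Invert R(T) = R0 * exp(-C2*dE*(1/T - 1/T0)) for T.

        R  : measured ratio of integrated absorbances of line 1 to line 2
        R0 : ratio of line strengths at the reference temperature T0 [K]
        dE : difference of lower-state energies E1'' - E2'' [cm^-1]
        """
        inv_T = 1.0 / T0 - np.log(R / R0) / (C2 * dE)
        return 1.0 / inv_T

    # Example with made-up numbers for two water-vapour lines near 1395/1409 nm
    R0, dE = 0.8, 700.0          # hypothetical reference ratio and E'' difference
    print(temperature_from_ratio(R=1.1, R0=R0, dE=dE))  # -> inferred temperature [K]
    ```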

  11. Error Estimate of the Ares I Vehicle Longitudinal Aerodynamic Characteristics Based on Turbulent Navier-Stokes Analysis

    NASA Technical Reports Server (NTRS)

    Abdol-Hamid, Khaled S.; Ghaffari, Farhad

    2011-01-01

    Numerical predictions of the longitudinal aerodynamic characteristics for the Ares I class of vehicles, along with the associated error estimate derived from an iterative convergence grid refinement, are presented. Computational results are based on the unstructured grid, Reynolds-averaged Navier-Stokes flow solver USM3D, with an assumption that the flow is fully turbulent over the entire vehicle. This effort was designed to complement the prior computational activities conducted over the past five years in support of the Ares I Project with the emphasis on the vehicle's last design cycle designated as the A106 configuration. Due to a lack of flight data for this particular design's outer mold line, the initial vehicle's aerodynamic predictions and the associated error estimates were first assessed and validated against the available experimental data at representative wind tunnel flow conditions pertinent to the ascent phase of the trajectory without including any propulsion effects. Subsequently, the established procedures were then applied to obtain the longitudinal aerodynamic predictions at the selected flight flow conditions. Sample computed results and the correlations with the experimental measurements are presented. In addition, the present analysis includes the relevant data to highlight the balance between the prediction accuracy against the grid size and, thus, the corresponding computer resource requirements for the computations at both wind tunnel and flight flow conditions. NOTE: Some details have been removed from selected plots and figures in compliance with the sensitive but unclassified (SBU) restrictions. However, the content still conveys the merits of the technical approach and the relevant results.

  12. Numerical prediction of a draft tube flow taking into account uncertain inlet conditions

    NASA Astrophysics Data System (ADS)

    Brugiere, O.; Balarac, G.; Corre, C.; Metais, O.; Flores, E.; Pleroy

    2012-11-01

    The swirling turbulent flow in a hydroturbine draft tube is computed with a non-intrusive uncertainty quantification (UQ) method coupled to Reynolds-Averaged Navier-Stokes (RANS) modelling in order to take into account in the numerical prediction the physical uncertainties existing on the inlet flow conditions. The proposed approach yields not only mean velocity fields to be compared with measured profiles, as is customary in Computational Fluid Dynamics (CFD) practice, but also variance of these quantities from which error bars can be deduced on the computed profiles, thus making more significant the comparison between experiment and computation.

  13. Post-processing of a low-flow forecasting system in the Thur basin (Switzerland)

    NASA Astrophysics Data System (ADS)

    Bogner, Konrad; Joerg-Hess, Stefanie; Bernhard, Luzi; Zappa, Massimiliano

    2015-04-01

    Low-flows and droughts are natural hazards with potentially severe impacts and economic loss or damage in a number of environmental and socio-economic sectors. As droughts develop slowly, there is time to prepare and pre-empt some of these impacts. Real-time information and forecasting of a drought situation can therefore be an effective component of drought management. Although Switzerland has traditionally been more concerned with problems related to floods, in recent years some unprecedented low-flow situations have been experienced. Driven by the climate change debate, a drought information platform has been developed to guide water resources management during situations where water resources drop below critical low-flow levels, characterised by the indices duration (time between onset and offset), severity (cumulative water deficit) and magnitude (severity/duration). However, to gain maximum benefit from such an information system it is essential to remove the bias from the meteorological forecast, to derive optimal estimates of the initial conditions, and to post-process the stream-flow forecasts. Quantile mapping methods for pre-processing the meteorological forecasts and improved data assimilation methods for snow measurements (snow accounts for much of the seasonal stream-flow predictability for the majority of the basins in Switzerland) have been tested previously. The objective of this study is the testing of post-processing methods in order to remove bias and dispersion errors and to derive the predictive uncertainty of a calibrated low-flow forecast system. Therefore, various stream-flow error correction methods with different degrees of complexity have been applied and combined with the Hydrological Uncertainty Processor (HUP) in order to minimise the differences between the observations and model predictions and to derive posterior probabilities. The complexity of the analysed error correction methods ranges from simple AR(1) models to methods including wavelet transformations and support vector machines. These methods have been combined with forecasts driven by Numerical Weather Prediction (NWP) systems with different temporal and spatial resolutions, lead-times and different numbers of ensembles covering short- to medium- to extended-range forecasts (COSMO-LEPS, 10-15 days, monthly and seasonal ENS) as well as climatological forecasts. Additionally, the suitability of various skill scores and efficiency measures regarding low-flow predictions will be tested. Amongst others, the novel 2AFC (two alternatives forced choices) score and the quantile skill score and its decompositions will be applied to evaluate the probabilistic forecasts and the effects of post-processing. First results of the performance of the low-flow predictions of the hydrological model PREVAH initialised with different NWPs will be shown.
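
    A minimal sketch of the simplest post-processor mentioned above, an AR(1) error correction: the lag-1 autocorrelation of past forecast errors is estimated and the most recent error is propagated forward to correct the coming lead times. Variable names and data are illustrative; the HUP, wavelet and support-vector variants are not shown.

    ```python
    import numpy as np

    def ar1_corrected_forecast(obs, sim, sim_future):
        """Correct a deterministic forecast using an AR(1) model of past errors.

        obs, sim   : arrays of past observed and simulated flows (same length)
        sim_future : array of raw model forecasts for the coming lead times
        """
        err = obs - sim                                  # historical forecast errors
        rho = np.corrcoef(err[:-1], err[1:])[0, 1]       # lag-1 autocorrelation
        last_err = err[-1]                               # most recent known error
        corrected = []
        for k, f in enumerate(sim_future, start=1):
            corrected.append(f + last_err * rho**k)      # propagate the error forward
        return np.array(corrected)

    # Usage with synthetic data
    obs = np.array([10.0, 9.5, 9.0, 8.7, 8.5])
    sim = np.array([11.0, 10.2, 9.8, 9.3, 9.0])
    print(ar1_corrected_forecast(obs, sim, sim_future=np.array([8.8, 8.6, 8.4])))
    ```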

  14. Measurement uncertainty budget of an interferometric flow velocity sensor

    NASA Astrophysics Data System (ADS)

    Bermuske, Mike; Büttner, Lars; Czarske, Jürgen

    2017-06-01

    Flow rate measurements are a common topic for process monitoring in chemical engineering and the food industry. To achieve the requested low uncertainties of 0.1% for flow rate measurements, a precise measurement of the shear layers of such flows is necessary. The Laser Doppler Velocimeter (LDV) is an established method for measuring local flow velocities. For an exact estimation of the flow rate, the flow profile in the shear layer is of importance. For standard LDV the axial resolution, and therefore the number of measurement points in the shear layer, is defined by the length of the measurement volume. A decrease of this length is accompanied by a larger fringe distance variation along the measurement axis, which results in a rise of the measurement uncertainty for the flow velocity (an uncertainty relation between spatial resolution and velocity uncertainty). As a unique advantage, the laser Doppler profile sensor (LDV-PS) overcomes this problem by using two fan-like fringe systems to obtain the position of the measured particles along the measurement axis and therefore achieves a high spatial resolution while still offering a low velocity uncertainty. With this technique, the flow rate can be estimated with one order of magnitude lower uncertainty, down to 0.05% statistical uncertainty, and flow profiles, especially in film flows, can be measured more accurately. The problem for this technique is that, in contrast to laboratory setups where the system is quite stable, for industrial applications the sensor needs reliable and robust traceability to the SI units meter and second. Small deviations in the calibration can, because of the highly position-dependent calibration function, cause large systematic errors in the measurement result. Therefore, a simple, stable and accurate tool is needed that can easily be used in industrial surroundings to check or recalibrate the sensor. In this work, different calibration methods are presented and their influence on the measurement uncertainty budget of the sensor is discussed. Finally, measurement results for the film flow of an impinging jet cleaning experiment are presented.
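
    A minimal sketch of the profile-sensor evaluation principle described above: the quotient of the two Doppler frequencies from the fan-like fringe systems is inverted through a calibration curve to give the particle position, and the velocity follows from the position-dependent fringe spacing. The linear calibration functions below are hypothetical; a real sensor uses measured calibration curves.

    ```python
    import numpy as np

    z_cal = np.linspace(0.0, 1.0, 200)            # normalised position in the volume
    d1 = 3.0e-6 * (1.0 + 0.2 * z_cal)             # fringe spacing of system 1 [m]
    d2 = 3.0e-6 * (1.2 - 0.2 * z_cal)             # fringe spacing of system 2 [m]
    q_cal = d2 / d1                               # calibration: quotient vs position

    def evaluate_burst(f1, f2):
        """Return (z, velocity) for one particle from its two Doppler frequencies."""
        q = f1 / f2
        z = np.interp(q, q_cal[::-1], z_cal[::-1])    # invert the monotone calibration
        v = f1 * np.interp(z, z_cal, d1)              # v = f1 * d1(z)
        return z, v

    print(evaluate_burst(f1=1.0e6, f2=1.1e6))
    ```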

  15. Optimization of planar PIV-based pressure estimates in laminar and turbulent wakes

    NASA Astrophysics Data System (ADS)

    McClure, Jeffrey; Yarusevych, Serhiy

    2017-05-01

    The performance of four pressure estimation techniques using Eulerian material acceleration estimates from planar, two-component Particle Image Velocimetry (PIV) data were evaluated in a bluff body wake. To allow for the ground truth comparison of the pressure estimates, direct numerical simulations of flow over a circular cylinder were used to obtain synthetic velocity fields. Direct numerical simulations were performed for Re_D = 100, 300, and 1575, spanning laminar, transitional, and turbulent wake regimes, respectively. A parametric study encompassing a range of temporal and spatial resolutions was performed for each Re_D. The effect of random noise typical of experimental velocity measurements was also evaluated. The results identified optimal temporal and spatial resolutions that minimize the propagation of random and truncation errors to the pressure field estimates. A model derived from linear error propagation through the material acceleration central difference estimators was developed to predict these optima, and showed good agreement with the results from common pressure estimation techniques. The results of the model are also shown to provide acceptable first-order approximations for sampling parameters that reduce error propagation when Lagrangian estimations of material acceleration are employed. For pressure integration based on planar PIV, the effect of flow three-dimensionality was also quantified, and shown to be most pronounced at higher Reynolds numbers downstream of the vortex formation region, where dominant vortices undergo substantial three-dimensional deformations. The results of the present study provide a priori recommendations for the use of pressure estimation techniques from experimental PIV measurements in vortex dominated laminar and turbulent wake flows.
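
    A minimal sketch of the common first step of such pressure estimation techniques: the Eulerian material acceleration is built from central differences of the PIV velocity fields and converted to a pressure gradient via the momentum equation (viscous terms neglected). This illustrates the general approach, not the authors' exact implementation; the synthetic fields, grid spacing and index convention (axis 0 = x, axis 1 = y) are assumptions.

    ```python
    import numpy as np

    def dpdx_from_piv(u_prev, u, u_next, v, dx, dy, dt, rho=1000.0):
        """Return dp/dx from -rho * Du/Dt for one 2D snapshot (arrays indexed [x, y]).

        u_prev, u, u_next : x-velocity at t-dt, t, t+dt
        v                 : y-velocity at t
        """
        dudt = (u_next - u_prev) / (2.0 * dt)              # central difference in time
        dudx, dudy = np.gradient(u, dx, dy, edge_order=2)  # spatial derivatives
        ax = dudt + u * dudx + v * dudy                    # material acceleration (x)
        # the y-component is built the same way from the v fields
        return -rho * ax

    # Toy usage with random fields on a 32x32 grid
    rng = np.random.default_rng(0)
    u0, u1, u2, v1 = (rng.normal(size=(32, 32)) for _ in range(4))
    dpdx = dpdx_from_piv(u0, u1, u2, v1, dx=1e-3, dy=1e-3, dt=1e-3)
    ```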

  16. Bayesian inference of ice thickness from remote-sensing data

    NASA Astrophysics Data System (ADS)

    Werder, Mauro A.; Huss, Matthias

    2017-04-01

    Knowledge about ice thickness and volume is indispensable for studying ice dynamics, future sea-level rise due to glacier melt, or their contribution to regional hydrology. Accurate measurements of glacier thickness require on-site work, usually employing radar techniques. However, these field measurements are time consuming, expensive and sometimes downright impossible. Conversely, measurements of the ice surface, namely elevation and flow velocity, are becoming available world-wide through remote sensing. The model of Farinotti et al. (2009) calculates ice thicknesses based on a mass conservation approach paired with shallow ice physics, using estimates of the surface mass balance. The presented work applies a Bayesian inference approach to estimate the parameters of a modified version of this forward model by fitting it to measurements of both surface flow speed and ice thickness. The inverse model outputs the ice thickness as well as the distribution of its error. We fit the model to ten test glaciers and ice caps and quantify the improvement of thickness estimates through the use of surface ice flow measurements.
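
    A minimal sketch of the Bayesian inference step: a random-walk Metropolis sampler fits one parameter of a placeholder forward model to noisy thickness and surface-speed observations. The forward model, prior and data below are stand-ins, not the modified Farinotti et al. (2009) model used in the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    def forward(theta):
        """Placeholder forward model: predicted (thickness, surface speed)."""
        return np.array([100.0 * theta, 50.0 / theta])

    obs = np.array([160.0, 30.0])        # synthetic observations
    sigma = np.array([10.0, 5.0])        # observation uncertainties

    def log_post(theta):
        if theta <= 0.0:
            return -np.inf               # flat prior on theta > 0
        r = (forward(theta) - obs) / sigma
        return -0.5 * np.sum(r**2)

    samples, theta = [], 1.0
    for _ in range(5000):
        prop = theta + 0.1 * rng.normal()
        if np.log(rng.uniform()) < log_post(prop) - log_post(theta):
            theta = prop                 # accept the proposal
        samples.append(theta)

    print(np.mean(samples[1000:]), np.std(samples[1000:]))  # posterior summary
    ```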

  17. Virtual sensors for robust on-line monitoring (OLM) and Diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tipireddy, Ramakrishna; Lerchen, Megan E.; Ramuhalli, Pradeep

    Unscheduled shutdown of nuclear power facilities for recalibration and replacement of faulty sensors can be expensive and disruptive to grid management. In this work, we present virtual (software) sensors that can replace a faulty physical sensor for a short duration, thus allowing recalibration to be safely deferred to a later time. The virtual sensor model uses a Gaussian process model to process input data from redundant and other nearby sensors. Predicted data include uncertainty bounds covering spatial association uncertainty and measurement noise and error. Using data from an instrumented cooling water flow loop testbed, the virtual sensor model has predicted correct sensor measurements and the associated error corresponding to a faulty sensor.
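
    A minimal sketch of a Gaussian-process virtual sensor of this kind, assuming scikit-learn is available: the model is trained on a period when the physical sensor is healthy and then predicts the faulty sensor from redundant neighbouring sensors, together with an uncertainty estimate. The data are synthetic and the kernel choice is illustrative.

    ```python
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    t = np.linspace(0, 10, 200)
    neighbours = np.column_stack([np.sin(t), np.cos(t)])          # redundant sensors
    target = 0.7 * np.sin(t) + 0.3 * np.cos(t) + 0.05 * rng.normal(size=t.size)

    train = slice(0, 150)                  # period with a healthy physical sensor
    test = slice(150, 200)                 # period where the sensor is faulty

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(neighbours[train], target[train])

    pred, std = gp.predict(neighbours[test], return_std=True)     # virtual sensor output
    print(pred[:3], std[:3])               # predicted values and uncertainty
    ```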

  18. In vivo photoacoustic tomography of total blood flow and Doppler angle

    NASA Astrophysics Data System (ADS)

    Yao, Junjie; Maslov, Konstantin I.; Wang, Lihong V.

    2012-02-01

    As two hallmarks of cancer, angiogenesis and hypermetabolism are closely related to increased blood flow. Volumetric blood flow measurement is important to understanding the tumor microenvironment and developing new means to treat cancer. Current photoacoustic blood flow estimation methods focus on either the axial or transverse component of the flow vector. Here, we propose a method to compute the total flow speed and Doppler angle by combining the axial and transverse flow measurements. Both the components are measured in M-mode. Collating the A-lines side by side yields a 2D matrix. The columns are Hilbert transformed to compare the phases for the computation of the axial flow. The rows are Fourier transformed to quantify the bandwidth for the computation of the transverse flow. From the axial and transverse flow components, the total flow speed and Doppler angle can be derived. The method has been verified by flowing bovine blood in a plastic tube at various speeds from 0 to 7.5 mm/s and at Doppler angles from 30 to 330°. The measurement error for total flow speed was experimentally determined to be less than 0.3 mm/s; for the Doppler angle, it was less than 15°. In addition, the method was tested in vivo on a mouse ear. The advantage of this method is simplicity: No system modification or additional data acquisition is required to use our existing system. We believe that the proposed method has the potential to be used for cancer angiogenesis and hypermetabolism imaging.
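
    The final combination step described above reduces to simple vector arithmetic; a sketch follows, with the axial and transverse components (which would come from the Hilbert-phase and Fourier-bandwidth analyses of the M-mode data) given as plain numbers.

    ```python
    import numpy as np

    def total_flow(v_axial, v_transverse):
        speed = np.hypot(v_axial, v_transverse)                 # |v| = sqrt(va^2 + vt^2)
        angle = np.degrees(np.arctan2(v_transverse, v_axial))   # Doppler angle
        return speed, angle

    print(total_flow(v_axial=3.0, v_transverse=4.0))            # -> (5.0, ~53.1 degrees)
    ```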

  19. Real-time hydraulic interval state estimation for water transport networks: a case study

    NASA Astrophysics Data System (ADS)

    Vrachimis, Stelios G.; Eliades, Demetrios G.; Polycarpou, Marios M.

    2018-03-01

    Hydraulic state estimation in water distribution networks is the task of estimating water flows and pressures in the pipes and nodes of the network based on some sensor measurements. This requires a model of the network as well as knowledge of demand outflow and tank water levels. Due to modeling and measurement uncertainty, standard state estimation may result in inaccurate hydraulic estimates without any measure of the estimation error. This paper describes a methodology for generating hydraulic state bounding estimates based on interval bounds on the parametric and measurement uncertainties. The estimation error bounds provided by this method can be applied to determine the existence of unaccounted-for water in water distribution networks. As a case study, the method is applied to a modified transport network in Cyprus, using actual data in real time.

  20. Magnetic Nanoparticle Thermometer: An Investigation of Minimum Error Transmission Path and AC Bias Error

    PubMed Central

    Du, Zhongzhou; Su, Rijian; Liu, Wenzhong; Huang, Zhixing

    2015-01-01

    The signal transmission module of a magnetic nanoparticle thermometer (MNPT) was established in this study to analyze the error sources introduced during the signal flow in the hardware system. The underlying error sources that significantly affected the precision of the MNPT were determined through mathematical modeling and simulation. A transfer module path with the minimum error in the hardware system was then proposed through the analysis of the variations of the system error caused by the significant error sources when the signal flowed through the signal transmission module. In addition, a system parameter, named the signal-to-AC bias ratio (i.e., the ratio between the signal and AC bias), was identified as a direct determinant of the precision of the measured temperature. The temperature error was below 0.1 K when the signal-to-AC bias ratio was higher than 80 dB and other system errors were not considered. The temperature error was below 0.1 K in the experiments with a commercial magnetic fluid (Sample SOR-10, Ocean Nanotechnology, Springdale, AR, USA) when the hardware system of the MNPT was designed with the aforementioned method. PMID:25875188

  1. Accuracy of flowmeters measuring horizontal groundwater flow in an unconsolidated aquifer simulator.

    USGS Publications Warehouse

    Bayless, E.R.; Mandell, Wayne A.; Ursic, James R.

    2011-01-01

    Borehole flowmeters that measure horizontal flow velocity and direction of groundwater flow are being increasingly applied to a wide variety of environmental problems. This study was carried out to evaluate the measurement accuracy of several types of flowmeters in an unconsolidated aquifer simulator. Flowmeter response to hydraulic gradient, aquifer properties, and well-screen construction was measured during 2003 and 2005 at the U.S. Geological Survey Hydrologic Instrumentation Facility in Bay St. Louis, Mississippi. The flowmeters tested included a commercially available heat-pulse flowmeter, an acoustic Doppler flowmeter, a scanning colloidal borescope flowmeter, and a fluid-conductivity logging system. Results of the study indicated that at least one flowmeter was capable of measuring borehole flow velocity and direction in most simulated conditions. The mean error in direction measurements ranged from 15.1 degrees to 23.5 degrees, and the directional accuracy of all tested flowmeters improved with increasing hydraulic gradient. The Darcy velocities examined in this study ranged from 4.3 to 155 ft/d. For many plots comparing the simulated and measured Darcy velocity, the squared correlation coefficient (r2) exceeded 0.92. The accuracy of velocity measurements varied with well construction and velocity magnitude. The use of horizontal flowmeters in environmental studies appears promising, but applications may require more than one type of flowmeter to span the range of conditions encountered in the field. Interpreting flowmeter data from field settings may be complicated by geologic heterogeneity, preferential flow, vertical flow, constricted screen openings, and nonoptimal screen orientation.

  2. Blind system identification of two-thermocouple sensor based on cross-relation method.

    PubMed

    Li, Yanfeng; Zhang, Zhijie; Hao, Xiaojian

    2018-03-01

    In dynamic temperature measurement, the dynamic characteristics of the sensor affect the accuracy of the measurement results. Thermocouples are widely used for temperature measurement in harsh conditions due to their low cost, robustness, and reliability, but because of the presence of thermal inertia, there is a dynamic error in the dynamic temperature measurement. In order to eliminate the dynamic error, a two-thermocouple sensor was used in this paper to measure dynamic gas temperature in constant-velocity flow environments. Blind system identification of the two-thermocouple sensor based on a cross-relation method was carried out. A particle swarm optimization algorithm was used to estimate the time constants of the two thermocouples and was compared with a grid-based search method. The method was validated on experimental equipment built using a high-temperature furnace, and the input dynamic temperature was reconstructed using the output data of the thermocouple with the smaller time constant.
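
    A minimal sketch of the cross-relation idea for two first-order sensors: if y1 and y2 are the responses of the two thermocouples (time constants tau1 and tau2) to the same gas temperature, then filtering y2 with sensor 1's model should equal filtering y1 with sensor 2's model, and the mismatch is minimised over candidate time constants. A simple grid search is used here for illustration; the paper's particle swarm optimisation minimises the same kind of cost. Data and time constants are synthetic.

    ```python
    import numpy as np

    def first_order(x, tau, dt):
        """Discrete first-order lag: y[k] = a*y[k-1] + (1-a)*x[k], a = exp(-dt/tau)."""
        a = np.exp(-dt / tau)
        y = np.zeros_like(x)
        y[0] = x[0]                                   # start in equilibrium
        for k in range(1, len(x)):
            y[k] = a * y[k - 1] + (1.0 - a) * x[k]
        return y

    dt = 0.001
    t = np.arange(0, 2, dt)
    gas = 300.0 + 200.0 * (t > 0.5)                   # synthetic step in gas temperature
    y1 = first_order(gas, 0.05, dt)                   # "measured" outputs of the two
    y2 = first_order(gas, 0.15, dt)                   # thermocouples (true taus 0.05/0.15)

    def cost(tau1, tau2):
        return np.sum((first_order(y2, tau1, dt) - first_order(y1, tau2, dt)) ** 2)

    taus = np.linspace(0.01, 0.3, 30)
    best = min((cost(a, b), a, b) for a in taus for b in taus if a < b)
    print(best[1:])                                   # estimated (tau1, tau2)
    ```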

  3. Blind system identification of two-thermocouple sensor based on cross-relation method

    NASA Astrophysics Data System (ADS)

    Li, Yanfeng; Zhang, Zhijie; Hao, Xiaojian

    2018-03-01

    In dynamic temperature measurement, the dynamic characteristics of the sensor affect the accuracy of the measurement results. Thermocouples are widely used for temperature measurement in harsh conditions due to their low cost, robustness, and reliability, but because of the presence of thermal inertia, there is a dynamic error in the dynamic temperature measurement. In order to eliminate the dynamic error, a two-thermocouple sensor was used in this paper to measure dynamic gas temperature in constant-velocity flow environments. Blind system identification of the two-thermocouple sensor based on a cross-relation method was carried out. A particle swarm optimization algorithm was used to estimate the time constants of the two thermocouples and was compared with a grid-based search method. The method was validated on experimental equipment built using a high-temperature furnace, and the input dynamic temperature was reconstructed using the output data of the thermocouple with the smaller time constant.

  4. A mathematical method for verifying the validity of measured information about the flows of energy resources based on the state estimation theory

    NASA Astrophysics Data System (ADS)

    Pazderin, A. V.; Sof'in, V. V.; Samoylenko, V. O.

    2015-11-01

    Efforts aimed at improving energy efficiency in all branches of the fuel and energy complex should begin with setting up a high-tech automated system for monitoring and accounting energy resources. Malfunctions and failures in the measurement and information parts of this system may distort commercial measurements of energy resources and lead to financial risks for power supplying organizations. In addition, measurement errors may be connected with intentional distortion of measurements aimed at reducing payment for energy resources on the consumer's side, which leads to commercial losses of energy resources. The article presents a universal mathematical method for verifying the validity of measurement information in networks for transporting energy resources, such as electricity and heat, petroleum, gas, etc., based on the state estimation theory. The energy resource transportation network is represented by a graph whose nodes correspond to producers and consumers and whose branches stand for transportation mains (power lines, pipelines, and heat network elements). The main idea of state estimation is to obtain calculated analogs of the energy resource flows for all available measurements. Unlike "raw" measurements, which contain inaccuracies, the calculated flows of energy resources, called estimates, fully satisfy the suitability condition for all state equations describing the energy resource transportation network. The state equations written in terms of the calculated estimates are already free of residuals. The difference between a measurement and its calculated analog (estimate) is called, in estimation theory, an estimation remainder. Large values of the estimation remainders are an indicator of large errors in particular energy resource measurements. By using the presented method it is possible to improve the validity of energy resource measurements, to estimate the observability of the transportation network, to eliminate imbalances in the measured energy resource flows, and to filter out invalid measurements at the data acquisition and processing stage of an automated energy resource monitoring and accounting system.
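
    A minimal sketch of the underlying state-estimation computation on a toy metered network: the measured branch flows are adjusted by weighted least squares subject to the node balance equation, solved through the KKT system, and the estimation remainders (measurement minus estimate) flag suspect meters. The three-branch network and weights are illustrative, not from the article.

    ```python
    import numpy as np

    z = np.array([100.0, 58.0, 45.0])      # measured flows: inflow, two outflows
    W = np.diag([1.0, 4.0, 4.0])           # weights ~ 1/variance of each meter
    A = np.array([[1.0, -1.0, -1.0]])      # node balance: f_in - f_out1 - f_out2 = 0

    # KKT system for: min (z-x)' W (z-x)  subject to  A x = 0
    n, m = len(z), A.shape[0]
    K = np.block([[2.0 * W, A.T], [A, np.zeros((m, m))]])
    rhs = np.concatenate([2.0 * W @ z, np.zeros(m)])
    x = np.linalg.solve(K, rhs)[:n]        # estimated (balanced) flows

    print("estimates:", x)
    print("estimation remainders:", z - x) # large remainders flag bad measurements
    ```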

  5. Representing radar rainfall uncertainty with ensembles based on a time-variant geostatistical error modelling approach

    NASA Astrophysics Data System (ADS)

    Cecinati, Francesca; Rico-Ramirez, Miguel Angel; Heuvelink, Gerard B. M.; Han, Dawei

    2017-05-01

    The application of radar quantitative precipitation estimation (QPE) to hydrology and water quality models can be preferred to interpolated rainfall point measurements because of the wide coverage that radars can provide, together with good spatio-temporal resolution. Nonetheless, it is often limited by the proneness of radar QPE to a multitude of errors. Although radar errors have been widely studied and techniques have been developed to correct most of them, residual errors are still intrinsic in radar QPE. An estimation of the uncertainty of radar QPE and an assessment of uncertainty propagation in modelling applications are important to quantify the relative importance of the uncertainty associated with radar rainfall input in the overall modelling uncertainty. A suitable tool for this purpose is the generation of radar rainfall ensembles. An ensemble is the representation of the rainfall field and its uncertainty through a collection of possible alternative rainfall fields, produced according to the observed errors, their spatial characteristics, and their probability distribution. The errors are derived from a comparison between radar QPE and ground point measurements. The novelty of the proposed ensemble generator is that it is based on a geostatistical approach that assures a fast and robust generation of synthetic error fields, based on the time-variant characteristics of errors. The method is developed to meet the requirements of operational application to large datasets. The method is applied to a case study in Northern England, using the UK Met Office NIMROD radar composites at 1 km resolution and at 1 h accumulation on an area of 180 km by 180 km. The errors are estimated using a network of 199 tipping bucket rain gauges from the Environment Agency. 183 of the rain gauges are used for the error modelling, while 16 are kept apart for validation. The validation is done by comparing the radar rainfall ensemble with the values recorded by the validation rain gauges. The validated ensemble is then tested on a hydrological case study, to show the advantage of probabilistic rainfall for uncertainty propagation. The ensemble spread only partially captures the mismatch between the modelled and the observed flow. The residual uncertainty can be attributed to other sources of uncertainty, in particular to model structural uncertainty, parameter identification uncertainty, uncertainty in other inputs, and uncertainty in the observed flow.

  6. Application of a Line Laser Scanner for Bed Form Tracking in a Laboratory Flume

    NASA Astrophysics Data System (ADS)

    de Ruijsscher, T. V.; Hoitink, A. J. F.; Dinnissen, S.; Vermeulen, B.; Hazenberg, P.

    2018-03-01

    A new measurement method for continuous detection of bed forms in movable-bed laboratory experiments is presented and tested. The device consists of a line laser coupled to a 3-D camera, which makes use of triangulation. This makes it possible to measure bed forms during morphodynamic experiments without removing the water from the flume. A correction is applied for the effect of laser refraction at the air-water interface. We conclude that the absolute measurement error increases with increasing flow velocity, its standard deviation increases with water depth and flow velocity, and the percentage of missing values increases with water depth. Although 71% of the data is lost in a pilot moving-bed experiment with sand, high agreement between flowing-water and dry-bed measurements is still found when a robust LOcally weighted regrESSion (LOESS) procedure is applied. This is promising for bed form tracking applications in laboratory experiments, especially when lightweight sediments like polystyrene are used, which require smaller flow velocities to achieve dynamic similarity to the prototype. This is confirmed in a moving-bed experiment with polystyrene.

  7. Retrieving accurate temporal and spatial information about Taylor slug flows from non-invasive NIR photometry measurements

    NASA Astrophysics Data System (ADS)

    Helmers, Thorben; Thöming, Jorg; Mießner, Ulrich

    2017-11-01

    In this article, we introduce a novel approach to retrieve spatially and time-resolved Taylor slug flow information from a single non-invasive photometric flow sensor. The presented approach uses disperse-phase surface properties to retrieve the instantaneous velocity information from a single sensor's time-scaled signal. For this purpose, a photometric sensor system is simulated using a ray-tracing algorithm to calculate spatially resolved near-infrared transmission signals. At the signal position corresponding to the rear droplet cap, a correlation factor of the droplet's geometric properties is retrieved and used to extract the instantaneous droplet velocity from the real sensor's temporal transmission signal. Furthermore, a correlation for the rear cap geometry based on the a priori known total superficial flow velocity is developed, because the cap curvature is itself velocity sensitive. Our model for velocity derivation is validated, and measurements of a first prototype showcase the capability of the device. Long-term measurements visualize systematic fluctuations in droplet lengths, velocities, and frequencies that could otherwise, without observation on a larger timescale, have been identified as measurement errors rather than systematic phenomena.

  8. The effect of rainfall measurement uncertainties on rainfall-runoff processes modelling.

    PubMed

    Stransky, D; Bares, V; Fatka, P

    2007-01-01

    Rainfall data are a crucial input for various tasks concerning the wet weather period. Nevertheless, their measurement is affected by random and systematic errors that cause an underestimation of the rainfall volume. Therefore, the general objective of the presented work was to assess the credibility of measured rainfall data and to evaluate the effect of measurement errors on urban drainage modelling tasks. Within the project, the methodology of the tipping bucket rain gauge (TBR) was defined and assessed in terms of uncertainty analysis. A set of 18 TBRs was calibrated and the results were compared to the previous calibration. This enables us to evaluate the ageing of TBRs. A propagation of calibration and other systematic errors through the rainfall-runoff model was performed on an experimental catchment. It was found that TBR calibration is important mainly for tasks connected with the assessment of peak values and high-flow durations. The omission of calibration leads to an underestimation of up to 30%, and the effect of other systematic errors can add a further 15%. The TBR calibration should be done every two years in order to keep pace with the ageing of the TBR mechanics. Further, the authors recommend adjusting the dynamic test duration in proportion to the generated rainfall intensity.

  9. Chemistry of groundwater discharge inferred from longitudinal river sampling

    NASA Astrophysics Data System (ADS)

    Batlle-Aguilar, J.; Harrington, G. A.; Leblanc, M.; Welch, C.; Cook, P. G.

    2014-02-01

    We present an approach for identifying groundwater discharge chemistry and quantifying spatially distributed groundwater discharge into rivers based on longitudinal synoptic sampling and flow gauging of a river. The method is demonstrated using a 450 km reach of a tropical river in Australia. Results obtained from sampling for environmental tracers, major ions, and selected trace element chemistry were used to calibrate a steady state one-dimensional advective transport model of tracer distribution along the river. The model closely reproduced river discharge and environmental tracer and chemistry composition along the study length. It provided a detailed longitudinal profile of groundwater inflow chemistry and discharge rates, revealing that regional fractured mudstones in the central part of the catchment contributed up to 40% of all groundwater discharge. Detailed analysis of model calibration errors and modeled/measured groundwater ion ratios elucidated that groundwater discharging in the top of the catchment is a mixture of local groundwater and bank storage return flow, making the method potentially useful to differentiate between local and regional sourced groundwater discharge. As the error in tracer concentration induced by a flow event applies equally to any conservative tracer, we show that major ion ratios can still be resolved with minimal error when river samples are collected during transient flow conditions. The ability of the method to infer groundwater inflow chemistry from longitudinal river sampling is particularly attractive in remote areas where access to groundwater is limited or not possible, and for identification of actual fluxes of salts and/or specific contaminant sources.
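
    A minimal sketch of the steady-state, one-dimensional mixing that underlies such a model: marching downstream, each river cell gains groundwater inflow with a given tracer concentration, and the river discharge and concentration are updated by mass balance. Inflow rates, concentrations and the reach discretisation are illustrative only, and the calibration step is not shown.

    ```python
    import numpy as np

    dx = 1000.0                               # cell length [m]
    n = 450                                   # 450 km study reach
    q_in = np.full(n, 2.0e-4)                 # groundwater inflow per unit length [m3/s/m]
    c_gw = np.full(n, 800.0)                  # groundwater tracer concentration
    Q, C = 5.0, 100.0                         # river flow [m3/s] and concentration at top

    Q_profile, C_profile = [], []
    for i in range(n):
        q = q_in[i] * dx                      # inflow to this cell [m3/s]
        C = (Q * C + q * c_gw[i]) / (Q + q)   # conservative tracer mass balance
        Q = Q + q                             # water balance (no losses or evaporation)
        Q_profile.append(Q)
        C_profile.append(C)

    print(Q_profile[-1], C_profile[-1])       # flow and concentration at the outlet
    ```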

  10. In vitro validation of a Pitot-based flow meter for the measurement of respiratory volume and flow in large animal anaesthesia.

    PubMed

    Moens, Yves P S; Gootjes, Peter; Ionita, Jean-Claude; Heinonen, Erkki; Schatzmann, Urs

    2009-05-01

    To remodel and validate commercially available monitors and their Pitot tube-based flow sensors for use in large animals, using in vitro techniques. Prospective, in vitro experiment. Both the original and the remodelled sensor were studied with a reference flow generator. Measurements were taken of the static flow-pressure relationship and linearity of the flow signal. Sensor airway resistance was calculated. Following recalibration of the host monitor, volumes ranging from 1 to 7 L were generated by a calibration syringe, and bias and precision of spirometric volume was determined. Where manual recalibration was not available, a conversion factor for volume measurement was determined. The influence of gas composition mixture and peak flow on the conversion factor was studied. Both the original and the remodelled sensor showed similar static flow-pressure relationships and linearity of the flow signal. Mean bias (%) of displayed values compared with the reference volume of 3, 5 and 7 L varied between -0.4% and +2.4%, and this was significantly smaller than that for 1 L (4.8% to +5.0%). Conversion factors for 3, 5 and 7 L were very similar (mean 6.00 +/- 0.2, range 5.91-6.06) and were not significantly influenced by the gas mixture used. Increasing peak flow caused a small decrease in the conversion factor. Volume measurement error and conversion factors for inspiration and expiration were close to identity. The combination of the host monitor with the remodelled flow sensor allowed accurate in vitro measurement of flows and volumes in a range expected during large animal anaesthesia. This combination has potential as a reliable spirometric monitor for use during large animal anaesthesia.

  11. Estimating selected low-flow frequency statistics and harmonic-mean flows for ungaged, unregulated streams in Indiana

    USGS Publications Warehouse

    Martin, Gary R.; Fowler, Kathleen K.; Arihood, Leslie D.

    2016-09-06

    Information on low-flow characteristics of streams is essential for the management of water resources. This report provides equations for estimating the 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and the harmonic-mean flow at ungaged, unregulated stream sites in Indiana. These equations were developed using the low-flow statistics and basin characteristics for 108 continuous-record streamgages in Indiana with at least 10 years of daily mean streamflow data through the 2011 climate year (April 1 through March 31). The equations were developed in cooperation with the Indiana Department of Environmental Management. Regression techniques were used to develop the equations for estimating low-flow frequency statistics and the harmonic-mean flows on the basis of drainage-basin characteristics. A geographic information system was used to measure basin characteristics for selected streamgages. A final set of 25 basin characteristics measured at all the streamgages was evaluated to choose the best predictors of the low-flow statistics. Logistic-regression equations applicable statewide are presented for estimating the probability that selected low-flow frequency statistics equal zero. These equations use the explanatory variables total drainage area, average transmissivity of the full thickness of the unconsolidated deposits within 1,000 feet of the stream network, and latitude of the basin outlet. The percentage of the streamgage low-flow statistics correctly classified as zero or nonzero using the logistic-regression equations ranged from 86.1 to 88.9 percent. Generalized-least-squares regression equations applicable statewide for estimating nonzero low-flow frequency statistics use total drainage area, the average hydraulic conductivity of the top 70 feet of unconsolidated deposits, the slope of the basin, and the index of permeability and thickness of the Quaternary surficial sediments as explanatory variables. The average standard error of prediction of these regression equations ranges from 55.7 to 61.5 percent. Regional weighted-least-squares regression equations were developed for estimating the harmonic-mean flows by dividing the State into three low-flow regions. The Northern region uses total drainage area and the average transmissivity of the entire thickness of unconsolidated deposits as explanatory variables. The Central region uses total drainage area, the average hydraulic conductivity of the entire thickness of unconsolidated deposits, and the index of permeability and thickness of the Quaternary surficial sediments. The Southern region uses total drainage area and the percent of the basin covered by forest. The average standard error of prediction for these equations ranges from 39.3 to 66.7 percent. The regional regression equations are applicable only to stream sites with low flows unaffected by regulation and to stream sites with drainage basin characteristic values within specified limits. Caution is advised when applying the equations for basins with characteristics near the applicable limits and for basins with karst drainage features and for urbanized basins. Extrapolations near and beyond the applicable basin characteristic limits will have unknown errors that may be large. Equations are presented for use in estimating the 90-percent prediction interval of the low-flow statistics estimated by use of the regression equations at a given stream site. The regression equations are to be incorporated into the U.S. 
Geological Survey StreamStats Web-based application for Indiana. StreamStats allows users to select a stream site on a map and automatically measure the needed basin characteristics and compute the estimated low-flow statistics and associated prediction intervals.

  12. Scaling up and error analysis of transpiration for Populus euphratica in a desert riparian forest

    NASA Astrophysics Data System (ADS)

    Si, J.; Li, W.; Feng, Q.

    2013-12-01

    Water consumption information for the forest stand is the most important factor for regional water resources management. However, the water consumption of individual trees is usually measured on a limited number of sample trees, so scaling data from a series of sample trees up to the entire stand is an important issue. Estimation of sap flow flux density (Fd) and stand sapwood area (AS-stand) are among the most critical factors for determining forest stand transpiration using sap flow measurement. To estimate Fd, the various steps in the sap flow technique have a great impact on the measurement; to estimate AS-stand, an appropriate indirect technique for estimating the sapwood area of each tree (AS-tree) is required, because it is impossible to measure AS-tree for all trees in a forest stand. In this study, Fd was measured in two mature P. euphratica trees at two radial depths (0-10 and 10-30 mm) using sap flow sensors based on the heat ratio method, and a relationship between AS-tree and stem diameter (DBH) and a growth model of AS-tree were established using survey data on DBH, tree age, and AS-tree. The results revealed that transpiration can be scaled up from sample trees to the entire forest stand using AS-tree and Fd; however, stand transpiration (E) will be overestimated by 12.6% if only the 0-10 mm Fd is used and underestimated by 25.3% if only the 10-30 mm Fd is used, implying that major uncertainties in mean stand Fd estimates are caused by radial variations in Fd. E will also be markedly overestimated if AS-stand is held constant; this implies that simulating daily changes in AS-stand is key to improving prediction accuracy. The results also showed that the potential errors in transpiration became almost stable for sample sizes of approximately 30 trees or more for P. euphratica, suggesting that at least 30 trees should be sampled when constructing an allometric equation.
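
    A minimal sketch of the scaling-up calculation: stand transpiration is the radially weighted mean sap flux density multiplied by the stand sapwood area obtained from an allometric AS-tree(DBH) relation summed over the surveyed trees. The allometric coefficients, ring-area fractions and flux densities below are illustrative, not the values fitted in the study.

    ```python
    import numpy as np

    def sapwood_area(dbh_cm, a=0.45, b=1.9):
        """Hypothetical allometric relation AS-tree = a * DBH^b [cm^2]."""
        return a * dbh_cm**b

    dbh_stand = np.array([12.0, 18.0, 22.0, 30.0, 15.0])   # surveyed DBH values [cm]
    as_stand = sapwood_area(dbh_stand).sum()               # stand sapwood area [cm^2]

    fd_outer, fd_inner = 35.0, 18.0     # sap flux density, 0-10 and 10-30 mm [cm3 cm-2 h-1]
    w_outer, w_inner = 0.4, 0.6         # fraction of sapwood area in each radial band
    fd_mean = w_outer * fd_outer + w_inner * fd_inner      # radially weighted mean

    E = fd_mean * as_stand * 1e-3       # stand transpiration [litres per hour]
    print(round(E, 1))
    ```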

  13. LV software support for supersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Bell, W. A.; Lepicovsky, J.

    1992-01-01

    The software for configuring an LV counter processor system has been developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system has been developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.

  14. LV software support for supersonic flow analysis

    NASA Technical Reports Server (NTRS)

    Bell, William A.

    1992-01-01

    The software for configuring a Laser Velocimeter (LV) counter processor system was developed using structured design. The LV system includes up to three counter processors and a rotary encoder. The software for configuring and testing the LV system was developed, tested, and included in an overall software package for data acquisition, analysis, and reduction. Error handling routines respond to both operator and instrument errors which often arise in the course of measuring complex, high-speed flows. The use of networking capabilities greatly facilitates the software development process by allowing software development and testing from a remote site. In addition, high-speed transfers allow graphics files or commands to provide viewing of the data from a remote site. Further advances in data analysis require corresponding advances in procedures for statistical and time series analysis of nonuniformly sampled data.

  15. Visualization of Concrete Slump Flow Using the Kinect Sensor

    PubMed Central

    Park, Minbeom

    2018-01-01

    Workability is regarded as one of the important parameters of high-performance concrete, and monitoring it is essential in concrete quality management at construction sites. The conventional workability test methods are basically based on length and time measured by a ruler and a stopwatch and, as such, inevitably involve human error. In this paper, we propose a 4D slump test method based on digital measurement and data processing as a novel concrete workability test. After acquiring the dynamically changing 3D surface of fresh concrete using a 3D depth sensor during the slump flow test, the stream images are processed with the proposed 4D slump processing algorithm and the results are compressed into a single 4D slump image. This image basically represents the dynamically spreading cross-section of fresh concrete along the time axis. From the 4D slump image, it is possible to determine the slump flow diameter, slump flow time, and slump height at any location simultaneously. The proposed 4D slump test will be able to activate research related to concrete flow simulation and concrete rheology by providing spatiotemporal measurement data of concrete flow. PMID:29510510

  16. Visualization of Concrete Slump Flow Using the Kinect Sensor.

    PubMed

    Kim, Jung-Hoon; Park, Minbeom

    2018-03-03

    Workability is regarded as one of the important parameters of high-performance concrete, and monitoring it is essential in concrete quality management at construction sites. The conventional workability test methods are basically based on length and time measured by a ruler and a stopwatch and, as such, inevitably involve human error. In this paper, we propose a 4D slump test method based on digital measurement and data processing as a novel concrete workability test. After acquiring the dynamically changing 3D surface of fresh concrete using a 3D depth sensor during the slump flow test, the stream images are processed with the proposed 4D slump processing algorithm and the results are compressed into a single 4D slump image. This image basically represents the dynamically spreading cross-section of fresh concrete along the time axis. From the 4D slump image, it is possible to determine the slump flow diameter, slump flow time, and slump height at any location simultaneously. The proposed 4D slump test will be able to activate research related to concrete flow simulation and concrete rheology by providing spatiotemporal measurement data of concrete flow.

  17. Median and Low-Flow Characteristics for Streams under Natural and Diverted Conditions, Northeast Maui, Hawaii

    USGS Publications Warehouse

    Gingerich, Stephen B.

    2005-01-01

    Flow-duration statistics under natural (undiverted) and diverted flow conditions were estimated for gaged and ungaged sites on 21 streams in northeast Maui, Hawaii. The estimates were made using the optimal combination of continuous-record gaging-station data, low-flow measurements, and values determined from regression equations developed as part of this study. Estimated 50- and 95-percent flow duration statistics for streams are presented and the analyses done to develop and evaluate the methods used in estimating the statistics are described. Estimated streamflow statistics are presented for sites where various amounts of streamflow data are available as well as for locations where no data are available. Daily mean flows were used to determine flow-duration statistics for continuous-record stream-gaging stations in the study area following U.S. Geological Survey established standard methods. Duration discharges of 50- and 95-percent were determined from total flow and base flow for each continuous-record station. The index-station method was used to adjust all of the streamflow records to a common, long-term period. The gaging station on West Wailuaiki Stream (16518000) was chosen as the index station because of its record length (1914-2003) and favorable geographic location. Adjustments based on the index-station method resulted in decreases to the 50-percent duration total flow, 50-percent duration base flow, 95-percent duration total flow, and 95-percent duration base flow computed on the basis of short-term records that averaged 7, 3, 4, and 1 percent, respectively. For the drainage basin of each continuous-record gaged site and selected ungaged sites, morphometric, geologic, soil, and rainfall characteristics were quantified using Geographic Information System techniques. Regression equations relating the non-diverted streamflow statistics to basin characteristics of the gaged basins were developed using ordinary-least-squares regression analyses. Rainfall rate, maximum basin elevation, and the elongation ratio of the basin were the basin characteristics used in the final regression equations for 50-percent duration total flow and base flow. Rainfall rate and maximum basin elevation were used in the final regression equations for the 95-percent duration total flow and base flow. The relative errors between observed and estimated flows ranged from 10 to 20 percent for the 50-percent duration total flow and base flow, and from 29 to 56 percent for the 95-percent duration total flow and base flow. The regression equations developed for this study were used to determine the 50-percent duration total flow, 50-percent duration base flow, 95-percent duration total flow, and 95-percent duration base flow at selected ungaged diverted and undiverted sites. Estimated streamflow, prediction intervals, and standard errors were determined for 48 ungaged sites in the study area and for three gaged sites west of the study area. Relative errors were determined for sites where measured values of 95-percent duration discharge of total flow were available. East of Keanae Valley, the 95-percent duration discharge equation generally underestimated flow, and within and west of Keanae Valley, the equation generally overestimated flow. Reduction in 50- and 95-percent flow-duration values in stream reaches affected by diversions throughout the study area average 58 to 60 percent.

  18. Fabrication of rigid and flexible refractive-index-matched flow phantoms for flow visualisation and optical flow measurements

    NASA Astrophysics Data System (ADS)

    Geoghegan, P. H.; Buchmann, N. A.; Spence, C. J. T.; Moore, S.; Jermy, M.

    2012-05-01

    A method for the construction of both rigid and compliant (flexible) transparent flow phantoms of biological flow structures, suitable for PIV and other optical flow methods with refractive-index-matched working fluid is described in detail. Methods for matching the in vivo compliance and elastic wave propagation wavelength are presented. The manipulation of MRI and CT scan data through an investment casting mould is described. A method for the casting of bubble-free phantoms in silicone elastomer is given. The method is applied to fabricate flexible phantoms of the carotid artery (with and without stenosis), the carotid artery bifurcation (idealised and patient-specific) and the human upper airway (nasal cavity). The fidelity of the phantoms to the original scan data is measured, and it is shown that the cross-sectional error is less than 5% for phantoms of simple shape but up to 16% for complex cross-sectional shapes such as the nasal cavity. This error is mainly due to the application of a PVA coating to the inner mould and can be reduced by shrinking the digital model. Sixteen per cent variation in area is less than the natural patient to patient variation of the physiological geometries. The compliance of the phantom walls is controlled within physiologically realistic ranges, by choice of the wall thickness, transmural pressure and Young's modulus of the elastomer. Data for the dependence of Young's modulus on curing temperature are given for Sylgard 184. Data for the temperature dependence of density, viscosity and refractive index of the refractive-index-matched working liquid (i.e. water-glycerol mixtures) are also presented.

  19. Estimating the Magnitude and Frequency of Floods in Small Urban Streams in South Carolina, 2001

    USGS Publications Warehouse

    Feaster, Toby D.; Guimaraes, Wladimir B.

    2004-01-01

    The magnitude and frequency of floods at 20 streamflow-gaging stations on small, unregulated urban streams in or near South Carolina were estimated by fitting the measured water-year peak flows to a log-Pearson Type III distribution. The period of record (through September 30, 2001) for the measured water-year peak flows ranged from 11 to 25 years with a mean and median length of 16 years. The drainage areas of the streamflow-gaging stations ranged from 0.18 to 41 square miles. Based on the flood-frequency estimates from the 20 streamflow-gaging stations (13 in South Carolina; 4 in North Carolina; and 3 in Georgia), generalized least-squares regression was used to develop regional regression equations. These equations can be used to estimate the 2-, 5-, 10-, 25-, 50-, 100-, 200-, and 500-year recurrence-interval flows for small urban streams in the Piedmont, upper Coastal Plain, and lower Coastal Plain physiographic provinces of South Carolina. The most significant explanatory variables from this analysis were main-channel length, percent impervious area, and basin development factor. Mean standard errors of prediction for the regression equations ranged from -25 to 33 percent for the 10-year recurrence-interval flows and from -35 to 54 percent for the 100-year recurrence-interval flows. The U.S. Geological Survey has developed a Geographic Information System application called StreamStats that makes the process of computing streamflow statistics at ungaged sites faster and more consistent than manual methods. This application was developed in the Massachusetts District and ongoing work is being done in other districts to develop a similar application using streamflow statistics for those respective States. Considering the future possibility of implementing StreamStats in South Carolina, an alternative set of regional regression equations was developed using only main-channel length and impervious area. This was done because no digital coverages are currently available for basin development factor and, therefore, it could not be included in the StreamStats application. The average mean standard error of prediction for the alternative equations was 2 to 5 percent larger than the standard errors for the equations that contained basin development factor. For the urban streamflow-gaging stations in South Carolina, measured water-year peak flows were compared with those from an earlier urban flood-frequency investigation. The peak flows from the earlier investigation were computed using a rainfall-runoff model. At many of the sites, graphical comparisons indicated that the variance of the measured data was much less than the variance of the simulated data. Several statistical tests were applied to compare the variances and the means of the measured and simulated data for each site. The results indicated that the variances were significantly different for 11 of the 13 South Carolina streamflow-gaging stations. For one streamflow-gaging station, the test for normality, which is one of the assumptions of the data when comparing variances, indicated that neither the measured data nor the simulated data were distributed normally; therefore, the test for differences in the variances was not used for that streamflow-gaging station. Another statistical test was used to test for statistically significant differences in the means of the measured and simulated data. 
The results indicated that for 5 of the 13 urban streamflow-gaging stations in South Carolina there was a statistically significant difference in the means of the two data sets. For comparison purposes and to test the hypothesis that there may have been climatic differences between the period in which the peak-flow data were measured and the period for which historic rainfall data were used to compute the simulated peak flows, 16 rural streamflow-gaging stations with long-term records were reviewed using techniques similar to those used for the measured an
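    As a concrete illustration of the at-site analysis described above, the sketch below fits a log-Pearson Type III distribution to a series of annual peak flows and reads off recurrence-interval discharges. The peak-flow values are invented, and the fit ignores the weighted-skew and regional-regression steps used in the actual study.

```python
# Minimal sketch of a log-Pearson Type III flood-frequency fit, assuming
# annual peak flows are available as a simple array (values below are invented).
import numpy as np
from scipy import stats

peaks = np.array([420., 380., 910., 150., 640., 275., 1200., 530., 310., 760.,
                  480., 220., 870., 690., 355., 540.])  # ft^3/s, hypothetical

logq = np.log10(peaks)
skew, loc, scale = stats.pearson3.fit(logq)   # fit Pearson III to the log-flows

for T in (2, 5, 10, 25, 50, 100, 200, 500):
    p = 1.0 - 1.0 / T                          # non-exceedance probability
    qT = 10 ** stats.pearson3.ppf(p, skew, loc=loc, scale=scale)
    print(f"{T:3d}-year flow ~ {qT:8.0f} ft^3/s")
```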

  20. Confidence intervals in Flow Forecasting by using artificial neural networks

    NASA Astrophysics Data System (ADS)

    Panagoulia, Dionysia; Tsekouras, George

    2014-05-01

    One of the major inadequacies in implementation of Artificial Neural Networks (ANNs) for flow forecasting is the development of confidence intervals, because the relevant estimation cannot be implemented directly, in contrast to classical forecasting methods. The variation in the ANN output is a measure of uncertainty in the model predictions based on the training data set. Different methods for uncertainty analysis, such as bootstrap, Bayesian, and Monte Carlo, have already been proposed for hydrologic and geophysical models, while methods for confidence intervals, such as error output, re-sampling, and multi-linear regression adapted to ANNs, have been used for power load forecasting [1-2]. The aim of this paper is to present the re-sampling method for ANN prediction models and to develop it for next-day flow forecasting. The re-sampling method is based on the ascending sorting of the errors between real and predicted values for all input vectors. The cumulative sample distribution function of the prediction errors is calculated and the confidence intervals are estimated by keeping the intermediate values and rejecting the extreme values according to the desired confidence levels, while holding the intervals symmetrical in probability. To apply the confidence-interval method, input vectors are taken from the Mesochora catchment in western-central Greece. The ANN's training algorithm is the stochastic training back-propagation process with decreasing functions of learning rate and momentum term, for which an optimization process is conducted regarding the crucial parameter values, such as the number of neurons, the kind of activation functions, and the initial values and time parameters of the learning rate and momentum term. Input variables are historical data of previous days, such as flows and nonlinearly weather-related temperatures and rainfalls, selected by correlation analysis between the flow being predicted and each candidate input variable of the different ANN structures [3]. The performance of each ANN structure is evaluated by the voting analysis based on eleven criteria, which are the root mean square error (RMSE), the correlation index (R), the mean absolute percentage error (MAPE), the mean percentage error (MPE), the mean error (ME), the percentage volume error (VE), the percentage error in peak (MF), the normalized mean bias error (NMBE), the normalized root mean square error (NRMSE), the Nash-Sutcliffe model efficiency coefficient (E) and the modified Nash-Sutcliffe model efficiency coefficient (E1). The next-day flow for the test set is calculated using the best ANN structure's model. Consequently, the confidence intervals of various confidence levels for training, evaluation and test sets are compared in order to explore the generalisation dynamics of confidence intervals from training and evaluation sets. [1] H.S. Hippert, C.E. Pedreira, R.C. Souza, "Neural networks for short-term load forecasting: A review and evaluation," IEEE Trans. on Power Systems, vol. 16, no. 1, 2001, pp. 44-55. [2] G. J. Tsekouras, N.E. Mastorakis, F.D. Kanellos, V.T. Kontargyri, C.D. Tsirekis, I.S. Karanasiou, Ch.N. Elias, A.D. Salis, P.A. Kontaxis, A.A. 
Gialketsi: "Short term load forecasting in Greek interconnected power system using ANN: Confidence Interval using a novel re-sampling technique with corrective Factor", WSEAS International Conference on Circuits, Systems, Electronics, Control & Signal Processing, (CSECS '10), Vouliagmeni, Athens, Greece, December 29-31, 2010. [3] D. Panagoulia, I. Trichakis, G. J. Tsekouras: "Flow Forecasting via Artificial Neural Networks - A Study for Input Variables conditioned on atmospheric circulation", European Geosciences Union, General Assembly 2012 (NH1.1 / AS1.16 - Extreme meteorological and hydrological events induced by severe weather and climate change), Vienna, Austria, 22-27 April 2012.
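    To make the re-sampling idea concrete, the sketch below builds a confidence band around a forecast from the empirical quantiles of past prediction errors. It is an illustrative simplification under assumed data, not the authors' exact procedure or their ANN.

```python
# Sketch of the quantile-style re-sampling idea described above: build an
# empirical distribution of past prediction errors and use its symmetric
# quantiles as a confidence band around a new forecast.  The ANN itself is
# not reproduced here; `errors` stands in for (observed - predicted) values
# collected on the training/evaluation sets.
import numpy as np

def error_confidence_interval(errors, forecast, level=0.95):
    """Return (lower, upper) bounds for `forecast` from the empirical
    distribution of past errors, keeping the interval symmetric in probability."""
    errors = np.sort(np.asarray(errors))
    alpha = 1.0 - level
    lo = np.quantile(errors, alpha / 2.0)          # reject extreme negative errors
    hi = np.quantile(errors, 1.0 - alpha / 2.0)    # reject extreme positive errors
    return forecast + lo, forecast + hi

# Hypothetical flow-forecast errors (m^3/s) and a next-day forecast
past_errors = np.random.default_rng(0).normal(0.0, 4.0, size=200)
print(error_confidence_interval(past_errors, forecast=55.0, level=0.95))
```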

  1. Error of the modelled peak flow of the hydraulically reconstructed 1907 flood of the Ebro River in Xerta (NE Iberian Peninsula)

    NASA Astrophysics Data System (ADS)

    Lluís Ruiz-Bellet, Josep; Castelltort, Xavier; Carles Balasch, J.; Tuset, Jordi

    2016-04-01

    The estimation of the uncertainty of hydraulic modelling results has been analysed in depth, but no clear methodological procedures for its determination have been formulated for historical hydrology. The main objective of this study was to calculate the uncertainty of the resulting peak flow of a typical historical flood reconstruction. The secondary objective was to identify the input variables that influenced the result the most and their contribution to peak flow total error. The uncertainty of the 21-23 October 1907 flood of the Ebro River (NE Iberian Peninsula) at the town of Xerta (drainage area of 83,000 km2) was calculated with a series of local sensitivity analyses of the main variables affecting the resulting peak flow. In addition, to see to what degree the result depended on the chosen model, the HEC-RAS peak flow was compared with the ones obtained with the 2D model Iber and with Manning's equation. The peak flow of the 1907 flood in the Ebro River in Xerta, reconstructed with HEC-RAS, was 11500 m3·s-1 and its total error was ±31%. The input variable with the most influence on the HEC-RAS peak flow was water height; however, the one that contributed the most to peak flow error was Manning's n, because its uncertainty was far greater than water height's. The main conclusion is that, to ensure the lowest peak flow error, the reliability and precision of the flood mark should be thoroughly assessed. The peak flow was 12000 m3·s-1 when calculated with the 2D model Iber and 11500 m3·s-1 when calculated with the Manning equation.
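    The kind of one-at-a-time local sensitivity analysis described above can be sketched with a simple stage-discharge relation. The example below uses Manning's equation for a wide rectangular section as a stand-in for the hydraulic model (the study itself used HEC-RAS), with invented channel dimensions and assumed input uncertainties, and combines the individual relative errors in quadrature.

```python
# Hedged illustration of a one-at-a-time (local) sensitivity analysis of a
# Manning-equation peak flow; all numbers below are invented for a wide
# rectangular section and do not reproduce the study's HEC-RAS results.
import math

def manning_q(n, width, depth, slope):
    area = width * depth
    radius = area / (width + 2.0 * depth)        # hydraulic radius
    return (1.0 / n) * area * radius ** (2.0 / 3.0) * math.sqrt(slope)

base = dict(n=0.035, width=180.0, depth=10.0, slope=0.0009)
q0 = manning_q(**base)

# Assumed relative uncertainty of each input (fractions, illustrative only)
uncert = dict(n=0.25, width=0.02, depth=0.05, slope=0.10)

contributions = {}
for var, du in uncert.items():
    hi = dict(base); hi[var] *= (1.0 + du)
    lo = dict(base); lo[var] *= (1.0 - du)
    contributions[var] = (manning_q(**hi) - manning_q(**lo)) / (2.0 * q0)

total = math.sqrt(sum(c ** 2 for c in contributions.values()))  # quadrature sum
print(q0, contributions, f"total relative error ~ {total:.0%}")
```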

  2. Numerical modeling of the divided bar measurements

    NASA Astrophysics Data System (ADS)

    LEE, Y.; Keehm, Y.

    2011-12-01

    The divided-bar technique has been used to measure thermal conductivity of rocks and fragments in heat flow studies. Though widely used, divided-bar measurements can have errors, which have not yet been systematically quantified. We used a finite-element method (FEM) and performed a series of numerical studies to evaluate various errors in divided-bar measurements and to suggest more reliable measurement techniques. A divided-bar measurement should be corrected for lateral heat loss on the sides of the rock sample and for the thermal resistance at the contacts between the rock sample and the bar. We first investigated how the amount of these corrections would change with the thickness and thermal conductivity of rock samples through numerical modeling. When we fixed the sample thickness at 10 mm and varied thermal conductivity, errors in the measured thermal conductivity ranged from 2.02% for 1.0 W/m/K to 7.95% for 4.0 W/m/K. When we fixed the thermal conductivity at 1.38 W/m/K and varied the sample thickness, we found that the error ranged from 2.03% for the 30 mm-thick sample to 11.43% for the 5 mm-thick sample. After corrections, a variety of error analyses for divided-bar measurements were conducted numerically. The thermal conductivity of the two thin standard disks (2 mm in thickness) located at the top and the bottom of the rock sample slightly affects the accuracy of thermal conductivity measurements. When the thermal conductivity of a sample is 3.0 W/m/K and that of the two standard disks is 0.2 W/m/K, the relative error in measured thermal conductivity is very small (~0.01%). However, the relative error would reach up to -2.29% for the same sample when the thermal conductivity of the two disks is 0.5 W/m/K. The accuracy of thermal conductivity measurements strongly depends on the thermal conductivity and thickness of the thermal compound that is applied to reduce thermal resistance at contacts between the rock sample and the bar. When the thickness of the thermal compound (0.29 W/m/K) is 0.03 mm, we found that the relative error in measured thermal conductivity is 4.01%, while the relative error can be very significant (~12.2%) if the thickness increases to 0.1 mm. Then, we fixed the thickness (0.03 mm) and varied the thermal conductivity of the thermal compound. We found that the relative error with a 1.0 W/m/K compound is 1.28%, and the relative error with a 0.29 W/m/K compound is 4.06%. When we repeated this test with a different thickness of the thermal compound (0.1 mm), the relative error with a 1.0 W/m/K compound is 3.93%, and that with a 0.29 W/m/K compound is 12.2%. In addition, the cell technique by Sass et al. (1971), which is widely used to measure thermal conductivity of rock fragments, was evaluated using FEM modeling. A total of 483 isotropic and homogeneous spherical rock fragments in the sample holder were used to test numerically the accuracy of the cell technique. The results show a relative error of -9.61% for rock fragments with a thermal conductivity of 2.5 W/m/K. In conclusion, we report quantified errors in the divided-bar and cell techniques for thermal conductivity measurements of rocks and fragments. We found that FEM modeling can accurately mimic these measurement techniques and can help us to estimate measurement errors quantitatively.
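    The contact-resistance correction discussed above can be pictured as a simple series thermal-resistance problem. The sketch below is not the authors' FEM model; it only shows, with assumed layer thicknesses and conductivities, how thin layers of thermal compound bias the apparent sample conductivity low.

```python
# Simple series-resistance view of the contact-compound error discussed above
# (a hedged sketch, not the authors' FEM model): the measured stack is the sample
# plus two thin layers of thermal compound, so the apparent conductivity is biased.
def apparent_conductivity(k_sample, t_sample, k_compound, t_compound, n_layers=2):
    """Apparent conductivity of the stack when all thickness is attributed to the sample."""
    r_total = t_sample / k_sample + n_layers * t_compound / k_compound
    return t_sample / r_total

k_true = 3.0          # W/m/K, assumed sample conductivity
t_sample = 0.010      # m (10 mm sample)
for t_comp in (0.03e-3, 0.1e-3):          # 0.03 mm and 0.1 mm compound layers
    k_app = apparent_conductivity(k_true, t_sample, k_compound=0.29, t_compound=t_comp)
    err = (k_app - k_true) / k_true
    print(f"compound {t_comp * 1e3:.2f} mm: apparent k = {k_app:.2f} W/m/K ({err:+.1%})")
```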

  3. Simulating water and nitrogen loss from an irrigated paddy field under continuously flooded condition with Hydrus-1D model.

    PubMed

    Yang, Rui; Tong, Juxiu; Hu, Bill X; Li, Jiayun; Wei, Wenshuo

    2017-06-01

    Agricultural non-point source pollution is a major factor in surface water and groundwater pollution, especially for nitrogen (N) pollution. In this paper, an experiment was conducted in a direct-seeded paddy field under traditional continuously flooded irrigation (CFI). The water movement and N transport and transformation were simulated via the Hydrus-1D model, and the model was calibrated using field measurements. The model had a total water balance error of 0.236 cm and a relative error (error/input total water) of 0.23%. For the solute transport model, the N balance error and relative error (error/input total N) were 0.36 kg ha -1 and 0.40%, respectively. The study results indicate that the plow pan plays a crucial role in vertical water movement in paddy fields. Water flow was mainly lost through surface runoff and underground drainage, with proportions to total input water of 32.33 and 42.58%, respectively. The water productivity in the study was 0.36 kg m -3 . The simulated N concentration results revealed that ammonia was the main form in rice uptake (95% of total N uptake), and its concentration was much larger than for nitrate under CFI. Denitrification and volatilization were the main losses, with proportions to total consumption of 23.18 and 14.49%, respectively. Leaching (10.28%) and surface runoff loss (2.05%) were the main losses of N pushed out of the system by water. Hydrus-1D simulation was an effective method to predict water flow and N concentrations in the three different forms. The study provides results that could be used to guide water and fertilization management and field results for numerical studies of water flow and N transport and transformation in the future.

  4. Determination of fractional flow reserve (FFR) based on scaling laws: a simulation study

    NASA Astrophysics Data System (ADS)

    Wong, Jerry T.; Molloi, Sabee

    2008-07-01

    Fractional flow reserve (FFR) provides an objective physiological evaluation of stenosis severity. A technique that can measure FFR using only angiographic images would be a valuable tool in the cardiac catheterization laboratory. To perform this, the diseased blood flow can be measured with a first pass distribution analysis and the theoretical normal blood flow can be estimated from the total coronary arterial volume based on scaling laws. A computer simulation of the coronary arterial network was used to gain a better understanding of how hemodynamic conditions and coronary artery disease can affect blood flow, arterial volume and FFR estimation. Changes in coronary arterial flow and volume due to coronary stenosis, aortic pressure and venous pressure were examined to evaluate the potential use of flow and volume for FFR determination. This study showed that FFR can be estimated using arterial volume and a scaling coefficient corrected for aortic pressure. However, variations in venous pressure were found to introduce some error in FFR estimation. A relative form of FFR was introduced and was found to cancel out the influence of pressure on coronary flow, arterial volume and FFR estimation. The use of coronary flow and arterial volume for FFR determination appears promising.
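    The flow-based estimate described above can be written in one line: FFR is approximated as the measured (diseased) flow divided by a normal-flow estimate obtained from total arterial volume through a scaling law. In the sketch below, both the scaling coefficient and the 3/4-power exponent are assumptions made for illustration, not values taken from the paper.

```python
# Hedged sketch of estimating FFR from measured (diseased) flow and a normal-flow
# estimate derived from total coronary arterial volume via an assumed scaling law.
def ffr_from_volume(q_measured, arterial_volume, k=1.0, exponent=0.75):
    """FFR estimate; k and exponent are hypothetical placeholders."""
    q_normal = k * arterial_volume ** exponent    # hypothetical normal flow
    return q_measured / q_normal

# Hypothetical inputs (consistent arbitrary units for flow and volume)
print(ffr_from_volume(q_measured=1.8, arterial_volume=3.0, k=1.1))
```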

  5. Multiple Velocity Profile Measurements in Hypersonic Flows using Sequentially-Imaged Fluorescence Tagging

    NASA Technical Reports Server (NTRS)

    Bathel, Brett F.; Danehy, Paul M.; Inman, Jennifer A.; Jones, Stephen B.; Ivey, Christopher B.; Goyne, Christopher P.

    2010-01-01

    Nitric-oxide planar laser-induced fluorescence (NO PLIF) was used to perform velocity measurements in hypersonic flows by generating multiple tagged lines which fluoresce as they convect downstream. For each laser pulse, a single interline, progressive scan intensified CCD camera was used to obtain separate images of the initial undelayed and delayed NO molecules that had been tagged by the laser. The CCD configuration allowed for sub-microsecond acquisition of both images, resulting in sub-microsecond temporal resolution as well as sub-mm spatial resolution (0.5-mm x 0.7-mm). Axial velocity was determined by applying a cross-correlation analysis to the horizontal shift of individual tagged lines. The systematic errors, the contribution of gating/exposure-duration errors, and the influence of the collision rate on fluorescence were quantified with respect to the temporal uncertainty. Quantification of the spatial uncertainty depended upon the analysis technique and the signal-to-noise ratio of the acquired profiles. This investigation focused on two hypersonic flow experiments: (1) a reaction control system (RCS) jet on an Orion Crew Exploration Vehicle (CEV) wind tunnel model and (2) a 10-degree half-angle wedge containing a 2-mm tall, 4-mm wide cylindrical boundary layer trip. The experiments were performed in the NASA Langley Research Center's 31-inch Mach 10 wind tunnel.
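    The cross-correlation step can be illustrated with a one-dimensional toy example: shift-finding between an undelayed and a delayed tagged-line profile, converted to velocity with the pixel scale and interframe delay. All profile data and scale factors below are invented, and the sketch is not the analysis code used in the study.

```python
# Illustrative cross-correlation velocimetry sketch in the spirit of the tagged-line
# analysis described above: find the pixel shift that maximizes the correlation
# between an undelayed and a delayed line profile, then convert it to velocity.
import numpy as np

def axial_velocity(profile_t0, profile_t1, pixel_size_m, delay_s):
    """Estimate velocity from the shift between two 1-D intensity profiles."""
    a = profile_t0 - profile_t0.mean()
    b = profile_t1 - profile_t1.mean()
    corr = np.correlate(b, a, mode="full")
    shift_px = np.argmax(corr) - (len(a) - 1)     # signed pixel displacement
    return shift_px * pixel_size_m / delay_s

x = np.arange(200)
line0 = np.exp(-0.5 * ((x - 80) / 3.0) ** 2)       # tagged line at t0 (synthetic)
line1 = np.exp(-0.5 * ((x - 95) / 3.0) ** 2)       # same line convected 15 px
print(axial_velocity(line0, line1, pixel_size_m=50e-6, delay_s=1e-6))  # ~750 m/s
```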

  6. A Fast Surrogate-facilitated Data-driven Bayesian Approach to Uncertainty Quantification of a Regional Groundwater Flow Model with Structural Error

    NASA Astrophysics Data System (ADS)

    Xu, T.; Valocchi, A. J.; Ye, M.; Liang, F.

    2016-12-01

    Due to simplification and/or misrepresentation of the real aquifer system, numerical groundwater flow and solute transport models are usually subject to model structural error. During model calibration, the hydrogeological parameters may be overly adjusted to compensate for unknown structural error. This may result in biased predictions when models are used to forecast aquifer response to new forcing. In this study, we extend a fully Bayesian method [Xu and Valocchi, 2015] to calibrate a real-world, regional groundwater flow model. The method uses a data-driven error model to describe model structural error and jointly infers model parameters and structural error. Here, Bayesian inference is facilitated using high-performance computing and fast surrogate models. The surrogate models are constructed using machine learning techniques to emulate the response simulated by the computationally expensive groundwater model. We demonstrate in the real-world case study that explicitly accounting for model structural error yields parameter posterior distributions that are substantially different from those derived by the classical Bayesian calibration that does not account for model structural error. In addition, the Bayesian method with an error model gives significantly more accurate predictions along with reasonable credible intervals.

  7. The Surface Water and Ocean Topography Satellite Mission - An Assessment of Swath Altimetry Measurements of River Hydrodynamics

    NASA Technical Reports Server (NTRS)

    Wilson, Matthew D.; Durand, Michael; Alsdorf, Douglas; Chul-Jung, Hahn; Andreadis, Konstantinos M.; Lee, Hyongki

    2012-01-01

    The Surface Water and Ocean Topography (SWOT) satellite mission, scheduled for launch in 2020 with development commencing in 2015, will provide a step-change improvement in the measurement of terrestrial surface water storage and dynamics. In particular, it will provide the first routine two-dimensional measurements of water surface elevations, which will allow for the estimation of river and floodplain flows via the water surface slope. In this paper, we characterize the measurements which may be obtained from SWOT and illustrate how they may be used to derive estimates of river discharge. In particular, we show (i) the spatio-temporal sampling scheme of SWOT, (ii) the errors which may be expected in swath altimetry measurements of the terrestrial surface water, and (iii) the impacts such errors may have on estimates of water surface slope and river discharge. We illustrate this through a "virtual mission" study for an approximately 300 km reach of the central Amazon River, using a hydraulic model to provide water surface elevations according to the SWOT spatio-temporal sampling scheme (orbit with 78 degree inclination, 22 day repeat and 140 km swath width) to which errors were added based on a two-dimensional height error spectrum derived from the SWOT design requirements. Water surface elevation measurements for the Amazon mainstem as may be observed by SWOT were thereby obtained. Using these measurements, estimates of river slope and discharge were derived and compared to those which may be obtained without error, and those obtained directly from the hydraulic model. It was found that discharge can be reproduced highly accurately from the water height alone, without knowledge of the detailed channel bathymetry, using a modified Manning's equation if friction, depth, width, and slope are known. Increasing reach length was found to be an effective method to reduce systematic height error in SWOT measurements.
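    One finding above, that lengthening the reach reduces the effect of height error on the estimated slope, can be sketched with a least-squares slope fit to noisy synthetic water-surface elevations. The spacing, noise level, and true slope below are assumed values for illustration only, not SWOT design numbers.

```python
# Sketch of the reach-length effect noted above: fit a water-surface slope to
# noisy height samples and watch the slope error shrink as the reach grows.
import numpy as np

rng = np.random.default_rng(1)
true_slope = 5e-5            # m/m, assumed
spacing = 200.0              # m between height samples along the reach, assumed
sigma_h = 0.10               # m, assumed per-sample height error

for reach_km in (10, 50, 100, 300):
    n = int(reach_km * 1000 / spacing)
    x = np.arange(n) * spacing
    h = -true_slope * x + rng.normal(0.0, sigma_h, size=n)   # downstream drop + noise
    slope_fit = -np.polyfit(x, h, 1)[0]                      # least-squares slope
    rel_err = abs(slope_fit - true_slope) / true_slope
    print(f"{reach_km:4d} km reach: fitted slope {slope_fit:.2e}, error {rel_err:.1%}")
```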

  8. Stratospheric wind errors, initial states and forecast skill in the GLAS general circulation model

    NASA Technical Reports Server (NTRS)

    Tenenbaum, J.

    1983-01-01

    Relations between stratospheric wind errors, initial states and 500 mb skill are investigated using the GLAS general circulation model initialized with FGGE data. Erroneous stratospheric winds are seen in all current general circulation models, appearing also as weak shear above the subtropical jet and as cold polar stratospheres. In this study it is shown that the more anticyclonic large-scale flows are correlated with large forecast stratospheric winds. In addition, it is found that for North America the resulting errors are correlated with initial state jet stream accelerations while for East Asia the forecast winds are correlated with initial state jet strength. Using 500 mb skill scores over Europe at day 5 to measure forecast performance, it is found that both poor forecast skill and excessive stratospheric winds are correlated with more anticyclonic large-scale flows over North America. It is hypothesized that the resulting erroneous kinetic energy contributes to the poor forecast skill, and that the problem is caused by a failure in the modeling of the stratospheric energy cycle in current general circulation models independent of vertical resolution.

  9. Data Assimilation in a Solar Dynamo Model Using Ensemble Kalman Filters: Sensitivity and Robustness in Reconstruction of Meridional Flow Speed

    NASA Astrophysics Data System (ADS)

    Dikpati, Mausumi; Anderson, Jeffrey L.; Mitra, Dhrubaditya

    2016-09-01

    We implement an Ensemble Kalman Filter procedure using the Data Assimilation Research Testbed for assimilating “synthetic” meridional flow-speed data in a Babcock-Leighton-type flux-transport solar dynamo model. By performing several “observing system simulation experiments,” we reconstruct time variation in meridional flow speed and analyze sensitivity and robustness of reconstruction. Using 192 ensemble members and 10 observations, each with 4% error, we find that flow speed is reconstructed best if observations of near-surface poloidal fields from low latitudes and tachocline toroidal fields from midlatitudes are assimilated. If observations include a mixture of poloidal and toroidal fields from different latitude locations, reconstruction is reasonably good for ≤40% error in low-latitude data, even if observational error in polar region data becomes 200%, but deteriorates when observational error increases in low- and midlatitude data. Solar polar region observations are known to contain larger errors than those in low latitudes; our forward operator (a flux-transport dynamo model here) can sustain larger errors in polar region data, but is more sensitive to errors in low-latitude data. An optimal reconstruction is obtained if an assimilation interval of 15 days is used; 10- and 20-day assimilation intervals also give reasonably good results. Assimilation intervals shorter than 5 days do not produce faithful reconstructions of flow speed, because the system requires a minimum time to develop dynamics to respond to flow variations. Reconstruction also deteriorates if an assimilation interval longer than 45 days is used, because the system’s inherent memory interferes with its short-term dynamics during a substantially long run without updating.
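    For readers unfamiliar with the assimilation machinery, the sketch below is a minimal stochastic ensemble Kalman filter analysis step with toy dimensions and random data. It is not the DART configuration or the dynamo forward operator used in the study; it only shows the update that blends an ensemble forecast with perturbed observations.

```python
# Minimal stochastic ensemble Kalman filter analysis step (toy sizes and data),
# shown only to make the assimilation update concrete.
import numpy as np

def enkf_update(ensemble, obs, obs_err_std, H):
    """ensemble: (n_state, n_members); obs: (n_obs,); H: (n_obs, n_state)."""
    rng = np.random.default_rng(42)
    n_obs, n_members = len(obs), ensemble.shape[1]
    X = ensemble - ensemble.mean(axis=1, keepdims=True)       # state anomalies
    HX = H @ ensemble
    HXp = HX - HX.mean(axis=1, keepdims=True)                 # obs-space anomalies
    P_hh = HXp @ HXp.T / (n_members - 1) + np.diag(np.full(n_obs, obs_err_std ** 2))
    P_xh = X @ HXp.T / (n_members - 1)
    K = P_xh @ np.linalg.inv(P_hh)                            # Kalman gain
    perturbed = obs[:, None] + rng.normal(0.0, obs_err_std, size=(n_obs, n_members))
    return ensemble + K @ (perturbed - HX)

# Toy example: 6 state variables, 192 members, 3 observed components
state = np.random.default_rng(0).normal(size=(6, 192))
H = np.zeros((3, 6)); H[0, 0] = H[1, 2] = H[2, 4] = 1.0
print(enkf_update(state, obs=np.array([0.5, -0.2, 1.0]), obs_err_std=0.04, H=H).shape)
```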

  10. The Use of a Laser Doppler Velocimeter in a Standard Flammability Tube

    NASA Technical Reports Server (NTRS)

    Strehlow, R. A.; Flynn, E. M.

    1985-01-01

    The use of the laser Doppler velocimeter (LDV) to measure the flow associated with the passage of a flame through a standard flammability limit tube (SFLT) was studied. Four major results are presented: (1) it is shown that standard ray-tracing calculations can predict the displacement of the LDV measurement volume and the fringe rotation to within the experimental error of measurement; (2) the flow velocity vector field associated with the passage of an upward-propagating flame in an SFLT is determined; (3) it is determined that the use of a light interruption technique to track particles is not feasible; and (4) it is shown that a 25 mW laser is adequate for LDV measurements in the Shuttle or Spacelab.

  11. Corrigendum to "A semi-empirical airfoil stall noise model based on surface pressure measurements" [J. Sound Vib. 387 (2017) 127-162]

    NASA Astrophysics Data System (ADS)

    Bertagnolio, Franck; Madsen, Helge Aa.; Fischer, Andreas; Bak, Christian

    2018-06-01

    In the above-mentioned paper, two model formulae were tuned to fit experimental data of surface pressure spectra measured in various wind tunnels. They correspond to high and low Reynolds number flow scalings, respectively. It turns out that there exist typographical errors in both formulae numbered (9) and (10) in the original paper. There, these formulae read:

  12. Development and evaluation of virtual refrigerant mass flow sensors for fault detection and diagnostics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kim, Woohyun; Braun, J.

    Refrigerant mass flow rate is an important measurement for monitoring equipment performance and enabling fault detection and diagnostics. However, a traditional mass flow meter is expensive to purchase and install. A virtual refrigerant mass flow sensor (VRMF) uses a mathematical model to estimate flow rate using low-cost measurements and can potentially be implemented at low cost. This study evaluates three VRMFs for estimating refrigerant mass flow rate. The first model uses a compressor map that relates refrigerant flow rate to measurements of inlet and outlet pressure and inlet temperature. The second model uses an energy-balance method on the compressor that uses a compressor map for power consumption, which is relatively independent of compressor faults that influence mass flow rate. The third model is developed using an empirical correlation for an electronic expansion valve (EEV) based on an orifice equation. The three VRMFs are shown to work well in estimating refrigerant mass flow rate for various systems under fault-free conditions with less than 5% RMS error. Each of the three mass flow rate estimates can be utilized to diagnose and track the following faults: 1) loss of compressor performance, 2) fouled condenser or evaporator filter, 3) faulty expansion device, respectively. For example, a compressor refrigerant flow map model only provides an accurate estimation when the compressor operates normally. When a compressor is not delivering the expected flow due to a leaky suction or discharge valve or other internal fault, the energy-balance or EEV model can provide accurate flow estimates. In this paper, the flow differences provide an indication of loss of compressor performance and can be used for fault detection and diagnostics.
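    The energy-balance idea above reduces to dividing the power delivered to the refrigerant by the enthalpy rise across the compressor. The sketch below is a hedged illustration: the heat-loss fraction and the enthalpy values are placeholders, and a real implementation would obtain enthalpies from refrigerant property routines and measured pressures and temperatures.

```python
# Sketch of an energy-balance virtual mass-flow estimate: refrigerant flow is
# inferred from compressor power and the enthalpy rise across the compressor.
# The enthalpy values and heat-loss fraction below are assumed placeholders.
def virtual_mass_flow(power_w, h_suction_j_per_kg, h_discharge_j_per_kg,
                      heat_loss_fraction=0.1):
    """Estimate refrigerant mass flow (kg/s) from an energy balance on the compressor."""
    useful_power = power_w * (1.0 - heat_loss_fraction)   # power delivered to refrigerant
    return useful_power / (h_discharge_j_per_kg - h_suction_j_per_kg)

# Hypothetical operating point
print(virtual_mass_flow(power_w=3500.0,
                        h_suction_j_per_kg=420e3,
                        h_discharge_j_per_kg=465e3))      # ~0.07 kg/s
```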

  13. Regional cerebral blood flow measurement with intravenous (15O)water bolus and (18F)fluoromethane inhalation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Herholz, K.; Pietrzyk, U.; Wienhard, K.

    1989-09-01

    In 20 patients with ischemic cerebrovascular disease, classic migraine, or angiomas, we compared paired dynamic positron emission tomographic measurements of regional cerebral blood flow using both (15O)water and (18F)fluoromethane as tracers. Cerebral blood flow was also determined according to the autoradiographic technique with a bolus injection of (15O)water. There were reasonable overall correlations between dynamic (15O)water and (18F)fluoromethane values for cerebral blood flow (r = 0.82) and between dynamic and autoradiographic (15O)water values for cerebral blood flow (r = 0.83). We found a close correspondence between abnormal pathologic findings and visually evaluated cerebral blood flow tomograms obtained with the two tracers. On average, dynamic (15O)water cerebral blood flow was 6% lower than that measured with (18F)fluoromethane. There also was a general trend toward a greater underestimation with (15O)water in high-flow areas, particularly in hyperemic areas, probably due to incomplete first-pass extraction of (15O)water. Underestimation was not detected in low-flow areas or in the cerebellum. Absolute cerebral blood flow values were less closely correlated between tracers and techniques than cerebral blood flow patterns. The variability of the relation between absolute flow values was probably caused by confounding effects of the variation in the circulatory delay time. The autoradiographic technique was most sensitive to this type of error.

  14. Groundwater recharge in Wisconsin--Annual estimates for 1970-99 using streamflow data

    USGS Publications Warehouse

    Gebert, Warren A.; Walker, John F.; Hunt, Randall J.

    2011-01-01

    The groundwater component of streamflow is important because it is indicative of the sustained flow of a stream during dry periods, is often of better quality, and has a smaller range of temperatures than surface contributions to streamflow. All three of these characteristics are important to the health of aquatic life in a stream. If recharge to the aquifers is to be preserved or enhanced, it is important to understand the present partitioning of total streamflow into base flow and stormflow. Additionally, an estimate of groundwater recharge is important for understanding the flows within a groundwater system, information that is important for water-availability, sustainability, or other assessments. The U.S. Geological Survey operates numerous continuous-record streamflow-gaging stations (Hirsch and Norris, 2001), which can be used to provide estimates of average annual base flow. In addition to these continuous-record sites, Gebert and others (2007) showed that having a few streamflow measurements in a basin can appreciably reduce the error in a base-flow estimate for that basin. Therefore, in addition to the continuous-record gaging stations, a substantial number of low-flow partial-record sites (6 to 15 discharge measurements) and miscellaneous-measurement sites (1 to 3 discharge measurements) that were operated during 1964-90 throughout the State were included in this work to provide additional insight into the spatial distribution of annual base flow and, in turn, groundwater recharge.

  15. Dry calibration of electromagnetic flowmeters based on numerical models combining multiple physical phenomena (multiphysics)

    NASA Astrophysics Data System (ADS)

    Fu, X.; Hu, L.; Lee, K. M.; Zou, J.; Ruan, X. D.; Yang, H. Y.

    2010-10-01

    This paper presents a method for dry calibration of an electromagnetic flowmeter (EMF). This method, which determines the voltage induced in the EMF as conductive liquid flows through a magnetic field, numerically solves a coupled set of multiphysical equations with measured boundary conditions for the magnetic, electric, and flow fields in the measuring pipe of the flowmeter. Specifically, this paper details the formulation of dry calibration and an efficient algorithm (that adaptively minimizes the number of measurements and requires only the normal component of the magnetic flux density as boundary conditions on the pipe surface to reconstruct the magnetic field involved) for computing the sensitivity of the EMF. Along with an in-depth discussion on factors that could significantly affect the final precision of a dry-calibrated EMF, the effects of flow disturbance on measuring errors have been experimentally studied by installing a baffle at the inflow port of the EMF. Results of the dry calibration on an actual EMF were compared against flow-rig calibration; excellent agreement (within 0.3%) between dry calibration and flow-rig tests verifies the multiphysical computation of the fields and the robustness of the method. Because it requires no actual flow, dry calibration is particularly useful for large-diameter EMFs, for which conventional flow-rig methods are often costly and difficult to implement.

  16. Sonic flow distortion experiment

    NASA Astrophysics Data System (ADS)

    Peters, Gerhard; Kirtzel, Hans-Jürgen; Radke, Jürgen

    2017-04-01

    We will present results from a field experiment with multiple sonic anemometers, and will address the question of residual errors when wind-tunnel-based calibrations are transferred to atmospheric measurements. Ultrasonic anemometers have become standard components of high-quality in-situ instrumentation because of their long-term calibration stability, fast response, wide dynamic range, and various options for built-in quality control. The downside of this technology is that the sound transducers and the supporting structure are obstacles in the flow, causing systematic deviations of the measured flow from the free flow. Usually, the correction schemes are based on wind tunnel observations of the sonic response as a function of angle of attack under stationary conditions. Since the natural atmospheric flow shows turbulence intensities and scales that cannot be mimicked in a wind tunnel, it is suspected that wind-tunnel-based corrections may not be (fully) applicable to field data. The widespread use of sonic anemometers in eddy-flux instrumentation, for example within EuroFlux, AmeriFlux, or other international observation programs, has therefore prompted a still controversial discussion of the significance of residual flow errors. In an attempt to quantify the flow distortion under free-field conditions, 12 identical 3-component sonics with 120 degree head symmetry were operated at the north margin of an abandoned airfield. The sonics were installed in a straight line in the west-east direction at 2.6 m height, with a mutual distance of 3 meters and with an azimuth increment of the individual sonics of 11 degrees. Synchronous raw data were recorded with a 20 Hz sample rate. About 12 hours of data with southerly winds (from the relatively flat airfield) were analyzed. Statistical homogeneity of the wind field along the instrument line was assumed, but a variable finite turbulent decay constant was accounted for, which was estimated from the data. The free-field flow distortion estimates will be discussed in comparison with wind tunnel observations.

  17. Report of secondary flows, boundary layers, turbulence and wave team, report 1

    NASA Technical Reports Server (NTRS)

    Scoggins, J. R.; Fitzjarrald, D.; Doviak, R.; Cliff, W.

    1980-01-01

    General criteria for a flight test option are that: (1) there be a good opportunity for comparison with other measurement techniques; (2) the flow to be measured is of considerable scientific or practical interest; and (3) the airborne laser Doppler system is well suited to measure the required quantities. The requirement for comparison, i.e., ground truth, is particularly important because this is the first year of operation for the system. It is necessary to demonstrate that the system does actually measure the winds and compare the results with other methods to provide a check on the system error analysis. The uniqueness of the laser Doppler system precludes any direct comparison, but point measurements from tower mounted wind sensors and two dimensional fields obtained from radars with substantially different sampling volumes are quite useful.

  18. Estimation of genetic connectedness diagnostics based on prediction errors without the prediction error variance-covariance matrix.

    PubMed

    Holmes, John B; Dodds, Ken G; Lee, Michael A

    2017-03-02

    An important issue in genetic evaluation is the comparability of random effects (breeding values), particularly between pairs of animals in different contemporary groups. This is usually referred to as genetic connectedness. While various measures of connectedness have been proposed in the literature, there is general agreement that the most appropriate measure is some function of the prediction error variance-covariance matrix. However, obtaining the prediction error variance-covariance matrix is computationally demanding for large-scale genetic evaluations. Many alternative statistics have been proposed that avoid the computational cost of obtaining the prediction error variance-covariance matrix, such as counts of genetic links between contemporary groups, gene flow matrices, and functions of the variance-covariance matrix of estimated contemporary group fixed effects. In this paper, we show that a correction to the variance-covariance matrix of estimated contemporary group fixed effects will produce the exact prediction error variance-covariance matrix averaged by contemporary group for univariate models in the presence of single or multiple fixed effects and one random effect. We demonstrate the correction for a series of models and show that approximations to the prediction error matrix based solely on the variance-covariance matrix of estimated contemporary group fixed effects are inappropriate in certain circumstances. Our method allows for the calculation of a connectedness measure based on the prediction error variance-covariance matrix by calculating only the variance-covariance matrix of estimated fixed effects. Since the number of fixed effects in genetic evaluation is usually orders of magnitude smaller than the number of random effect levels, the computational requirements for our method should be reduced.

  19. Quantitative Functional Imaging Using Dynamic Positron Computed Tomography and Rapid Parameter Estimation Techniques

    NASA Astrophysics Data System (ADS)

    Koeppe, Robert Allen

    Positron computed tomography (PCT) is a diagnostic imaging technique that provides both three dimensional imaging capability and quantitative measurements of local tissue radioactivity concentrations in vivo. This allows the development of non-invasive methods that employ the principles of tracer kinetics for determining physiological properties such as mass specific blood flow, tissue pH, and rates of substrate transport or utilization. A physiologically based, two-compartment tracer kinetic model was derived to mathematically describe the exchange of a radioindicator between blood and tissue. The model was adapted for use with dynamic sequences of data acquired with a positron tomograph. Rapid estimation techniques were implemented to produce functional images of the model parameters by analyzing each individual pixel sequence of the image data. A detailed analysis of the performance characteristics of three different parameter estimation schemes was performed. The analysis included examination of errors caused by statistical uncertainties in the measured data, errors in the timing of the data, and errors caused by violation of various assumptions of the tracer kinetic model. Two specific radioindicators were investigated. (18F)fluoromethane, an inert freely diffusible gas, was used for local quantitative determinations of both cerebral blood flow and tissue:blood partition coefficient. A method was developed that did not require direct sampling of arterial blood for the absolute scaling of flow values. The arterial input concentration time course was obtained by assuming that the alveolar or end-tidal expired breath radioactivity concentration is proportional to the arterial blood concentration. The scale of the input function was obtained from a series of venous blood concentration measurements. The method of absolute scaling using venous samples was validated in four studies, performed on normal volunteers, in which directly measured arterial concentrations were compared to those predicted from the expired air and venous blood samples. The glucose analog (18F)-3-deoxy-3-fluoro-D-glucose (3-FDG) was used for quantitating the membrane transport rate of glucose. The measured data indicated that the phosphorylation rate of 3-FDG was low enough to allow accurate estimation of the transport rate using a two compartment model.

  20. Comparison of acoustic travel-time measurements of solar meridional circulation from SDO/HMI and SOHO/MDI

    NASA Astrophysics Data System (ADS)

    Liang, Zhi-Chao; Birch, Aaron C.; Duvall, Thomas L., Jr.; Gizon, Laurent; Schou, Jesper

    2017-05-01

    Context. Time-distance helioseismology is one of the primary tools for studying the solar meridional circulation, especially in the lower convection zone. However, travel-time measurements of the subsurface meridional flow suffer from a variety of systematic errors, such as a center-to-limb variation and an offset due to the position angle (P-angle) uncertainty of solar images. It has been suggested that the center-to-limb variation can be removed by subtracting east-west from south-north travel-time measurements. This ad hoc method for the removal of the center-to-limb effect has been adopted widely but not tested for travel distances corresponding to the lower convection zone. Aims: We explore the effects of two major sources of the systematic errors, the P-angle error arising from the instrumental misalignment and the center-to-limb variation, on the acoustic travel-time measurements in the south-north direction. Methods: We apply the time-distance technique to contemporaneous medium-degree Dopplergrams produced by SOHO/MDI and SDO/HMI to obtain the travel-time difference caused by meridional circulation throughout the solar convection zone. The P-angle offset in MDI images is measured by cross-correlating MDI and HMI images. The travel-time measurements in the south-north and east-west directions are averaged over the same observation period (May 2010 to Apr. 2011) for the two data sets and then compared to examine the consistency of MDI and HMI travel times after applying the above-mentioned corrections. Results: The offsets in the south-north travel-time difference from MDI data induced by the P-angle error gradually diminish with increasing travel distance. However, these offsets become noisy for travel distances corresponding to waves that reach the base of the convection zone. This suggests that a careful treatment of the P-angle problem is required when studying a deep meridional flow. After correcting the P-angle and the removal of the center-to-limb effect, the travel-time measurements from MDI and HMI are consistent within the error bars for meridional circulation covering the entire convection zone. The fluctuations observed in both data sets are highly correlated and thus indicate their solar origin rather than an instrumental origin. Although our results demonstrate that the ad hoc correction is capable of reducing the wide discrepancy in the travel-time measurements from MDI and HMI, we cannot exclude the possibility that there exist other systematic effects acting on the two data sets in the same way.

  1. Link Performance Analysis and monitoring - A unified approach to divergent requirements

    NASA Astrophysics Data System (ADS)

    Thom, G. A.

    Link Performance Analysis and real-time monitoring are generally covered by a wide range of equipment. Bit Error Rate testers provide digital link performance measurements but are not useful during real-time data flows. Real-time performance monitors utilize the fixed overhead content but vary widely from format to format. Link quality information is also available from signal reconstruction equipment in the form of receiver AGC, bit synchronizer AGC, and bit synchronizer soft decision level outputs, but no general approach to utilizing this information exists. This paper presents an approach to link tests, real-time data quality monitoring, and results presentation that utilizes a set of general purpose modules in a flexible architectural environment. The system operates over a wide range of bit rates (up to 150 Mb/s) and employs several measurement techniques, including P/N code errors or fixed PCM format errors, real-time BER derived from frame-sync errors, and Data Quality Analysis derived by counting significant sync status changes. The architecture performs with a minimum of elements in place to permit a phased update of the user's unit in accordance with the user's needs.

  2. Uncertainty of the peak flow reconstruction of the 1907 flood in the Ebro River in Xerta (NE Iberian Peninsula)

    NASA Astrophysics Data System (ADS)

    Ruiz-Bellet, Josep Lluís; Castelltort, Xavier; Balasch, J. Carles; Tuset, Jordi

    2017-02-01

    There is no clear, unified and accepted method to estimate the uncertainty of hydraulic modelling results. In historical flood reconstruction, due to the lower precision of input data, the magnitude of this uncertainty could reach a high value. With the objectives of giving an estimate of the peak flow error of a typical historical flood reconstruction with the model HEC-RAS and of providing a quick, simple uncertainty assessment that an end user could easily apply, the uncertainty of the reconstructed peak flow of a major flood in the Ebro River (NE Iberian Peninsula) was calculated with a set of local sensitivity analyses on six input variables. The peak flow total error was estimated at ±31% and water height was found to be the most influential variable on peak flow, followed by Manning's n. However, the latter, due to its large uncertainty, was the greatest contributor to peak flow total error. In addition, the HEC-RAS peak flow was compared with the ones obtained with the 2D model Iber and with Manning's equation; all three methods gave similar peak flows. Manning's equation gave almost the same result as HEC-RAS. The main conclusion is that, to ensure the lowest peak flow error, the reliability and precision of the flood mark should be thoroughly assessed.

  3. The Analysis of Turbulent Flow by Hot Wire Signals. Ph.D. Thesis - Physikalische Ingenieurvissenschaft der Technischen Univ., 1981

    NASA Technical Reports Server (NTRS)

    Bartenwerfer, M.

    1982-01-01

    When measuring velocities in turbulent gas flows, approximate signal analyses are used with hot-wire anemometers having one- and two-wire probes. A numerical test of standard analyses shows that the resulting systematic error increases quickly with increasing turbulence intensity. Since it also depends on the turbulence structure, it cannot be corrected. The use of such probes is thus restricted to low turbulence. By means of three-wire probes (X-wire probes in two-dimensional flows), instantaneous values of velocity can in principle be determined, and an asymmetric arrangement of wires has a theoretical advantage.

  4. Application of AFINCH as a tool for evaluating the effects of streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the southeast Lake Michigan hydrologic subregion

    USGS Publications Warehouse

    Koltun, G.F.; Holtschlag, David J.

    2010-01-01

    Bootstrapping techniques employing random subsampling were used with the AFINCH (Analysis of Flows In Networks of CHannels) model to gain insights into the effects of variation in streamflow-gaging-network size and composition on the accuracy and precision of streamflow estimates at ungaged locations in the 0405 (Southeast Lake Michigan) hydrologic subregion. AFINCH uses stepwise-regression techniques to estimate monthly water yields from catchments based on geospatial-climate and land-cover data in combination with available streamflow and water-use data. Calculations are performed on a hydrologic-subregion scale for each catchment and stream reach contained in a National Hydrography Dataset Plus (NHDPlus) subregion. Water yields from contributing catchments are multiplied by catchment areas and resulting flow values are accumulated to compute streamflows in stream reaches which are referred to as flow lines. AFINCH imposes constraints on water yields to ensure that observed streamflows are conserved at gaged locations. Data from the 0405 hydrologic subregion (referred to as Southeast Lake Michigan) were used for the analyses. Daily streamflow data were measured in the subregion for 1 or more years at a total of 75 streamflow-gaging stations during the analysis period which spanned water years 1971–2003. The number of streamflow gages in operation each year during the analysis period ranged from 42 to 56 and averaged 47. Six sets (one set for each censoring level), each composed of 30 random subsets of the 75 streamflow gages, were created by censoring (removing) approximately 10, 20, 30, 40, 50, and 75 percent of the streamflow gages (the actual percentage of operating streamflow gages censored for each set varied from year to year, and within the year from subset to subset, but averaged approximately the indicated percentages). Streamflow estimates for six flow lines each were aggregated by censoring level, and results were analyzed to assess (a) how the size and composition of the streamflow-gaging network affected the average apparent errors and variability of the estimated flows and (b) whether results for certain months were more variable than for others. The six flow lines were categorized into one of three types depending upon their network topology and position relative to operating streamflow-gaging stations. Statistical analysis of the model results indicates that (1) less precise (that is, more variable) estimates resulted from smaller streamflow-gaging networks as compared to larger streamflow-gaging networks, (2) precision of AFINCH flow estimates at an ungaged flow line is improved by operation of one or more streamflow gages upstream and (or) downstream in the enclosing basin, (3) no consistent seasonal trend in estimate variability was evident, and (4) flow lines from ungaged basins appeared to exhibit the smallest absolute apparent percent errors (APEs) and smallest changes in average APE as a function of increasing censoring level. The counterintuitive results described in item (4) above likely reflect both the nature of the base-streamflow estimate from which the errors were computed and insensitivity in the average model-derived estimates to changes in the streamflow-gaging-network size and composition. Another analysis demonstrated that errors for flow lines in ungaged basins have the potential to be much larger than indicated by their APEs if measured relative to their true (but unknown) flows.
“Missing gage” analyses, based on examination of censoring subset results where the streamflow gage of interest was omitted from the calibration data set, were done to better understand the true error characteristics for ungaged flow lines as a function of network size. Results examined for 2 water years indicated that the probability of computing a monthly streamflow estimate within 10 percent of the true value with AFINCH decreased from greater than 0.9 at about a 10-percent network-censoring level to less than 0.6 as the censoring level approached 75 percent. In addition, estimates for typically dry months tended to be characterized by larger percent errors than typically wetter months.

  5. Time dependent wind fields

    NASA Technical Reports Server (NTRS)

    Chelton, D. B.

    1986-01-01

    Two tasks were performed: (1) determination of the accuracy of Seasat scatterometer, altimeter, and scanning multichannel microwave radiometer measurements of wind speed; and (2) application of Seasat altimeter measurements of sea level to study the spatial and temporal variability of geostrophic flow in the Antarctic Circumpolar Current. The results of the first task have identified systematic errors in wind speeds estimated by all three satellite sensors. However, in all cases the errors are correctable and corrected wind speeds agree between the three sensors to better than 1 m/s in 96-day, 2-degree-latitude by 6-degree-longitude averages. The second task has resulted in development of a new technique for using altimeter sea level measurements to study the temporal variability of large-scale sea level variations. Application of the technique to the Antarctic Circumpolar Current yielded new information about the ocean circulation in this region, which is poorly sampled by conventional ship-based measurements.

  6. The Significance of the Record Length in Flood Frequency Analysis

    NASA Astrophysics Data System (ADS)

    Senarath, S. U.

    2013-12-01

    Of all of the potential natural hazards, flood is the most costly in many regions of the world. For example, floods cause over a third of Europe's average annual catastrophe losses and affect about two thirds of the people impacted by natural catastrophes. Increased attention is being paid to determining flow estimates associated with pre-specified return periods so that flood-prone areas can be adequately protected against floods of particular magnitudes or return periods. Flood frequency analysis, which is conducted by using an appropriate probability density function that fits the observed annual maximum flow data, is frequently used for obtaining these flow estimates. Consequently, flood frequency analysis plays an integral role in determining the flood risk in flood prone watersheds. A long annual maximum flow record is vital for obtaining accurate estimates of discharges associated with high return period flows. However, in many areas of the world, flood frequency analysis is conducted with limited flow data or short annual maximum flow records. These inevitably lead to flow estimates that are subject to error. This is especially the case with high return period flow estimates. In this study, several statistical techniques are used to identify errors caused by short annual maximum flow records. The flow estimates used in the error analysis are obtained by fitting a log-Pearson III distribution to the flood time-series. These errors can then be used to better evaluate the return period flows in data limited streams. The study findings, therefore, have important implications for hydrologists, water resources engineers and floodplain managers.

  7. Methods for estimating streamflow at mountain fronts in southern New Mexico

    USGS Publications Warehouse

    Waltemeyer, S.D.

    1994-01-01

    The infiltration of streamflow is potential recharge to alluvial-basin aquifers at or near mountain fronts in southern New Mexico. Data for 13 streamflow-gaging stations were used to determine a relation between mean annual streamflow and basin and climatic conditions. Regression analysis was used to develop an equation that can be used to estimate mean annual streamflow on the basis of drainage area and mean annual precipitation. The average standard error of estimate for this equation is 46 percent. Regression analysis also was used to develop an equation to estimate mean annual streamflow on the basis of active-channel width. Measurements of the width of active channels were determined for 6 of the 13 gaging stations. The average standard error of estimate for this relation is 29 percent. Streamflow estimates made using a regression equation based on channel geometry are considered more reliable than estimates made from an equation based on regional relations of basin and climatic conditions. The sample size used to develop these relations was small, however, and the reported standard error of estimate may not represent that of the entire population. Active-channel-width measurements were made at 23 ungaged sites along the Rio Grande upstream from Elephant Butte Reservoir. Data for additional sites would be needed for a more comprehensive assessment of mean annual streamflow in southern New Mexico.
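
    A minimal sketch of the kind of regional regression described above follows: an ordinary least-squares fit of log10(mean annual flow) against log10(drainage area) and log10(mean annual precipitation). All station values are hypothetical, and the percent standard error conversion is an approximation rather than the report's exact procedure.

    ```python
    import numpy as np

    # Hypothetical gaged-basin data: drainage area (mi^2), mean annual precip (in), mean annual flow (ft^3/s).
    area   = np.array([12., 35., 58., 7., 90., 22., 44., 15., 63., 28., 5., 70., 39.])
    precip = np.array([14., 16., 12., 18., 11., 15., 13., 17., 12., 14., 19., 10., 16.])
    flow   = np.array([3.1, 9.8, 10.2, 2.6, 13.5, 6.0, 8.1, 4.9, 11.0, 7.2, 2.1, 10.8, 9.4])

    # Fit log10(Q) = b0 + b1*log10(A) + b2*log10(P) by ordinary least squares.
    X = np.column_stack([np.ones_like(area), np.log10(area), np.log10(precip)])
    y = np.log10(flow)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)

    resid = y - X @ beta
    se_log = np.sqrt(resid @ resid / (len(y) - X.shape[1]))              # standard error, log10 units
    se_pct = 100.0 * np.sqrt(np.exp((se_log * np.log(10)) ** 2) - 1.0)   # approximate percent equivalent

    print("coefficients (b0, b1, b2):", np.round(beta, 3))
    print(f"standard error of estimate ~ {se_pct:.0f} percent")
    ```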

  8. Stably stratified canopy flow in complex terrain

    NASA Astrophysics Data System (ADS)

    Xu, X.; Yi, C.; Kutter, E.

    2015-07-01

    Stably stratified canopy flow in complex terrain has been considered a difficult condition for measuring net ecosystem-atmosphere exchanges of carbon, water vapor, and energy. A long-standing advection error in eddy-flux measurements is caused by stably stratified canopy flow. Such a condition with strong thermal gradient and less turbulent air is also difficult for modeling. To understand the challenging atmospheric condition for eddy-flux measurements, we use the renormalized group (RNG) k-ϵ turbulence model to investigate the main characteristics of stably stratified canopy flows in complex terrain. In this two-dimensional simulation, we imposed persistent constant heat flux at ground surface and linearly increasing cooling rate in the upper-canopy layer, vertically varying dissipative force from canopy drag elements, buoyancy forcing induced from thermal stratification and the hill terrain. These strong boundary effects keep nonlinearity in the two-dimensional Navier-Stokes equations high enough to generate turbulent behavior. The fundamental characteristics of nighttime canopy flow over complex terrain measured by the small number of available multi-tower advection experiments can be reproduced by this numerical simulation, such as (1) unstable layer in the canopy and super-stable layers associated with flow decoupling in deep canopy and near the top of canopy; (2) sub-canopy drainage flow and drainage flow near the top of canopy in calm night; (3) upward momentum transfer in canopy, downward heat transfer in upper canopy and upward heat transfer in deep canopy; and (4) large buoyancy suppression and weak shear production in strong stability.

  9. Inflow-weighted pulmonary perfusion: comparison between dynamic contrast-enhanced MRI versus perfusion scintigraphy in complex pulmonary circulation

    PubMed Central

    2013-01-01

    Background Due to the different properties of the contrast agents, the lung perfusion maps as measured by 99mTc-labeled macroaggregated albumin perfusion scintigraphy (PS) are not uncommonly discrepant from those measured by dynamic contrast-enhanced MRI (DCE-MRI) using indicator-dilution analysis in complex pulmonary circulation. Since PS offers the pre-capillary perfusion of the first-pass transit, we hypothesized that an inflow-weighted perfusion model of DCE-MRI could simulate the result by PS. Methods 22 patients underwent DCE-MRI at 1.5T and also PS. Relative perfusion contributed by the left lung was calculated by PS (PSL%), by DCE-MRI using conventional indicator dilution theory for pulmonary blood volume (PBVL%) and pulmonary blood flow (PBFL%) and using our proposed inflow-weighted pulmonary blood volume (PBViwL%). For PBViwL%, the optimal upper bound of the inflow-weighted integration range was determined by correlation coefficient analysis. Results The time-to-peak of the normal lung parenchyma was the optimal upper bound in the inflow-weighted perfusion model. Using PSL% as a reference, PBVL% showed error of 49.24% to −40.37% (intraclass correlation coefficient RI = 0.55) and PBFL% had error of 34.87% to −27.76% (RI = 0.80). With the inflow-weighted model, PBViwL% had much less error of 12.28% to −11.20% (RI = 0.98) from PSL%. Conclusions The inflow-weighted DCE-MRI provides relative perfusion maps similar to that by PS. The discrepancy between conventional indicator-dilution and inflow-weighted analysis represents a mixed-flow component in which pathological flow such as shunting or collaterals might have participated. PMID:23448679

  10. Inflow-weighted pulmonary perfusion: comparison between dynamic contrast-enhanced MRI versus perfusion scintigraphy in complex pulmonary circulation.

    PubMed

    Lin, Yi-Ru; Tsai, Shang-Yueh; Huang, Teng-Yi; Chung, Hsiao-Wen; Huang, Yi-Luan; Wu, Fu-Zong; Lin, Chu-Chuan; Peng, Nan-Jing; Wu, Ming-Ting

    2013-02-28

    Due to the different properties of the contrast agents, the lung perfusion maps as measured by 99mTc-labeled macroaggregated albumin perfusion scintigraphy (PS) are not uncommonly discrepant from those measured by dynamic contrast-enhanced MRI (DCE-MRI) using indicator-dilution analysis in complex pulmonary circulation. Since PS offers the pre-capillary perfusion of the first-pass transit, we hypothesized that an inflow-weighted perfusion model of DCE-MRI could simulate the result by PS. 22 patients underwent DCE-MRI at 1.5T and also PS. Relative perfusion contributed by the left lung was calculated by PS (PSL%), by DCE-MRI using conventional indicator dilution theory for pulmonary blood volume (PBVL%) and pulmonary blood flow (PBFL%), and using our proposed inflow-weighted pulmonary blood volume (PBViwL%). For PBViwL%, the optimal upper bound of the inflow-weighted integration range was determined by correlation coefficient analysis. The time-to-peak of the normal lung parenchyma was the optimal upper bound in the inflow-weighted perfusion model. Using PSL% as a reference, PBVL% showed error of 49.24% to -40.37% (intraclass correlation coefficient RI = 0.55) and PBFL% had error of 34.87% to -27.76% (RI = 0.80). With the inflow-weighted model, PBViwL% had much less error of 12.28% to -11.20% (RI = 0.98) from PSL%. The inflow-weighted DCE-MRI provides relative perfusion maps similar to that by PS. The discrepancy between conventional indicator-dilution and inflow-weighted analysis represents a mixed-flow component in which pathological flow such as shunting or collaterals might have participated.

  11. A Simple Method for Decreasing the Liquid Junction Potential in a Flow-through-Type Differential pH Sensor Probe Consisting of pH-FETs by Exerting Spatiotemporal Control of the Liquid Junction

    PubMed Central

    Yamada, Akira; Mohri, Satoshi; Nakamura, Michihiro; Naruse, Keiji

    2015-01-01

    The liquid junction potential (LJP), the phenomenon that occurs when two electrolyte solutions of different composition come into contact, prevents accurate measurements in potentiometry. The effect of the LJP is usually remarkable in measurements of diluted solutions with low buffering capacities or low ion concentrations. Our group has constructed a simple method to eliminate the LJP by exerting spatiotemporal control of a liquid junction (LJ) formed between two solutions, a sample solution and a baseline solution (BLS), in a flow-through-type differential pH sensor probe. The method was contrived based on microfluidics. The sensor probe is a differential measurement system composed of two ion-sensitive field-effect transistors (ISFETs) and one Ag/AgCl electrode. With our new method, the border region of the sample solution and BLS is vibrated in order to mix solutions and suppress the overshoot after the sample solution is suctioned into the sensor probe. Compared to the conventional method without vibration, our method shortened the settling time from over two min to 15 s and reduced the measurement error by 86% to within 0.060 pH. This new method will be useful for improving the response characteristics and decreasing the measurement error of many apparatuses that use LJs. PMID:25835300

  12. Sharing Vital Signs between mobile phone applications.

    PubMed

    Karlen, Walter; Dumont, Guy A; Scheffer, Cornie

    2014-01-01

    We propose a communication library, ShareVitalSigns, for the standardized exchange of vital sign information between health applications running on mobile platforms. The library allows an application to request one or multiple vital signs from independent measurement applications on the Android OS. Compatible measurement applications are automatically detected and can be launched from within the requesting application, simplifying the work flow for the user and reducing typing errors. Data is shared between applications using intents, a passive data structure available on Android OS. The library is accompanied by a test application which serves as a demonstrator. The secure exchange of vital sign information using a standardized library like ShareVitalSigns will facilitate the integration of measurement applications into diagnostic and other high level health monitoring applications and reduce errors due to manual entry of information.

  13. Pulse wave propagation in a model human arterial network: Assessment of 1-D visco-elastic simulations against in vitro measurements.

    PubMed

    Alastruey, Jordi; Khir, Ashraf W; Matthys, Koen S; Segers, Patrick; Sherwin, Spencer J; Verdonck, Pascal R; Parker, Kim H; Peiró, Joaquim

    2011-08-11

    The accuracy of the nonlinear one-dimensional (1-D) equations of pressure and flow wave propagation in Voigt-type visco-elastic arteries was tested against measurements in a well-defined experimental 1:1 replica of the 37 largest conduit arteries in the human systemic circulation. The parameters required by the numerical algorithm were directly measured in the in vitro setup and no data fitting was involved. The inclusion of wall visco-elasticity in the numerical model reduced the underdamped high-frequency oscillations obtained using a purely elastic tube law, especially in peripheral vessels, which was previously reported in this paper [Matthys et al., 2007. Pulse wave propagation in a model human arterial network: Assessment of 1-D numerical simulations against in vitro measurements. J. Biomech. 40, 3476-3486]. In comparison to the purely elastic model, visco-elasticity significantly reduced the average relative root-mean-square errors between numerical and experimental waveforms over the 70 locations measured in the in vitro model: from 3.0% to 2.5% (p<0.012) for pressure and from 15.7% to 10.8% (p<0.002) for the flow rate. In the frequency domain, average relative errors between numerical and experimental amplitudes from the 5th to the 20th harmonic decreased from 0.7% to 0.5% (p<0.107) for pressure and from 7.0% to 3.3% (p<10(-6)) for the flow rate. These results provide additional support for the use of 1-D reduced modelling to accurately simulate clinically relevant problems at a reasonable computational cost. Copyright © 2011 Elsevier Ltd. All rights reserved.
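
    The error metric quoted above can be illustrated with a short sketch; the normalisation chosen here (peak of the measured waveform) is an assumption, since the paper's exact definition of the relative root-mean-square error is not reproduced in the abstract.

    ```python
    import numpy as np

    def relative_rmse(simulated, measured):
        """Relative root-mean-square error (percent) between two waveforms sampled at the
        same time points; normalised here by the peak of the measured signal."""
        simulated = np.asarray(simulated, dtype=float)
        measured = np.asarray(measured, dtype=float)
        rmse = np.sqrt(np.mean((simulated - measured) ** 2))
        return 100.0 * rmse / np.max(np.abs(measured))

    # Toy example: a measured pressure pulse and a slightly damped simulation of it.
    t = np.linspace(0.0, 1.0, 200)
    measured = 80.0 + 40.0 * np.sin(2 * np.pi * t) ** 2
    simulated = 80.0 + 38.0 * np.sin(2 * np.pi * t) ** 2
    print(f"relative RMSE ~ {relative_rmse(simulated, measured):.2f} %")
    ```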

  14. CFD research on runaway transient of pumped storage power station caused by pumping power failure

    NASA Astrophysics Data System (ADS)

    Zhang, L. G.; Zhou, D. Q.

    2013-12-01

    To study the runaway transient of a pumped-storage power station caused by pumping power failure, three-dimensional unsteady numerical simulations were executed on a geometrical model of the whole flow system. Through numerical calculation, the changing flow configuration and the variation of parameters such as unit rotational speed, flow rate, and static pressure at measurement points were obtained and compared with experimental data. Numerical results show that the runaway speed agrees well with the experimental data, with an error of 3.7%. The unit passes through pump, brake, turbine, and runaway conditions, with the flow characteristics changing violently. In the runaway condition, the static pressure in the flow passage pulsates strongly, at a frequency related to the runaway speed.

  15. Coherent Doppler Lidar for Boundary Layer Studies and Wind Energy

    NASA Astrophysics Data System (ADS)

    Choukulkar, Aditya

    This thesis outlines the development of a vector retrieval technique, based on data assimilation, for a coherent Doppler LIDAR (Light Detection and Ranging). A detailed analysis of the Optimal Interpolation (OI) technique for vector retrieval is presented. Through several modifications to the OI technique, it is shown that the modified technique results in significant improvement in velocity retrieval accuracy. These modifications include changes to innovation covariance partitioning, covariance binning, and analysis increment calculation. It is observed that the modified technique is able to make retrievals with better accuracy, preserves local information better, and compares well with tower measurements. In order to study the error of representativeness and vector retrieval error, a lidar simulator was constructed. Using the lidar simulator, a thorough sensitivity analysis of the lidar measurement process and vector retrieval was carried out. The error of representativeness as a function of scales of motion and the sensitivity of vector retrieval to look angle were quantified. Using the modified OI technique, a study of nocturnal flow in Owens Valley, CA, was carried out to identify and understand uncharacteristic events on the night of March 27, 2006. Observations from 1030 UTC to 1230 UTC (0230 to 0430 local time) on March 27, 2006 are presented. Lidar observations show complex and uncharacteristic flows such as sudden bursts of westerly cross-valley wind mixing with the dominant up-valley wind. Model results from the Coupled Ocean/Atmosphere Mesoscale Prediction System (COAMPS RTM) and other in situ instrumentation are used to corroborate and complement these observations. The modified OI technique is used to identify uncharacteristic and extreme flow events at a wind development site. Estimates of turbulence and shear from this technique are compared to tower measurements. A formulation for equivalent wind speed in the presence of variations in wind speed and direction, combined with shear, is developed and used to determine the wind energy content in the presence of turbulence.
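
    As background for the retrieval technique discussed above, the sketch below applies the standard Optimal Interpolation analysis update to a toy two-component wind vector; the matrices and look directions are illustrative assumptions, not values from the thesis.

    ```python
    import numpy as np

    def oi_analysis(xb, B, y, H, R):
        """Standard optimal-interpolation (OI) analysis update:
            xa = xb + K (y - H xb),   K = B H^T (H B H^T + R)^(-1)
        xb: background wind field, B: background-error covariance,
        y: radial-velocity observations, H: observation operator, R: observation-error covariance."""
        innovation = y - H @ xb
        S = H @ B @ H.T + R
        K = B @ H.T @ np.linalg.solve(S, np.eye(len(y)))
        return xb + K @ innovation

    # Toy example: a (u, v) wind vector observed through two lidar line-of-sight projections.
    xb = np.array([5.0, 0.0])                       # background (u, v) in m/s
    B = np.diag([4.0, 4.0])                         # background error covariance
    H = np.array([[1.0, 0.0], [0.7071, 0.7071]])    # two look directions
    y = np.array([6.2, 4.8])                        # observed radial velocities
    R = np.diag([0.25, 0.25])                       # observation error covariance
    print("analysis (u, v):", oi_analysis(xb, B, y, H, R))
    ```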

  16. A novel anthropomorphic flow phantom for the quantitative evaluation of prostate DCE-MRI acquisition techniques

    NASA Astrophysics Data System (ADS)

    Knight, Silvin P.; Browne, Jacinta E.; Meaney, James F.; Smith, David S.; Fagan, Andrew J.

    2016-10-01

    A novel anthropomorphic flow phantom device has been developed, which can be used for quantitatively assessing the ability of magnetic resonance imaging (MRI) scanners to accurately measure signal/concentration time-intensity curves (CTCs) associated with dynamic contrast-enhanced (DCE) MRI. Modelling of the complex pharmacokinetics of contrast agents as they perfuse through the tumour capillary network has shown great promise for cancer diagnosis and therapy monitoring. However, clinical adoption has been hindered by methodological problems, resulting in a lack of consensus regarding the most appropriate acquisition and modelling methodology to use and a consequent wide discrepancy in published data. A heretofore overlooked source of such discrepancy may arise from measurement errors of tumour CTCs deriving from the imaging pulse sequence itself, while the effects on the fidelity of CTC measurement of using rapidly-accelerated sequences such as parallel imaging and compressed sensing remain unknown. The present work aimed to investigate these features by developing a test device in which ‘ground truth’ CTCs were generated and presented to the MRI scanner for measurement, thereby allowing for an assessment of the DCE-MRI protocol to accurately measure this curve shape. The device comprised a four-pump flow system wherein CTCs derived from prior patient prostate data were produced in measurement chambers placed within the imaged volume. The ground truth was determined as the mean of repeat measurements using an MRI-independent, custom-built optical imaging system. In DCE-MRI experiments, significant discrepancies between the ground truth and measured CTCs were found for both tumorous and healthy tissue-mimicking curve shapes. Pharmacokinetic modelling revealed errors in measured Ktrans, ve and kep values of up to 42%, 31%, and 50% respectively, following a simple variation of the parallel imaging factor and number of signal averages in the acquisition protocol. The device allows for the quantitative assessment and standardisation of DCE-MRI protocols (both existing and emerging).
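
    For context on the pharmacokinetic modelling mentioned above, the sketch below generates tissue concentration curves with the standard Tofts model and reports a percent error in Ktrans; the arterial input function, parameter values, and the use of the Tofts form are assumptions for illustration, not the study's protocol.

    ```python
    import numpy as np

    def tofts_concentration(t, cp, ktrans, kep):
        """Standard Tofts model: Ct(t) = Ktrans * integral_0^t Cp(tau) exp(-kep (t - tau)) dtau.
        t in minutes; cp is the arterial input function sampled on t."""
        dt = t[1] - t[0]
        kernel = np.exp(-kep * t)
        return ktrans * np.convolve(cp, kernel)[: len(t)] * dt

    # Toy arterial input function and two tissue curves (values illustrative only).
    t = np.arange(0.0, 5.0, 0.05)                  # minutes
    cp = 5.0 * t * np.exp(-2.0 * t)                # simple gamma-variate-like AIF
    ct_true = tofts_concentration(t, cp, ktrans=0.25, kep=0.60)
    ct_meas = tofts_concentration(t, cp, ktrans=0.35, kep=0.90)  # e.g. distorted by the pulse sequence

    # Percent error in a recovered parameter relative to the ground-truth value.
    print("Ktrans error: {:.0f} %".format(100.0 * (0.35 - 0.25) / 0.25))
    ```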

  17. Measuring peak expiratory flow in general practice: comparison of mini Wright peak flow meter and turbine spirometer.

    PubMed Central

    Jones, K P; Mullee, M A

    1990-01-01

    OBJECTIVE--To compare measurements of the peak expiratory flow rate taken by the mini Wright peak flow meter and the turbine spirometer. DESIGN--Pragmatic study with randomised order of use of recording instruments. Phase 1 compared a peak expiratory flow type expiration recorded by the mini Wright peak flow meter with an expiration to forced vital capacity recorded by the turbine spirometer. Phase 2 compared peak expiratory flow type expirations recorded by both meters. Reproducibility was assessed separately. SETTING--Routine surgeries at Aldermoor Health Centre, Southampton. SUBJECTS--212 Patients aged 4 to 78 presenting with asthma or obstructive airways disease. Each patient contributed only once to each phase (105 in phase 1, 107 in phase 2), but some entered both phases on separate occasions. Reproducibility was tested on a further 31 patients. MAIN OUTCOME MEASURE--95% Limits of agreement between measurements on the two meters. RESULTS--208 (98%) of the readings taken by the mini Wright meter were higher than the corresponding readings taken by the turbine spirometer, but the 95% limits of agreement (mean difference ± 2 SD) were wide (1 to 173 l/min). Differences due to errors in reproducibility were not sufficient to predict this level of disagreement. Analysis by age, sex, order of use, and the type of expiration did not detect any significant differences. CONCLUSIONS--The two methods of measuring peak expiratory flow rate were not comparable. The mini Wright meter is likely to remain the preferred instrument in general practice. PMID:2142611
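
    The 95% limits of agreement quoted above (mean difference ± 2 SD) can be computed as in the following sketch; the paired readings are hypothetical.

    ```python
    import numpy as np

    def limits_of_agreement(meter_a, meter_b):
        """95% limits of agreement between paired readings: mean difference
        plus/minus 2 standard deviations of the differences."""
        diff = np.asarray(meter_a, dtype=float) - np.asarray(meter_b, dtype=float)
        mean_diff = diff.mean()
        sd_diff = diff.std(ddof=1)
        return mean_diff - 2 * sd_diff, mean_diff + 2 * sd_diff

    # Hypothetical paired peak-flow readings (l/min) from the two instruments.
    mini_wright = np.array([420., 380., 510., 300., 460., 350., 400., 330.])
    turbine     = np.array([360., 330., 430., 260., 400., 310., 340., 290.])
    low, high = limits_of_agreement(mini_wright, turbine)
    print(f"95% limits of agreement: {low:.0f} to {high:.0f} l/min")
    ```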

  18. Shear flow control of cold and heated rectangular jets by mechanical tabs. Volume 2: Tabulated data

    NASA Technical Reports Server (NTRS)

    Brown, W. H.; Ahuja, K. K.

    1989-01-01

    The effects of mechanical protrusions on the jet mixing characteristics of rectangular nozzles for heated and unheated subsonic and supersonic jet plumes were studied. The characteristics of a rectangular nozzle of aspect ratio 4 without the mechanical protrusions were first investigated. Intrusive probes were used to make the flow measurements. Possible errors introduced by intrusive probes in making shear flow measurements were also examined. Several scaled sizes of mechanical tabs were then tested, configured around the perimeter of the rectangular jet. Both the number and the location of the tabs were varied. From this, the best configuration was selected. This volume contains tabulated data for each of the data runs cited in Volume 1. Baseline characteristics, mixing modifications (subsonic and supersonic, heated and unheated) and miscellaneous charts are included.

  19. Quantifying equation-of-state and opacity errors using integrated supersonic diffusive radiation flow experiments on the National Ignition Facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guymer, T. M., E-mail: Thomas.Guymer@awe.co.uk; Moore, A. S.; Morton, J.

    A well diagnosed campaign of supersonic, diffusive radiation flow experiments has been fielded on the National Ignition Facility. These experiments have used the accurate measurements of delivered laser energy and foam density to enable an investigation into SESAME's tabulated equation-of-state values and CASSANDRA's predicted opacity values for the low-density C8H7Cl foam used throughout the campaign. We report that the results from initial simulations under-predicted the arrival time of the radiation wave through the foam by ≈22%. A simulation study was conducted that artificially scaled the equation-of-state and opacity with the intended aim of quantifying the systematic offsets in both CASSANDRA and SESAME. Two separate hypotheses which describe these errors have been tested using the entire ensemble of data, with one being supported by these data.

  20. In Vivo Validation of Numerical Prediction for Turbulence Intensity in an Aortic Coarctation

    PubMed Central

    Arzani, Amirhossein; Dyverfeldt, Petter; Ebbers, Tino; Shadden, Shawn C.

    2013-01-01

    This paper compares numerical predictions of turbulence intensity with in vivo measurement. Magnetic resonance imaging (MRI) was carried out on a 60-year-old female with a restenosed aortic coarctation. Time-resolved three-directional phase-contrast (PC) MRI data was acquired to enable turbulence intensity estimation. A contrast-enhanced MR angiography (MRA) and a time-resolved 2D PCMRI measurement were also performed to acquire data needed to perform subsequent image-based computational fluid dynamics (CFD) modeling. A 3D model of the aortic coarctation and surrounding vasculature was constructed from the MRA data, and physiologic boundary conditions were modeled to match 2D PCMRI and pressure pulse measurements. Blood flow velocity data was subsequently obtained by numerical simulation. Turbulent kinetic energy (TKE) was computed from the resulting CFD data. Results indicate relative agreement (error ≈10%) between the in vivo measurements and the CFD predictions of TKE. The discrepancies in modeled vs. measured TKE values were within expectations due to modeling and measurement errors. PMID:22016327
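
    As a rough illustration of the turbulence quantity compared above, the sketch below computes turbulent kinetic energy from velocity samples at a point; the blood density and the synthetic samples are assumptions, and the paper's actual TKE estimation from PC-MRI and CFD is considerably more involved.

    ```python
    import numpy as np

    def turbulent_kinetic_energy(u, v, w, rho=1060.0):
        """Turbulent kinetic energy per unit volume (J/m^3) from velocity samples at one
        location: TKE = 0.5 * rho * (var(u') + var(v') + var(w')).
        rho ~ 1060 kg/m^3 is an assumed blood density."""
        return 0.5 * rho * (np.var(u) + np.var(v) + np.var(w))

    # Toy velocity samples (m/s) at a point downstream of the coarctation.
    rng = np.random.default_rng(0)
    u = 1.5 + 0.30 * rng.standard_normal(500)
    v = 0.1 + 0.20 * rng.standard_normal(500)
    w = 0.0 + 0.15 * rng.standard_normal(500)
    print(f"TKE ~ {turbulent_kinetic_energy(u, v, w):.0f} J/m^3")
    ```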

  1. 40 CFR Appendix B to Part 50 - Reference Method for the Determination of Suspended Particulate Matter in the Atmosphere (High...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... conditions (25 °C, 760 mm Hg [101 kPa]), is determined from the measured flow rate and the sampling time. The... conveniently. c. Preclude leaks that would cause error in the measurement of the air volume passing through the... through the filter. b. Be rectangular in shape with a gabled roof, similar to the design shown in Figure 1...

  2. 40 CFR Appendix B to Part 50 - Reference Method for the Determination of Suspended Particulate Matter in the Atmosphere (High...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... conditions (25 °C, 760 mm Hg [101 kPa]), is determined from the measured flow rate and the sampling time. The... conveniently. c. Preclude leaks that would cause error in the measurement of the air volume passing through the... through the filter. b. Be rectangular in shape with a gabled roof, similar to the design shown in Figure 1...

  3. SPECIFIC HEAT INDICATOR

    DOEpatents

    Horn, F.L.; Binns, J.E.

    1961-05-01

    Apparatus for continuously and automatically measuring and computing the specific heat of a flowing solution is described. The invention provides for the continuous measurement of all the parameters required for the mathematical solution of this characteristic. The parameters are converted to logarithmic functions which are added and subtracted in accordance with the solution and a null-seeking servo reduces errors due to changing voltage drops to a minimum. Logarithmic potentiometers are utilized in a unique manner to accomplish these results.

  4. Using Redundancy To Reduce Errors in Magnetometer Readings

    NASA Technical Reports Server (NTRS)

    Kulikov, Igor; Zak, Michail

    2004-01-01

    A method of reducing errors in noisy magnetic-field measurements involves exploitation of redundancy in the readings of multiple magnetometers in a cluster. By "redundancy" is meant that the readings are not entirely independent of each other, because the relationships among the magnetic-field components that one seeks to measure are governed by the fundamental laws of electromagnetism as expressed by Maxwell's equations. Assuming that the magnetometers are located outside a magnetic material, that the magnetic field is steady or quasi-steady, and that there are no electric currents flowing in or near the magnetometers, the applicable Maxwell's equations are ∇ × B = 0 and ∇ · B = 0, where B is the magnetic-flux-density vector. By suitable algebraic manipulation, these equations can be shown to impose three independent constraints on the values of the components of B at the various magnetometer positions. In general, the problem of reducing the errors in noisy measurements is one of finding a set of corrected values that minimize an error function. In the present method, the error function is formulated as (1) the sum of squares of the differences between the corrected and noisy measurement values plus (2) a sum of three terms, each comprising the product of a Lagrange multiplier and one of the three constraints. The partial derivatives of the error function with respect to the corrected magnetic-field component values and the Lagrange multipliers are set equal to zero, leading to a set of equations that can be put into matrix-vector form. The matrix can be inverted to solve for a vector that comprises the corrected magnetic-field component values and the Lagrange multipliers.
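
    A minimal numerical sketch of the constrained minimisation described above follows; the constraint matrix here is a placeholder, whereas in the actual method it would encode the discretised curl-free and divergence-free conditions over the cluster geometry.

    ```python
    import numpy as np

    def corrected_readings(m, A):
        """Minimise ||x - m||^2 subject to A x = 0 using Lagrange multipliers.
        Setting the gradient of ||x - m||^2 + lambda^T (A x) to zero gives the
        KKT system [[2I, A^T], [A, 0]] [x; lambda] = [2m; 0]."""
        n, c = len(m), A.shape[0]
        kkt = np.block([[2.0 * np.eye(n), A.T],
                        [A, np.zeros((c, c))]])
        rhs = np.concatenate([2.0 * m, np.zeros(c)])
        sol = np.linalg.solve(kkt, rhs)
        return sol[:n]                     # corrected field components

    # Noisy readings from a magnetometer cluster (placeholder values, nT).
    m = np.array([102.0, 98.5, 51.2, 49.1, 25.3, 24.6])
    # Placeholder constraint matrix; in practice A comes from the finite-difference
    # form of curl(B) = 0 and div(B) = 0 across the cluster positions.
    A = np.array([[1., -1., 0., 0., 0., 0.],
                  [0., 0., 1., -1., 0., 0.],
                  [0., 0., 0., 0., 1., -1.]])
    print("corrected:", np.round(corrected_readings(m, A), 2))
    ```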

  5. Initial testing of a 3D printed perfusion phantom using digital subtraction angiography

    NASA Astrophysics Data System (ADS)

    Wood, Rachel P.; Khobragade, Parag; Ying, Leslie; Snyder, Kenneth; Wack, David; Bednarek, Daniel R.; Rudin, Stephen; Ionita, Ciprian N.

    2015-03-01

    Perfusion imaging is the most widely applied modality for the assessment of acute stroke. Parameters such as Cerebral Blood Flow (CBF), Cerebral Blood Volume (CBV) and Mean Transit Time (MTT) are used to distinguish the tissue infarct core and the ischemic penumbra. Due to a lack of standardization, these parameters vary significantly between vendors and software even when provided with the same data set. There is a critical need to standardize the systems and make them more reliable. We have designed a uniform phantom to test and verify perfusion systems. We implemented a flow loop with different flow rates (250, 300, 350 ml/min) and injected the same amount of contrast. The images of the phantom were acquired using a digital angiographic system. Since this phantom is uniform, projection images obtained using DSA are sufficient for initial validation. To validate the phantom we measured the contrast concentration at three regions of interest (arterial input, venous output, perfused area) and derived time-density curves (TDCs). We then calculated the maximum slope, the area under the TDCs, and the flow. The maximum slope increased linearly with increasing flow rate, and the area under the curve decreased with increasing flow rate. There was a 25% error between the calculated flow and the measured flow. The derived TDCs were clinically relevant, and the calculated flow, maximum slope, and areas under the curve were sensitive to the measured flow. We have created a systematic way to calibrate existing perfusion systems and assess their reliability.
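
    The time-density-curve metrics mentioned above (maximum slope and area under the curve) can be computed as in this sketch; the bolus shape, timing, and units are illustrative only.

    ```python
    import numpy as np

    def tdc_metrics(time_s, density):
        """Maximum upslope and area under a time-density curve (TDC)."""
        density = np.asarray(density, dtype=float)
        slope = np.gradient(density, time_s)
        return slope.max(), np.trapz(density, time_s)

    # Toy TDC for a perfused region of the uniform phantom (arbitrary density units).
    t = np.arange(0.0, 30.0, 0.5)                      # seconds
    tdc = 100.0 * (t / 8.0) ** 2 * np.exp(-t / 4.0)    # gamma-variate-like bolus shape
    max_slope, auc = tdc_metrics(t, tdc)
    print(f"max slope = {max_slope:.1f} per s, AUC = {auc:.0f} (a.u. * s)")
    ```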

  6. Direct process estimation from tomographic data using artificial neural systems

    NASA Astrophysics Data System (ADS)

    Mohamad-Saleh, Junita; Hoyle, Brian S.; Podd, Frank J.; Spink, D. M.

    2001-07-01

    The paper deals with the goal of component fraction estimation in multicomponent flows, a critical measurement in many processes. Electrical capacitance tomography (ECT) is a well-researched sensing technique for this task, due to its low cost, non-intrusion, and fast response. However, typical systems, which include practicable real-time reconstruction algorithms, give inaccurate results, and existing approaches to direct component fraction measurement are flow-regime dependent. In the investigation described, an artificial neural network approach is used to directly estimate the component fractions in gas-oil, gas-water, and gas-oil-water flows from ECT measurements. A 2D finite-element electric field model of a 12-electrode ECT sensor is used to simulate ECT measurements of various flow conditions. The raw measurements are reduced to a mutually independent set using principal components analysis and used with their corresponding component fractions to train multilayer feed-forward neural networks (MLFFNNs). The trained MLFFNNs are tested with patterns consisting of unlearned ECT simulated and plant measurements. Results included in the paper have a mean absolute error of less than 1% for the estimation of various multicomponent fractions of the permittivity distribution. They are also shown to give improved component fraction estimation compared to a well-known direct ECT method.
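
    A compact sketch of the PCA-plus-neural-network regression described above is given below using scikit-learn; the synthetic 66-measurement frames, the network size, and the target definition are assumptions standing in for the simulated ECT data.

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.neural_network import MLPRegressor
    from sklearn.pipeline import make_pipeline

    # Synthetic stand-in for simulated ECT data: 66 inter-electrode capacitance
    # measurements per frame and a gas-fraction target (values illustrative only).
    rng = np.random.default_rng(1)
    measurements = rng.uniform(0.0, 1.0, size=(500, 66))
    gas_fraction = measurements[:, :10].mean(axis=1) + 0.02 * rng.standard_normal(500)

    model = make_pipeline(
        PCA(n_components=12),                       # decorrelate / reduce the raw measurements
        MLPRegressor(hidden_layer_sizes=(20, 20), max_iter=2000, random_state=0),
    )
    model.fit(measurements[:400], gas_fraction[:400])
    pred = model.predict(measurements[400:])
    print("mean absolute error:", np.round(np.abs(pred - gas_fraction[400:]).mean(), 4))
    ```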

  7. Fluorescence Imaging of Rotational and Vibrational Temperature in a Shock Tunnel Nozzle Flow

    NASA Technical Reports Server (NTRS)

    Palma, Philip C.; Danehy, Paul M.; Houwing, A. F. P.

    2003-01-01

    Two-dimensional rotational and vibrational temperature measurements were made at the nozzle exit of a free-piston shock tunnel using planar laser-induced fluorescence. The Mach 7 flow consisted predominantly of nitrogen with a trace quantity of nitric oxide. Nitric oxide was employed as the probe species and was excited at 225 nm. Nonuniformities in the distribution of nitric oxide in the test gas were observed and were concluded to be due to contamination of the test gas by driver gas or cold test gas. The nozzle-exit rotational temperature was measured and is in reasonable agreement with computational modeling. Nonlinearities in the detection system were responsible for systematic errors in the measurements. The vibrational temperature was measured to be constant with distance from the nozzle exit, indicating it had frozen during the nozzle expansion.

  8. Adaptive finite element method for turbulent flow near a propeller

    NASA Astrophysics Data System (ADS)

    Pelletier, Dominique; Ilinca, Florin; Hetu, Jean-Francois

    1994-11-01

    This paper presents an adaptive finite element method based on remeshing to solve incompressible turbulent free shear flow near a propeller. Solutions are obtained in primitive variables using a highly accurate finite element approximation on unstructured grids. Turbulence is modeled by a mixing length formulation. Two general purpose error estimators, which take into account swirl and the variation of the eddy viscosity, are presented and applied to the turbulent wake of a propeller. Predictions compare well with experimental measurements. The proposed adaptive scheme is robust, reliable and cost effective.

  9. A novel fiber-optic measurement system for the evaluation of performances of neonatal pulmonary ventilators

    NASA Astrophysics Data System (ADS)

    Battista, L.; Scorza, A.; Botta, F.; Sciuto, S. A.

    2016-02-01

    Published standards for the performance evaluation of pulmonary ventilators are mainly directed to manufacturers rather than to end-users and are often considered inadequate or not comprehensive. In order to contribute to overcoming these problems, a novel measurement system was proposed and tested with waveforms of mechanical ventilation by means of experimental trials carried out with infant ventilators typically used in neonatal intensive care units. The main quantities of mechanical ventilation in newborns are monitored: air flow rate, differential pressure and volume from the infant ventilator are measured by means of two novel fiber-optic sensors (OFSs) developed and characterized by the authors, while temperature and relative humidity of the air mass are obtained by two commercial transducers. The proposed fiber-optic sensors (flow sensor Q-OFS, pressure sensor P-OFS) showed measurement ranges of air flow and pressure typically encountered in neonatal mechanical ventilation, i.e. the air flow rate Q ranged from 3 l min^-1 to 18 l min^-1 (inspiratory) and from -3 l min^-1 to -18 l min^-1 (expiratory), and the differential pressure ΔP ranged from -15 cmH2O to 15 cmH2O. In each experimental trial carried out with different settings of the ventilator, outputs of the OFSs are compared with data from two reference sensors (reference flow sensor RF, reference pressure sensor RP) and the results are found consistent: the flow rate Q showed a maximum error between Q-OFS and RF of up to 13 percent, with an output ratio Q RF/Q OFS of not more than 1.06 ± 0.09 (least-squares estimation, 95 percent confidence level, R^2 between 0.9822 and 0.9931). On the other hand, the maximum error between P-OFS and RP on differential pressure ΔP was lower than 10 percent, with an output ratio ΔP RP/ΔP OFS between 0.977 ± 0.022 and 1.0 ± 0.8 (least-squares estimation, 95 percent confidence level, R^2 between 0.9864 and 0.9876). Despite possible improvements, the results were encouraging and suggest that the proposed measurement system can be considered suitable for the performance evaluation of neonatal ventilators and useful for both end-users and manufacturers.

  10. Decadal-scale sensitivity of Northeast Greenland ice flow to errors in surface mass balance using ISSM

    NASA Astrophysics Data System (ADS)

    Schlegel, N.-J.; Larour, E.; Seroussi, H.; Morlighem, M.; Box, J. E.

    2013-06-01

    The behavior of the Greenland Ice Sheet, which is considered a major contributor to sea level changes, is best understood on century and longer time scales. However, on decadal time scales, its response is less predictable due to the difficulty of modeling surface climate, as well as incomplete understanding of the dynamic processes responsible for ice flow. Therefore, it is imperative to understand how modeling advancements, such as increased spatial resolution or more comprehensive ice flow equations, might improve projections of ice sheet response to climatic trends. Here we examine how a finely resolved climate forcing influences a high-resolution ice stream model that considers longitudinal stresses. We simulate ice flow using a two-dimensional Shelfy-Stream Approximation implemented within the Ice Sheet System Model (ISSM) and use uncertainty quantification tools embedded within the model to calculate the sensitivity of ice flow within the Northeast Greenland Ice Stream to errors in surface mass balance (SMB) forcing. Our results suggest that the model tends to smooth ice velocities even when forced with extreme errors in SMB. Indeed, errors propagate linearly through the model, resulting in discharge uncertainty of 16% or 1.9 Gt/yr. We find that mass flux is most sensitive to local errors but is also affected by errors hundreds of kilometers away; thus, an accurate SMB map of the entire basin is critical for realistic simulation. Furthermore, sensitivity analyses indicate that SMB forcing needs to be provided at a resolution of at least 40 km.

  11. An Error-Reduction Algorithm to Improve Lidar Turbulence Estimates for Wind Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Newman, Jennifer F.; Clifton, Andrew

    2016-08-01

    Currently, cup anemometers on meteorological (met) towers are used to measure wind speeds and turbulence intensity to make decisions about wind turbine class and site suitability. However, as modern turbine hub heights increase and wind energy expands to complex and remote sites, it becomes more difficult and costly to install met towers at potential sites. As a result, remote sensing devices (e.g., lidars) are now commonly used by wind farm managers and researchers to estimate the flow field at heights spanned by a turbine. While lidars can accurately estimate mean wind speeds and wind directions, there is still a large amount of uncertainty surrounding the measurement of turbulence with lidars. This uncertainty in lidar turbulence measurements is one of the key roadblocks that must be overcome in order to replace met towers with lidars for wind energy applications. In this talk, a model for reducing errors in lidar turbulence estimates is presented. Techniques for reducing errors from instrument noise, volume averaging, and variance contamination are combined in the model to produce a corrected value of the turbulence intensity (TI), a commonly used parameter in wind energy. In the next step of the model, machine learning techniques are used to further decrease the error in lidar TI estimates.

  12. Seepage investigation and selected hydrologic data for the Escalante River drainage basin, Garfield and Kane Counties, Utah, 1909-2002

    USGS Publications Warehouse

    Wilberg, Dale E.; Stolp, Bernard J.

    2005-01-01

    This report contains the results of an October 2001 seepage investigation conducted along a reach of the Escalante River in Utah extending from the U.S. Geological Survey streamflow-gaging station near Escalante to the mouth of Stevens Canyon. Discharge was measured at 16 individual sites along 15 consecutive reaches. Total reach length was about 86 miles. A reconnaissance-level sampling of water for tritium and chlorofluorocarbons was also done. In addition, hydrologic and water-quality data previously collected and published by the U.S. Geological Survey for the 2,020-square-mile Escalante River drainage basin were compiled and are presented in 12 tables. These data were collected from 64 surface-water sites and 28 springs from 1909 to 2002. None of the 15 consecutive reaches along the Escalante River had a measured loss or gain that exceeded the measurement error. All discharge measurements taken during the seepage investigation were assigned a qualitative rating of accuracy that ranged from 5 percent to greater than 8 percent of the actual flow. Summing the potential error for each measurement and dividing by the maximum of either the upstream discharge plus any tributary inflow, or the downstream discharge, determined the normalized error for a reach. This was compared to the computed loss or gain, which also was normalized to the maximum discharge. A loss or gain for a specified reach is considered significant when the loss or gain (normalized percentage difference) is greater than the measurement error (normalized percentage error). The percentage difference and percentage error were normalized to allow comparison between reaches with different amounts of discharge. The plate that accompanies the report is 36" by 40" and can be printed in 16 tiles, 8.5 by 11 inches. An index for the tiles is located on the lower left-hand side of the plate. Using Adobe Acrobat, the plate can be viewed independent of the report; all Acrobat functions are available.
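
    The reach-significance test described above can be expressed as a short calculation; the accuracy percentages and discharges below are placeholders, not values from the seepage investigation.

    ```python
    def reach_gain_is_significant(q_upstream, q_tributary, q_downstream,
                                  err_up=0.05, err_trib=0.05, err_down=0.05):
        """Flag a reach loss/gain as significant when the normalized percentage difference
        exceeds the normalized percentage error, following the report's approach
        (the per-measurement accuracy ratings here are placeholder fractions)."""
        q_in = q_upstream + q_tributary
        q_max = max(q_in, q_downstream)
        pct_difference = abs(q_downstream - q_in) / q_max * 100.0
        pct_error = (q_upstream * err_up + q_tributary * err_trib
                     + q_downstream * err_down) / q_max * 100.0
        return pct_difference > pct_error, pct_difference, pct_error

    # Example: 12.0 ft3/s upstream, 0.8 ft3/s tributary inflow, 12.4 ft3/s downstream.
    sig, diff, err = reach_gain_is_significant(12.0, 0.8, 12.4)
    print(f"difference = {diff:.1f}%, error = {err:.1f}%, significant: {sig}")
    ```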

  13. Methods to Evaluate Influence of Onsite Septic Wastewater-Treatment Systems on Base Flow in Selected Watersheds in Gwinnett County, Georgia, October 2007

    USGS Publications Warehouse

    Landers, Mark N.; Ankcorn, Paul D.

    2008-01-01

    The influence of onsite septic wastewater-treatment systems (OWTS) on base-flow quantity needs to be understood to evaluate consumptive use of surface-water resources by OWTS. If the influence of OWTS on stream base flow can be measured and if the inflow to OWTS is known from water-use data, then water-budget approaches can be used to evaluate consumptive use. This report presents a method to evaluate the influence of OWTS on ground-water recharge and base-flow quantity. Base flow was measured in Gwinnett County, Georgia, during an extreme drought in October 2007 in 12 watersheds that have low densities of OWTS (22 to 96 per square mile) and 12 watersheds that have high densities (229 to 965 per square mile) of OWTS. Mean base-flow yield in the high-density OWTS watersheds is 90 percent greater than in the low-density OWTS watersheds. The density of OWTS is statistically significant (p-value less than 0.01) in relation to base-flow yield as well as specific conductance. Specific conductance of base flow increases with OWTS density, which may indicate influence from treated wastewater. The study results indicate considerable unexplained variation in measured base-flow yield for reasons that may include: unmeasured processes, a limited dataset, and measurement errors. Ground-water recharge from a high density of OWTS is assumed to be steady state from year to year so that the annual amount of increase in base flow from OWTS is expected to be constant. In dry years, however, OWTS contributions represent a larger percentage of natural base flow than in wet years. The approach of this study could be combined with water-use data and analyses to estimate consumptive use of OWTS.

  14. μ-PIV measurements of the ensemble flow fields surrounding a migrating semi-infinite bubble.

    PubMed

    Yamaguchi, Eiichiro; Smith, Bradford J; Gaver, Donald P

    2009-08-01

    Microscale particle image velocimetry (μ-PIV) measurements were made of the ensemble flow fields surrounding a steadily migrating semi-infinite bubble through the novel adaptation of a computer-controlled linear motor flow control system. The system was programmed to generate a square-wave velocity input in order to produce accurate constant bubble propagation repeatedly and effectively through a fused glass capillary tube. We present a novel technique for repositioning the coordinate axis to the bubble-tip frame of reference in each instantaneous field through analysis of the sudden change in the standard deviation of centerline velocity profiles across the bubble interface. Ensemble averages were then computed in this bubble-tip frame of reference. Combined fluid systems of water/air, glycerol/air, and glycerol/Si-oil were used to investigate flows comparable to the computational simulations described in Smith and Gaver (2008) and to past experimental observations of interfacial shape. Fluorescent particle images were also analyzed to measure the residual film thickness trailing behind the bubble. The flow fields and film thickness agree very well with the computational simulations as well as with existing experimental and analytical results. Particle accumulation and migration associated with the flow patterns near the bubble tip after long experimental durations are discussed as potential sources of error in the experimental method.

  15. Effects of upstream-biased third-order space correction terms on multidimensional Crowley advection schemes

    NASA Technical Reports Server (NTRS)

    Schlesinger, R. E.

    1985-01-01

    The impact of upstream-biased corrections for third-order spatial truncation error on the stability and phase error of the two-dimensional Crowley combined advective scheme with the cross-space term included is analyzed, putting primary emphasis on phase error reduction. The various versions of the Crowley scheme are formally defined, and their stability and phase error characteristics are intercompared using a linear Fourier component analysis patterned after Fromm (1968, 1969). The performances of the schemes under prototype simulation conditions are tested using time-dependent numerical experiments which advect an initially cone-shaped passive scalar distribution in each of three steady nondivergent flows. One such flow is solid rotation, while the other two are diagonal uniform flow and a strongly deformational vortex.

  16. Combustor air flow control method for fuel cell apparatus

    DOEpatents

    Clingerman, Bruce J.; Mowery, Kenneth D.; Ripley, Eugene V.

    2001-01-01

    A method is described for controlling the heat output of a combustor that supplies heat to a fuel processor in a fuel cell apparatus, where the combustor has dual air inlet streams consisting of atmospheric air and fuel cell cathode effluent containing oxygen-depleted air. In all operating modes, an enthalpy balance is provided by regulating the quantity of the air flow stream to the combustor to support fuel processor heat requirements. A control provides a quick feed-forward change in the air valve orifice cross section in response to a calculated predetermined air flow, the molar constituents of the air stream to the combustor, the pressure drop across the air valve, and a look-up table of orifice cross-sectional area versus valve steps. A feedback loop fine-tunes any error between the measured air flow to the combustor and the predetermined air flow.
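
    A simplified sketch of the feed-forward-plus-feedback control idea described above follows; the lookup table, gain, and units are illustrative assumptions rather than values from the patent.

    ```python
    import numpy as np

    # Placeholder feed-forward lookup: required orifice area (cm^2) vs. demanded air flow (g/s).
    flow_points = np.array([0.0, 5.0, 10.0, 20.0, 40.0])
    area_points = np.array([0.0, 0.8, 1.5, 2.9, 5.6])

    def valve_area_command(demanded_flow, measured_flow, integral_error, dt, ki=0.01):
        """Feed-forward orifice area from the lookup table plus a slow integral
        feedback trim on the flow error (gain and table are illustrative only)."""
        feed_forward = np.interp(demanded_flow, flow_points, area_points)
        integral_error += (demanded_flow - measured_flow) * dt
        return feed_forward + ki * integral_error, integral_error

    # One control step: demand 12 g/s, currently measuring 11.2 g/s.
    area, ierr = valve_area_command(12.0, 11.2, integral_error=0.0, dt=0.1)
    print(f"commanded orifice area ~ {area:.2f} cm^2")
    ```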

  17. Stochastic goal-oriented error estimation with memory

    NASA Astrophysics Data System (ADS)

    Ackmann, Jan; Marotzke, Jochem; Korn, Peter

    2017-11-01

    We propose a stochastic dual-weighted error estimator for the viscous shallow-water equation with boundaries. For this purpose, previous work on memory-less stochastic dual-weighted error estimation is extended by incorporating memory effects. The memory is introduced by describing the local truncation error as a sum of time-correlated random variables. The random variables themselves represent the temporal fluctuations in local truncation errors and are estimated from high-resolution information at near-initial times. The resulting error estimator is evaluated experimentally in two classical ocean-type experiments, the Munk gyre and the flow around an island. In these experiments, the stochastic process is adapted locally to the respective dynamical flow regime. Our stochastic dual-weighted error estimator is shown to provide meaningful error bounds for a range of physically relevant goals. We prove, as well as show numerically, that our approach can be interpreted as a linearized stochastic-physics ensemble.

  18. Selected Streamflow Statistics and Regression Equations for Predicting Statistics at Stream Locations in Monroe County, Pennsylvania

    USGS Publications Warehouse

    Thompson, Ronald E.; Hoffman, Scott A.

    2006-01-01

    A suite of 28 streamflow statistics, ranging from extreme low to high flows, was computed for 17 continuous-record streamflow-gaging stations and predicted for 20 partial-record stations in Monroe County and contiguous counties in north-eastern Pennsylvania. The predicted statistics for the partial-record stations were based on regression analyses relating intermittent flow measurements made at the partial-record stations indexed to concurrent daily mean flows at continuous-record stations during base-flow conditions. The same statistics also were predicted for 134 ungaged stream locations in Monroe County on the basis of regression analyses relating the statistics to GIS-determined basin characteristics for the continuous-record station drainage areas. The prediction methodology for developing the regression equations used to estimate statistics was developed for estimating low-flow frequencies. This study and a companion study found that the methodology also has application potential for predicting intermediate- and high-flow statistics. The statistics included mean monthly flows, mean annual flow, 7-day low flows for three recurrence intervals, nine flow durations, mean annual base flow, and annual mean base flows for two recurrence intervals. Low standard errors of prediction and high coefficients of determination (R2) indicated good results in using the regression equations to predict the statistics. Regression equations for the larger flow statistics tended to have lower standard errors of prediction and higher coefficients of determination (R2) than equations for the smaller flow statistics. The report discusses the methodologies used in determining the statistics and the limitations of the statistics and the equations used to predict the statistics. Caution is indicated in using the predicted statistics for small drainage area situations. Study results constitute input needed by water-resource managers in Monroe County for planning purposes and evaluation of water-resources availability.

  19. Development of a new instrument for direct skin friction measurements

    NASA Technical Reports Server (NTRS)

    Vakili, A. D.; Wu, J. M.

    1986-01-01

    A device developed for the direct measurement of wall shear stress generated by flows is described. The design is simple and symmetric, with an optional small moving mass and no internal friction; these features eliminate most of the difficulties associated with traditional floating-element balances. The device is compact and can be made in various sizes. Vibration problems associated with floating-element skin friction balances were found to be minimized due to the design symmetry and the optional damping provided. The design eliminates or reduces the errors associated with conventional floating-element devices, such as errors due to gaps, pressure gradient, acceleration, heat transfer, and temperature change. The instrument is equipped with various sensing systems, and the output signal is a linear function of the wall shear stress. Dynamic measurements can be made in a limited range, and measurements in liquids can be performed readily. Measurements made in three different tunnels show excellent agreement with data obtained by floating-element devices and other techniques.

  20. Assessing potential errors of MRI-based measurements of pulmonary blood flow using a detailed network flow model

    PubMed Central

    Buxton, R. B.; Prisk, G. K.

    2012-01-01

    MRI images of pulmonary blood flow using arterial spin labeling (ASL) measure the delivery of magnetically tagged blood to an image plane during one systolic ejection period. However, the method potentially suffers from two problems, each of which may depend on the imaging plane location: 1) the inversion plane is thicker than the imaging plane, resulting in a gap that blood must cross to be detected in the image; and 2) ASL includes signal contributions from tagged blood in conduit vessels (arterial and venous). By using an in silico model of the pulmonary circulation we found the gap reduced the ASL signal to 64–74% of that in the absence of a gap in the sagittal plane and 53–84% in the coronal. The contribution of the conduit vessels varied markedly as a function of image plane ranging from ∼90% of the overall signal in image planes that encompass the central hilar vessels to <20% in peripheral image planes. A threshold cutoff removing voxels with intensities >35% of maximum reduced the conduit vessel contribution to the total ASL signal to ∼20% on average; however, planes with large contributions from conduit vessels underestimate acinar flow due to a high proportion of in-plane flow, making ASL measurements of perfusion impractical. In other image planes, perfusion dominated the resulting ASL images with good agreement between ASL and acinar flow. Similarly, heterogeneity of the ASL signal as measured by relative dispersion is a reliable measure of heterogeneity of the acinar flow distribution in the same image planes. PMID:22539167
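
    The 35%-of-maximum threshold cutoff mentioned above can be applied as in this sketch; the toy image values are arbitrary.

    ```python
    import numpy as np

    def remove_conduit_vessels(asl_image, cutoff=0.35):
        """Zero out voxels brighter than `cutoff` times the maximum ASL signal,
        an approximation of the paper's large-vessel threshold."""
        img = np.asarray(asl_image, dtype=float)
        return np.where(img > cutoff * img.max(), 0.0, img)

    # Toy 4x4 ASL slice with two bright conduit-vessel voxels.
    asl = np.array([[1.0, 1.2, 0.9, 1.1],
                    [1.0, 9.5, 1.1, 0.8],
                    [0.9, 1.0, 8.7, 1.0],
                    [1.1, 0.9, 1.0, 1.2]])
    print(remove_conduit_vessels(asl))
    ```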

  1. Assessing potential errors of MRI-based measurements of pulmonary blood flow using a detailed network flow model.

    PubMed

    Burrowes, K S; Buxton, R B; Prisk, G K

    2012-07-01

    MRI images of pulmonary blood flow using arterial spin labeling (ASL) measure the delivery of magnetically tagged blood to an image plane during one systolic ejection period. However, the method potentially suffers from two problems, each of which may depend on the imaging plane location: 1) the inversion plane is thicker than the imaging plane, resulting in a gap that blood must cross to be detected in the image; and 2) ASL includes signal contributions from tagged blood in conduit vessels (arterial and venous). By using an in silico model of the pulmonary circulation we found the gap reduced the ASL signal to 64-74% of that in the absence of a gap in the sagittal plane and 53-84% in the coronal. The contribution of the conduit vessels varied markedly as a function of image plane ranging from ∼90% of the overall signal in image planes that encompass the central hilar vessels to <20% in peripheral image planes. A threshold cutoff removing voxels with intensities >35% of maximum reduced the conduit vessel contribution to the total ASL signal to ∼20% on average; however, planes with large contributions from conduit vessels underestimate acinar flow due to a high proportion of in-plane flow, making ASL measurements of perfusion impractical. In other image planes, perfusion dominated the resulting ASL images with good agreement between ASL and acinar flow. Similarly, heterogeneity of the ASL signal as measured by relative dispersion is a reliable measure of heterogeneity of the acinar flow distribution in the same image planes.

  2. Extraction of diffuse correlation spectroscopy flow index by integration of Nth-order linear model with Monte Carlo simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shang, Yu; Lin, Yu; Yu, Guoqiang, E-mail: guoqiang.yu@uky.edu

    2014-05-12

    Conventional semi-infinite solutions for extracting a blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD_B) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD_B. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD_B (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noises to DCS data resulted in αD_B variations, the mean values of errors in extracting αD_B were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD_B using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for the inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.

  3. Aerodynamic Design of Axial-flow Compressors. Volume III

    NASA Technical Reports Server (NTRS)

    Johnson, Irving A; Bullock, Robert O; Graham, Robert W; Costilow, Eleanor L; Huppert, Merle C; Benser, William A; Herzig, Howard Z; Hansen, Arthur G; Jackson, Robert J; Yohner, Peggy L

    1956-01-01

    Chapters XI to XIII concern the unsteady compressor operation arising when compressor blade elements stall. The fields of compressor stall and surge are reviewed in Chapters XI and XII, respectively. The part-speed operating problem in high-pressure-ratio multistage axial-flow compressors is analyzed in Chapter XIII. Chapter XIV summarizes design methods and theories that extend beyond the simplified two-dimensional approach used previously in the report. Chapter XV extends this three-dimensional treatment by summarizing the literature on secondary flows and boundary-layer effects. Charts for determining the effects of errors in design parameters and experimental measurements on compressor performance are given in Chapter XVI. Chapter XVII reviews existing literature on compressor and turbine matching techniques.

  4. Experimental study of overland flow resistance coefficient model of grassland based on BP neural network

    NASA Astrophysics Data System (ADS)

    Jiao, Peng; Yang, Er; Ni, Yong Xin

    2018-06-01

    The overland flow resistance on a grassland slope of 20° was studied using simulated rainfall experiments. A model of the overland flow resistance coefficient was established based on a BP neural network. The input variables of the model were rainfall intensity, flow velocity, water depth, and roughness of the slope surface, and the output variable was the overland flow resistance coefficient. The model was optimized by a genetic algorithm. The results show that the model can be used to calculate the overland flow resistance coefficient with high simulation accuracy. The average prediction error of the optimized model on the test set was 8.02%, and the maximum prediction error was 18.34%.
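
    A minimal sketch of the regression setup is given below, using a small multilayer perceptron on synthetic data in place of the paper's BP network and omitting the genetic-algorithm optimization step. The four input names, the synthetic target function, and the network size are assumptions for illustration only.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Synthetic stand-ins for the four inputs (rainfall intensity, flow velocity,
# water depth, surface roughness); units and the target function are invented
# for illustration only.
X = rng.uniform(0.1, 1.0, size=(500, 4))
y = 0.05 + 0.3 * X[:, 3] / (0.1 + X[:, 1]) + 0.1 * X[:, 0] * X[:, 2]

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
model = MLPRegressor(hidden_layer_sizes=(10,), max_iter=5000, random_state=0)
model.fit(X_tr, y_tr)

rel_err = np.abs(model.predict(X_te) - y_te) / y_te * 100.0
print(f"mean relative error: {rel_err.mean():.2f}%, max: {rel_err.max():.2f}%")
```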

  5. Analytical skin friction and heat transfer formula for compressible internal flows

    NASA Technical Reports Server (NTRS)

    Dechant, Lawrence J.; Tattar, Marc J.

    1994-01-01

    An analytic, closed-form friction formula for turbulent, internal, compressible, fully developed flow was derived by extending the incompressible law-of-the-wall relation to compressible cases. The model is capable of analyzing heat transfer as a function of constant surface temperatures and surface roughness as well as analyzing adiabatic conditions. The formula reduces to Prandtl's law of friction for adiabatic, smooth, axisymmetric flow. In addition, the formula reduces to the Colebrook equation for incompressible, adiabatic, axisymmetric flow with various roughnesses. Comparisons with available experiments show that the model averages roughly 12.5 percent error for adiabatic flow and 18.5 percent error for flow involving heat transfer.
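
    The abstract notes that the formula reduces to the Colebrook equation in the incompressible, adiabatic limit. As a point of reference, the snippet below solves the standard Colebrook-White relation for the Darcy friction factor by fixed-point iteration; it is the classical incompressible relation, not the authors' compressible formula.

```python
import math

def colebrook_friction_factor(reynolds, rel_roughness, tol=1e-10):
    """Solve the Colebrook-White equation for the Darcy friction factor f:
        1/sqrt(f) = -2 log10( rel_roughness/3.7 + 2.51/(Re sqrt(f)) )
    by fixed-point iteration on x = 1/sqrt(f).
    """
    x = 0.02 ** -0.5                       # initial guess, f ~ 0.02
    for _ in range(100):
        x_new = -2.0 * math.log10(rel_roughness / 3.7 + 2.51 * x / reynolds)
        if abs(x_new - x) < tol:
            break
        x = x_new
    return 1.0 / x_new ** 2

# Smooth pipe (roughness -> 0) at Re = 1e5: f is roughly 0.018
print(colebrook_friction_factor(1e5, 0.0))
```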

  6. A novel PON based UMTS broadband wireless access network architecture with an algorithm to guarantee end to end QoS

    NASA Astrophysics Data System (ADS)

    Sana, Ajaz; Hussain, Shahab; Ali, Mohammed A.; Ahmed, Samir

    2007-09-01

    In this paper we propose a novel Passive Optical Network (PON) based broadband wireless access network architecture to provide multimedia services (video telephony, video streaming, mobile TV, mobile email, etc.) to mobile users. In conventional wireless access networks, the base stations (Node B) and Radio Network Controllers (RNC) are connected by point-to-point T1/E1 lines (Iub interface). The T1/E1 lines are expensive and add to operating costs. Also, the resources (transceivers and T1/E1) are designed for peak-hour traffic, so most of the time the dedicated resources are idle and wasted. Furthermore, the T1/E1 lines are not capable of supporting the bandwidth (BW) required by next-generation wireless multimedia services proposed by High Speed Packet Access (HSPA, Rel. 5) for Universal Mobile Telecommunications System (UMTS) and Evolution Data Only (EV-DO) for Code Division Multiple Access 2000 (CDMA2000). The proposed PON-based backhaul can provide gigabit data rates, and the Iub interface can be dynamically shared by Node Bs. The BW is dynamically allocated, and the unused BW from lightly loaded Node Bs is assigned to heavily loaded Node Bs. We also propose a novel algorithm to provide end-to-end Quality of Service (QoS) between the RNC and the user equipment. The algorithm provides QoS bounds in the wired domain as well as in the wireless domain, with compensation for wireless link errors. Because of the air interface, there can be certain times when the user equipment (UE) is unable to communicate with a Node B (usually referred to as a link error); such link errors are bursty and location dependent. In the proposed approach, the scheduler at the Node B maps QoS priorities and weights into the wireless MAC. Compensation for errored links is provided by swapping of service between active users, and the user data is divided into flows, with flows allowed to lag or lead. The algorithm guarantees (1) delay and throughput for error-free flows, (2) short-term fairness among error-free flows, (3) long-term fairness among errored and error-free flows, and (4) graceful degradation for leading flows and graceful compensation for lagging flows.

  7. A Supersonic Tunnel for Laser and Flow-Seeding Techniques

    NASA Technical Reports Server (NTRS)

    Bruckner, Robert J.; Lepicovsky, Jan

    1994-01-01

    A supersonic wind tunnel with flow conditions of 3 lbm/s (1.5 kg/s) at a free-stream Mach number of 2.5 was designed and tested to provide an arena for future development work on laser measurement and flow-seeding techniques. The hybrid supersonic nozzle design that was used incorporated the rapid expansion method of propulsive nozzles while it maintained the uniform, disturbance-free flow required in supersonic wind tunnels. A viscous analysis was performed on the tunnel to determine the boundary layer growth characteristics along the flowpath. Appropriate corrections were then made to the contour of the nozzle. Axial pressure distributions were measured and Mach number distributions were calculated based on three independent data reduction methods. A complete uncertainty analysis was performed on the precision error of each method. Complex shock-wave patterns were generated in the flow field by wedges mounted near the roof and floor of the tunnel. The most stable shock structure was determined experimentally by the use of a focusing schlieren system and a novel, laser based dynamic shock position sensor. Three potential measurement regions for future laser and flow-seeding studies were created in the shock structure: deceleration through an oblique shock wave of 50 degrees, strong deceleration through a normal shock wave, and acceleration through a supersonic expansion fan containing 25 degrees of flow turning.

  8. FloWave.US: validated, open-source, and flexible software for ultrasound blood flow analysis.

    PubMed

    Coolbaugh, Crystal L; Bush, Emily C; Caskey, Charles F; Damon, Bruce M; Towse, Theodore F

    2016-10-01

    Automated software improves the accuracy and reliability of blood velocity, vessel diameter, blood flow, and shear rate ultrasound measurements, but existing software offers limited flexibility to customize and validate analyses. We developed FloWave.US, open-source software to automate ultrasound blood flow analysis, and demonstrated the validity of its blood velocity (aggregate relative error, 4.32%) and vessel diameter (0.31%) measures with a skeletal muscle ultrasound flow phantom. Compared with a commercial, manual analysis software program, FloWave.US produced equivalent in vivo cardiac cycle time-averaged mean (TAMean) velocities at rest and following a 10-s muscle contraction (mean bias <1 pixel for both conditions). Automated analysis of ultrasound blood flow data was 9.8 times faster than the manual method. Finally, a case study of a lower extremity muscle contraction experiment highlighted the ability of FloWave.US to measure small fluctuations in TAMean velocity, vessel diameter, and mean blood flow at specific time points in the cardiac cycle. In summary, the collective features of our newly designed software (accuracy, reliability, reduced processing time, cost-effectiveness, and flexibility) offer advantages over existing proprietary options. Further, public distribution of FloWave.US allows researchers to easily access and customize code to adapt ultrasound blood flow analysis to a variety of vascular physiology applications. Copyright © 2016 the American Physiological Society.

  9. Assessing and measuring wetland hydrology

    USGS Publications Warehouse

    Rosenberry, Donald O.; Hayashi, Masaki; Anderson, James T.; Davis, Craig A.

    2013-01-01

    Virtually all ecological processes that occur in wetlands are influenced by the water that flows to, from, and within these wetlands. This chapter provides the “how-to” information for quantifying the various source and loss terms associated with wetland hydrology. The chapter is organized from a water-budget perspective, with sections associated with each of the water-budget components that are common in most wetland settings. Methods for quantifying the water contained within the wetland are presented first, followed by discussion of each separate component. Measurement accuracy and sources of error are discussed for each of the methods presented, and a separate section discusses the cumulative error associated with determining a water budget for a wetland. Exercises and field activities will provide hands-on experience that will facilitate greater understanding of these processes.

  10. Estimates of monthly streamflow characteristics at selected sites in the upper Missouri River basin, Montana, base period water years 1937-86

    USGS Publications Warehouse

    Parrett, Charles; Johnson, D.R.; Hull, J.A.

    1989-01-01

    Estimates of streamflow characteristics (monthly mean flow that is exceeded 90, 80, 50, and 20 percent of the time for all years of record and mean monthly flow) were made and are presented in tabular form for 312 sites in the Missouri River basin in Montana. Short-term gaged records were extended to the base period of water years 1937-86, and were used to estimate monthly streamflow characteristics at 100 sites. Data from 47 gaged sites were used in regression analysis relating the streamflow characteristics to basin characteristics and to active-channel width. The basin-characteristics equations, with standard errors of 35% to 97%, were used to estimate streamflow characteristics at 179 ungaged sites. The channel-width equations, with standard errors of 36% to 103%, were used to estimate characteristics at 138 ungaged sites. Streamflow measurements were correlated with concurrent streamflows at nearby gaged sites to estimate streamflow characteristics at 139 ungaged sites. In a test using 20 pairs of gages, the standard errors ranged from 31% to 111%. At 139 ungaged sites, the estimates from two or more of the methods were weighted and combined in accordance with the variance of individual methods. When estimates from three methods were combined, the standard errors ranged from 24% to 63%. A drainage-area-ratio adjustment method was used to estimate monthly streamflow characteristics at seven ungaged sites. The reliability of the drainage-area-ratio adjustment method was estimated to be about equal to that of the basin-characteristics method. The estimates were checked for reliability. Estimates of monthly streamflow characteristics from gaged records were considered to be most reliable, and estimates at sites with actual flow record from 1937-86 were considered to be completely reliable (zero error). Weighted-average estimates were considered to be the most reliable estimates made at ungaged sites. (USGS)
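
    The weighting of estimates from several methods "in accordance with the variance of individual methods" is, in spirit, an inverse-variance combination. The sketch below shows that generic calculation; the weighting scheme and the example numbers are illustrative assumptions, not values from the report.

```python
import numpy as np

def combine_estimates(estimates, std_errors):
    """Inverse-variance weighted combination of independent estimates.

    Returns the combined estimate and its standard error; purely generic,
    the report's exact weighting scheme may differ in detail.
    """
    estimates = np.asarray(estimates, dtype=float)
    variances = np.asarray(std_errors, dtype=float) ** 2
    weights = (1.0 / variances) / np.sum(1.0 / variances)
    combined = float(np.sum(weights * estimates))
    combined_se = float(np.sqrt(1.0 / np.sum(1.0 / variances)))
    return combined, combined_se

# Hypothetical example: three methods estimating one monthly mean flow (ft3/s)
print(combine_estimates([120.0, 150.0, 135.0], [40.0, 55.0, 45.0]))
```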

  11. The Robustness of Acoustic Analogies

    NASA Technical Reports Server (NTRS)

    Freund, J. B.; Lele, S. K.; Wei, M.

    2004-01-01

    Acoustic analogies for the prediction of flow noise are exact rearrangements of the flow equations N(q) = 0, where q is the vector of flow variables, into a nominal sound source S(q) and a sound propagation operator L such that L(q) = S(q). In practice, the sound source is typically modeled and the propagation operator inverted to make predictions. Since the rearrangement is exact, any sufficiently accurate model of the source will yield the correct sound, so other factors must determine the merits of any particular formulation. Using data from a two-dimensional mixing layer direct numerical simulation (DNS), we evaluate the robustness of two analogy formulations to different errors intentionally introduced into the source. The motivation is that since S cannot be perfectly modeled, analogies that are less sensitive to errors in S are preferable. Our assessment is made within the framework of Goldstein's generalized acoustic analogy, in which different choices of a base flow used in constructing L give different sources S and thus different analogies. A uniform base flow yields a Lighthill-like analogy, which we evaluate against a formulation in which the base flow is the actual mean flow of the DNS. The more complex mean-flow formulation is found to be significantly more robust to errors in the energetic turbulent fluctuations, but its advantage is less pronounced when errors are made in the smaller scales.

  12. Application of artificial neural networks for the prediction of volume fraction using spectra of gamma rays backscattered by three-phase flows

    NASA Astrophysics Data System (ADS)

    Gholipour Peyvandi, R.; Islami Rad, S. Z.

    2017-12-01

    The determination of the volume fraction percentage of the different phases flowing in vessels using transmission gamma rays is a conventional method in the petroleum and oil industries. In some cases, with access only to one side of the vessel, attention has been drawn toward backscattered gamma rays as a desirable choice. In this research, the volume fraction percentage was measured precisely in water-gasoil-air three-phase flows by using the backscattered gamma-ray technique and a multilayer perceptron (MLP) neural network. Volume fraction determination in three-phase flows normally requires two gamma radioactive sources or a dual-energy source (with different energies), while in this study we used just a single-energy 137Cs source and a NaI detector to analyze the backscattered gamma rays. The experimental set-up provides the required data for training and testing the network. Using the presented method, the volume fraction was predicted with a mean relative error of less than 6.47%. Also, the root mean square error was calculated as 1.60. The presented set-up is applicable in industries with limited access. Also, using this technique, the cost, radiation safety, and shielding requirements are reduced compared with the other proposed methods.

  13. Groundwater flow and transport modeling

    USGS Publications Warehouse

    Konikow, Leonard F.; Mercer, J.W.

    1988-01-01

    Deterministic, distributed-parameter, numerical simulation models for analyzing groundwater flow and transport problems have come to be used almost routinely during the past decade. A review of the theoretical basis and practical use of groundwater flow and solute transport models is used to illustrate the state-of-the-art. Because of errors and uncertainty in defining model parameters, models must be calibrated to obtain a best estimate of the parameters. For flow modeling, data generally are sufficient to allow calibration. For solute-transport modeling, lack of data not only limits calibration, but also causes uncertainty in process description. Where data are available, model reliability should be assessed on the basis of sensitivity tests and measures of goodness-of-fit. Some of these concepts are demonstrated by using two case histories. © 1988.

  14. Uncertainties in stormwater runoff data collection from a small urban catchment, Southeast China.

    PubMed

    Huang, Jinliang; Tu, Zhenshun; Du, Pengfei; Lin, Jie; Li, Qingsheng

    2010-01-01

    Monitoring data are often used to identify stormwater runoff characteristics and in stormwater runoff modelling without consideration of their inherent uncertainties. Integrated with discrete sample analysis and error propagation analysis, this study attempted to quantify the uncertainties of discrete chemical oxygen demand (COD), total suspended solids (TSS) concentration, stormwater flowrate, stormwater event volumes, COD event mean concentration (EMC), and COD event loads in terms of flow measurement, sample collection, storage and laboratory analysis. The results showed that the uncertainties due to sample collection, storage and laboratory analysis of COD from stormwater runoff were 13.99%, 19.48% and 12.28%, respectively. Meanwhile, the flow measurement uncertainty was 12.82%, and the sample collection uncertainty of TSS from stormwater runoff was 31.63%. Based on the law of propagation of uncertainties, the uncertainties regarding event flow volume, COD EMC and COD event loads were quantified as 7.03%, 10.26% and 18.47%, respectively.
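
    The event-scale figures follow from first-order propagation of independent relative uncertainties. The snippet below shows the generic quadrature rule for a product of measured quantities; the two inputs in the example are placeholders, and the study's full uncertainty budget includes additional terms, so the printed value is not meant to reproduce the reported 18.47%.

```python
import math

def combined_relative_uncertainty(*rel_uncertainties):
    """First-order propagation for a product (or quotient) of independent
    quantities: relative uncertainties add in quadrature."""
    return math.sqrt(sum(u ** 2 for u in rel_uncertainties))

# Placeholder inputs (not the study's values): a load computed as
# concentration x volume, each with its own relative uncertainty.
u_concentration = 0.10   # 10 %
u_volume = 0.15          # 15 %
print(f"{combined_relative_uncertainty(u_concentration, u_volume) * 100:.1f} %")
```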

  15. Individual variation of sap-flow rate in large pine and spruce trees and stand transpiration: a pilot study at the central NOPEX site

    NASA Astrophysics Data System (ADS)

    Čermák, J.; Cienciala, E.; Kučera, J.; Lindroth, A.; Bednářová, E.

    1995-06-01

    Transpiration in a mixed old stand of sub-boreal forest in the Norunda region (central Sweden) was estimated on the basis of direct measurement of sap flow rate in 24 large Scots pine and Norway spruce trees in July and August 1993. Sap flow rate was measured using the trunk tissue heat balance method based on internal (electric) heating and sensing of temperature. Transpiration was only 0.7 mm day⁻¹ in a relatively dry period in July (i.e., about 20% of potential evaporation) and substantially higher after a rainy period in August. The error of the estimates of transpiration was higher during a dry period (about 13% and 22% in pine and spruce, respectively) and significantly lower (about 9% in both species) during a period of sufficient water supply. Shallow-rooted spruce trees responded much faster to precipitation than deeply rooted pines.

  16. The effect of flow data resolution on sediment yield estimation and channel design

    NASA Astrophysics Data System (ADS)

    Rosburg, Tyler T.; Nelson, Peter A.; Sholtes, Joel S.; Bledsoe, Brian P.

    2016-07-01

    The decision to use either daily-averaged or sub-daily streamflow records has the potential to impact the calculation of sediment transport metrics and stream channel design. Using bedload and suspended load sediment transport measurements collected at 138 sites across the United States, we calculated the effective discharge, sediment yield, and half-load discharge using sediment rating curves over long time periods (median record length = 24 years) with both daily-averaged and sub-daily streamflow records. A comparison of sediment transport metrics calculated with both daily-average and sub-daily stream flow data at each site showed that daily-averaged flow data do not adequately represent the magnitude of high stream flows at hydrologically flashy sites. Daily-average stream flow data cause an underestimation of sediment transport and sediment yield (including the half-load discharge) at flashy sites. The degree of underestimation was correlated with the level of flashiness and the exponent of the sediment rating curve. No consistent relationship between the use of either daily-average or sub-daily streamflow data and the resultant effective discharge was found. When used in channel design, computed sediment transport metrics may have errors due to flow data resolution, which can propagate into design slope calculations which, if implemented, could lead to unwanted aggradation or degradation in the design channel. This analysis illustrates the importance of using sub-daily flow data in the calculation of sediment yield in urbanizing or otherwise flashy watersheds. Furthermore, this analysis provides practical charts for estimating and correcting these types of underestimation errors commonly incurred in sediment yield calculations.
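
    The underestimation mechanism is Jensen's inequality: with a rating-curve exponent greater than one, averaging flashy flows before applying the curve lowers the computed load. The sketch below demonstrates this with a synthetic storm hydrograph and an assumed rating curve Qs = a*Q^b; all numbers are illustrative, not taken from the study's 138 sites.

```python
import numpy as np

# Hypothetical flashy hydrograph at 15-minute resolution (m^3/s): a storm
# pulse on top of a 1 m^3/s baseflow, plus its daily-averaged counterpart.
t = np.arange(96)                                              # one day of 15-min steps
q_subdaily = 1.0 + 9.0 * np.exp(-0.5 * ((t - 40) / 6.0) ** 2)
q_daily = np.full_like(q_subdaily, q_subdaily.mean())

a, b = 0.05, 1.8          # illustrative sediment rating curve Qs = a * Q**b (kg/s)
dt = 15 * 60              # seconds per time step

yield_subdaily = np.sum(a * q_subdaily ** b) * dt
yield_daily = np.sum(a * q_daily ** b) * dt
print(f"underestimation from daily averaging: "
      f"{(1.0 - yield_daily / yield_subdaily) * 100:.1f}%")
```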

  17. Accuracy of a pulse-coherent acoustic Doppler profiler in a wave-dominated flow

    USGS Publications Warehouse

    Lacy, J.R.; Sherwood, C.R.

    2004-01-01

    The accuracy of velocities measured by a pulse-coherent acoustic Doppler profiler (PCADP) in the bottom boundary layer of a wave-dominated inner-shelf environment is evaluated. The downward-looking PCADP measured velocities in eight 10-cm cells at 1 Hz. Velocities measured by the PCADP are compared to those measured by an acoustic Doppler velocimeter for wave orbital velocities up to 95 cm s⁻¹ and currents up to 40 cm s⁻¹. An algorithm for correcting ambiguity errors using the resolution velocities was developed. Instrument bias, measured as the average error in burst mean speed, is -0.4 cm s⁻¹ (standard deviation = 0.8). The accuracy (root-mean-square error) of instantaneous velocities has a mean of 8.6 cm s⁻¹ (standard deviation = 6.5) for eastward velocities (the predominant direction of waves), 6.5 cm s⁻¹ (standard deviation = 4.4) for northward velocities, and 2.4 cm s⁻¹ (standard deviation = 1.6) for vertical velocities. Both burst mean and root-mean-square errors are greater for bursts with u_b ≥ 50 cm s⁻¹. Profiles of burst mean speeds from the bottom five cells were fit to logarithmic curves: 92% of bursts with mean speed ≥ 5 cm s⁻¹ have a correlation coefficient R² > 0.96. In cells close to the transducer, instantaneous velocities are noisy, burst mean velocities are biased low, and bottom orbital velocities are biased high. With adequate blanking distances for both the profile and resolution velocities, the PCADP provides sufficient accuracy to measure velocities in the bottom boundary layer under moderately energetic inner-shelf conditions.

  18. Using lean "automation with a human touch" to improve medication safety: a step closer to the "perfect dose".

    PubMed

    Ching, Joan M; Williams, Barbara L; Idemoto, Lori M; Blackmore, C Craig

    2014-08-01

    Virginia Mason Medical Center (Seattle) employed the Lean concept of Jidoka (automation with a human touch) to plan for and deploy bar code medication administration (BCMA) to hospitalized patients. Integrating BCMA technology into the nursing work flow with minimal disruption was accomplished using three steps of Jidoka: (1) assigning work to humans and machines on the basis of their differing abilities, (2) adapting machines to the human work flow, and (3) monitoring the human-machine interaction. Effectiveness of BCMA to both reinforce safe administration practices and reduce medication errors was measured using the Collaborative Alliance for Nursing Outcomes (CALNOC) Medication Administration Accuracy Quality Study methodology. Trained nurses observed a total of 16,149 medication doses for 3,617 patients in a three-year period. Following BCMA implementation, the number of safe practice violations decreased from 54.8 violations/100 doses (January 2010-September 2011) to 29.0 violations/100 doses (October 2011-December 2012), resulting in an absolute risk reduction of 25.8 violations/100 doses (95% confidence interval [CI]: 23.7, 27.9, p < .001). The number of medication errors decreased from 5.9 errors/100 doses at baseline to 3.0 errors/100 doses after BCMA implementation (absolute risk reduction: 2.9 errors/100 doses [95% CI: 2.2, 3.6, p < .001]). The number of unsafe administration practices (estimate, -5.481; standard error 1.133; p < .001; 95% CI: -7.702, -3.260) also decreased. As more hospitals respond to health information technology meaningful use incentives, thoughtful, methodical, and well-managed approaches to technology deployment are crucial. This work illustrates how Jidoka offers opportunities for a smooth transition to new technology.
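
    The reported risk reductions are differences of event proportions with normal-approximation confidence intervals. The snippet below shows that generic two-proportion calculation; the dose counts in the example are hypothetical values chosen only to land near the reported baseline and post-implementation rates, not the study's actual tallies.

```python
import math

def risk_reduction_per_100(events_pre, doses_pre, events_post, doses_post):
    """Absolute risk reduction per 100 doses with a 95% normal-approximation CI.

    A generic two-proportion calculation, not necessarily the exact method
    used in the cited study.
    """
    p1, p2 = events_pre / doses_pre, events_post / doses_post
    arr = (p1 - p2) * 100.0
    se = math.sqrt(p1 * (1 - p1) / doses_pre + p2 * (1 - p2) / doses_post) * 100.0
    return arr, (arr - 1.96 * se, arr + 1.96 * se)

# Hypothetical counts consistent with roughly 5.9 vs 3.0 errors per 100 doses
print(risk_reduction_per_100(413, 7000, 274, 9149))
```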

  19. Intercomparison of terrain-following coordinate transformation and immersed boundary methods in large-eddy simulation of wind fields over complex terrain

    NASA Astrophysics Data System (ADS)

    Fang, Jiannong; Porté-Agel, Fernando

    2016-09-01

    Accurate modeling of complex terrain, especially steep terrain, in the simulation of wind fields remains a challenge. It is well known that the terrain-following coordinate transformation method (TFCT) generally used in atmospheric flow simulations is restricted to non-steep terrain with slope angles less than 45 degrees. Due to the advantage of keeping the basic computational grids and numerical schemes unchanged, the immersed boundary method (IBM) has been widely implemented in various numerical codes to handle arbitrary domain geometry including steep terrain. However, IBM could introduce considerable implementation errors in wall modeling through various interpolations because an immersed boundary is generally not co-located with a grid line. In this paper, we perform an intercomparison of TFCT and IBM in large-eddy simulation of a turbulent wind field over a three-dimensional (3D) hill for the purpose of evaluating the implementation errors in IBM. The slopes of the three-dimensional hill are not steep and, therefore, TFCT can be applied. Since TFCT is free from interpolation-induced implementation errors in wall modeling, its results can serve as a reference for the evaluation so that the influence of errors from wall models themselves can be excluded. For TFCT, a new algorithm for solving the pressure Poisson equation in the transformed coordinate system is proposed and first validated for a laminar flow over periodic two-dimensional hills by comparing with a benchmark solution. For the turbulent flow over the 3D hill, the wind-tunnel measurements used for validation contain both vertical and horizontal profiles of mean velocities and variances, thus allowing an in-depth comparison of the numerical models. In this case, TFCT is expected to be preferable to IBM. This is confirmed by the presented results of comparison. It is shown that the implementation errors in IBM lead to large discrepancies between the results obtained by TFCT and IBM near the surface. The effects of different schemes used to implement wall boundary conditions in IBM are studied. The source of errors and possible ways to improve the IBM implementation are discussed.

  20. von Kármán swirling flow between a rotating and a stationary smooth disk: Experiment

    NASA Astrophysics Data System (ADS)

    Mukherjee, Aryesh; Steinberg, Victor

    2018-01-01

    Precise measurements of the torque in a von Kármán swirling flow between a rotating and a stationary smooth disk in three Newtonian fluids with different dynamic viscosities are reported. From these measurements the dependence of the normalized torque, called the friction coefficient, on Re is found to be of the form C_f = 1.17(±0.03) Re^(−0.46±0.003), where the scaling exponent and coefficient are close to those predicted theoretically for an infinite, unshrouded, smooth rotating disk, which follows from an exact similarity solution of the Navier-Stokes equations obtained by von Kármán. An error analysis shows that deviations from the theory can be partially caused by background errors. Measurements of the azimuthal velocity V_θ and the axial velocity profiles along the radial and axial directions reveal that the flow core rotates at V_θ/(rΩ) ≈ 0.22 (up to z ≈ 4 cm from the rotating disk and up to r_0/R ≈ 0.25 in the radial direction) in spite of the small aspect ratio of the vessel. Thus the friction coefficient shows scaling close to that obtained from the von Kármán exact similarity solution, but the observed rotating core provides evidence of a Batchelor-like solution [Q. J. Mech. Appl. Math. 4, 29 (1951), 10.1093/qjmam/4.1.29] different from the von Kármán [Z. Angew. Math. Mech. 1, 233 (1921), 10.1002/zamm.19210010401] or Stewartson [Proc. Camb. Philos. Soc. 49, 333 (1953), 10.1017/S0305004100028437] solution.
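
    Recovering a scaling law such as C_f = 1.17 Re^(−0.46) from torque data is a straight-line fit in log-log space. The sketch below performs that fit on synthetic data generated from the reported scaling with added noise; it illustrates the regression step only, not the experimental torque processing.

```python
import numpy as np

def fit_power_law(re, cf):
    """Least-squares fit of Cf = A * Re**n in log-log space; returns (A, n)."""
    n, log_a = np.polyfit(np.log(re), np.log(cf), 1)
    return np.exp(log_a), n

# Synthetic friction-coefficient data following the reported scaling, with 2% noise
rng = np.random.default_rng(2)
re = np.logspace(4, 6, 30)
cf = 1.17 * re ** -0.46 * (1 + 0.02 * rng.standard_normal(re.size))
print(fit_power_law(re, cf))   # approximately (1.17, -0.46)
```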

  1. Absolute wavelength calibration of a Doppler spectrometer with a custom Fabry-Perot optical system

    NASA Astrophysics Data System (ADS)

    Baltzer, M. M.; Craig, D.; Den Hartog, D. J.; Nishizawa, T.; Nornberg, M. D.

    2016-11-01

    An Ion Doppler Spectrometer (IDS) is used for fast measurements of C VI line emission (343.4 nm) in the Madison Symmetric Torus. Absolutely calibrated flow measurements are difficult because the IDS records data within 0.25 nm of the line. Commercial calibration lamps do not produce lines in this narrow range. A light source using an ultraviolet LED and etalon was designed to provide a fiducial marker 0.08 nm wide. The light is coupled into the IDS at f/4, and a holographic diffuser increases homogeneity of the final image. Random and systematic errors in data analysis were assessed. The calibration is accurate to 0.003 nm, allowing for flow measurements accurate to 3 km/s. This calibration is superior to the previous method which used a time-averaged measurement along a chord believed to have zero net Doppler shift.
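
    The quoted accuracies are linked by the first-order Doppler relation v = c Δλ/λ0. The snippet below evaluates it for the C VI line used here; the 0.003 nm input is the calibration accuracy stated in the abstract, and the helper name is illustrative.

```python
# Doppler velocity for a small wavelength shift: v = c * dlambda / lambda0
C_LIGHT_KM_S = 2.998e5          # speed of light in km/s

def doppler_velocity(delta_lambda_nm, lambda0_nm=343.4):
    return C_LIGHT_KM_S * delta_lambda_nm / lambda0_nm

# A 0.003 nm calibration accuracy on the 343.4 nm C VI line corresponds to
# roughly 2.6 km/s, consistent with the quoted ~3 km/s flow accuracy.
print(doppler_velocity(0.003))
```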

  2. Absolute wavelength calibration of a Doppler spectrometer with a custom Fabry-Perot optical system.

    PubMed

    Baltzer, M M; Craig, D; Den Hartog, D J; Nishizawa, T; Nornberg, M D

    2016-11-01

    An Ion Doppler Spectrometer (IDS) is used for fast measurements of C VI line emission (343.4 nm) in the Madison Symmetric Torus. Absolutely calibrated flow measurements are difficult because the IDS records data within 0.25 nm of the line. Commercial calibration lamps do not produce lines in this narrow range. A light source using an ultraviolet LED and etalon was designed to provide a fiducial marker 0.08 nm wide. The light is coupled into the IDS at f/4, and a holographic diffuser increases homogeneity of the final image. Random and systematic errors in data analysis were assessed. The calibration is accurate to 0.003 nm, allowing for flow measurements accurate to 3 km/s. This calibration is superior to the previous method which used a time-averaged measurement along a chord believed to have zero net Doppler shift.

  3. Design and Fabrication of a MEMS Flow Sensor and Its Application in Precise Liquid Dispensing

    PubMed Central

    Liu, Yaxin; Chen, Liguo; Sun, Lining

    2009-01-01

    A high speed MEMS flow sensor to enhance the reliability and accuracy of a liquid dispensing system is proposed. Benefitting from the sensor information feedback, the system can self-adjust the open time of the solenoid valve to accurately dispense desired volumes of reagent without any pre-calibration. First, an integrated high-speed liquid flow sensor based on the measurement of the pressure difference across a flow channel is presented. Dimensions of the micro-flow channel and two pressure sensors have been appropriately designed to meet the static and dynamic requirements of the liquid dispensing system. Experimental results show that the full scale (FS) flow measurement ranges up to 80 μL/s, with a nonlinearity better than 0.51% FS. Secondly, a novel closed-loop control strategy is proposed to calculate the valve open time in each dispensing cycle, which makes the system immune to liquid viscosity, pressure fluctuation, and other sources of error. Finally, dispensing results show that the system can achieve better dispensing performance, and the coefficient of variation (CV) for liquid dispensing is below 3% at 1 μL and below 4% at 100 nL. PMID:22408517
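
    The closed-loop idea is that each dispense cycle integrates the measured flow into a delivered volume and then corrects the next valve open time from the volume error. The sketch below shows one such proportional correction step; the gain, the limits, and the function name are illustrative assumptions and do not reproduce the paper's control law.

```python
def update_valve_open_time(t_open, measured_volume, target_volume, gain=0.5,
                           t_min=1e-3, t_max=0.5):
    """One step of a simple proportional correction of the valve open time (s).

    Generic feedback rule for illustration: the volume error is converted to a
    time correction using the current volume-per-unit-time estimate.
    """
    error = target_volume - measured_volume
    flow_estimate = measured_volume / t_open if t_open > 0 else 1.0
    t_new = t_open + gain * error / flow_estimate
    return min(max(t_new, t_min), t_max)

# Example: the last shot delivered 0.92 uL in 10 ms while 1.00 uL was requested.
print(update_valve_open_time(0.010, measured_volume=0.92, target_volume=1.00))
```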

  4. Design and Fabrication of a MEMS Flow Sensor and Its Application in Precise Liquid Dispensing.

    PubMed

    Liu, Yaxin; Chen, Liguo; Sun, Lining

    2009-01-01

    A high speed MEMS flow sensor to enhance the reliability and accuracy of a liquid dispensing system is proposed. Benefitting from the sensor information feedback, the system can self-adjust the open time of the solenoid valve to accurately dispense desired volumes of reagent without any pre-calibration. First, an integrated high-speed liquid flow sensor based on the measurement of the pressure difference across a flow channel is presented. Dimensions of the micro-flow channel and two pressure sensors have been appropriately designed to meet the static and dynamic requirements of the liquid dispensing system. Experimental results show that the full scale (FS) flow measurement ranges up to 80 μL/s, with a nonlinearity better than 0.51% FS. Secondly, a novel closed-loop control strategy is proposed to calculate the valve open time in each dispensing cycle, which makes the system immune to liquid viscosity, pressure fluctuation, and other sources of error. Finally, dispensing results show that the system can achieve better dispensing performance, and the coefficient of variation (CV) for liquid dispensing is below 3% at 1 μL and below 4% at 100 nL.

  5. Application of low-dimensional techniques for closed-loop control of turbulent flows

    NASA Astrophysics Data System (ADS)

    Ausseur, Julie

    The groundwork for an advanced closed-loop control of separated shear layer flows is laid out in this document. The experimental testbed for the present investigation is the turbulent flow over a NACA-4412 model airfoil tested in the Syracuse University subsonic wind tunnel at Re=135,000. The specified control objective is to delay separation - or stall - by constantly keeping the flow attached to the surface of the wing. The proper orthogonal decomposition (POD) is shown to be a valuable tool to provide a low-dimensional estimate of the flow state, and the first POD expansion coefficient is proposed to be used as the control variable. Other reduced-order techniques such as the modified linear and quadratic stochastic measurement methods (mLSM, mQSM) are applied to reduce the complexity of the flow field and their ability to accurately estimate the flow state from surface pressure measurements alone is examined. A simple proportional feedback control is successfully implemented in real-time using these tools and flow separation is efficiently delayed by over 3 degrees angle of attack. To further improve the quality of the flow state estimate, the implementation of a Kalman filter is foreseen, in which the knowledge of the flow dynamics is added to the computation of the control variable to correct for the potential measurement errors. To this aim, a reduced-order model (ROM) of the flow is developed using the least-squares method to obtain the coefficients of the POD/Galerkin projection of the Navier-Stokes equations from experimental data. To build the training ensemble needed in this experimental procedure, the spectral mLSM is performed to generate time-resolved series of POD expansion coefficients from which temporal derivatives are computed. This technique, which is applied to independent PIV velocity snapshots and time-resolved surface measurements, is able to retrieve the rational temporal evolution of the flow physics in the entire 2-D measurement area. The quality of the spectral measurements is confirmed by the results from both the linear and quadratic dynamical systems. The preliminary results from the linear ROM strengthens the motivation for future control implementation of a linear Kalman filter in this flow.
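
    The proper orthogonal decomposition used throughout this work can be computed directly from a snapshot matrix with a singular value decomposition. The sketch below shows that generic construction on a synthetic two-mode field; it illustrates POD itself, not the stochastic-estimation or Kalman-filter machinery described in the abstract, and all names and data are illustrative.

```python
import numpy as np

def pod_modes(snapshots, n_modes=4):
    """Proper orthogonal decomposition of a snapshot matrix via SVD.

    snapshots: array of shape (n_points, n_snapshots); each column is one
    velocity snapshot flattened into a vector. Returns the leading spatial
    modes and the corresponding time (expansion) coefficients.
    """
    fluct = snapshots - snapshots.mean(axis=1, keepdims=True)   # remove mean flow
    U, s, Vt = np.linalg.svd(fluct, full_matrices=False)
    modes = U[:, :n_modes]                                      # spatial POD modes
    coeffs = np.diag(s[:n_modes]) @ Vt[:n_modes]                # a_i(t)
    return modes, coeffs

# Synthetic example: 200 snapshots of a 500-point field with two coherent modes
rng = np.random.default_rng(3)
x = np.linspace(0, 2 * np.pi, 500)[:, None]
t = np.linspace(0, 10, 200)[None, :]
field = np.sin(x) * np.cos(2 * t) + 0.3 * np.sin(2 * x) * np.sin(5 * t)
modes, coeffs = pod_modes(field + 0.01 * rng.standard_normal(field.shape))
print(modes.shape, coeffs.shape)    # (500, 4) (4, 200)
```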

  6. An integral formulation for wave propagation on weakly non-uniform potential flows

    NASA Astrophysics Data System (ADS)

    Mancini, Simone; Astley, R. Jeremy; Sinayoko, Samuel; Gabard, Gwénaël; Tournour, Michel

    2016-12-01

    An integral formulation for acoustic radiation in moving flows is presented. It is based on a potential formulation for acoustic radiation on weakly non-uniform subsonic mean flows. This work is motivated by the absence of suitable kernels for wave propagation on non-uniform flow. The integral solution is formulated using a Green's function obtained by combining the Taylor and Lorentz transformations. Although most conventional approaches based on either transform solve the Helmholtz problem in a transformed domain, the current Green's function and associated integral equation are derived in the physical space. A dimensional error analysis is developed to identify the limitations of the current formulation. Numerical applications are performed to assess the accuracy of the integral solution. It is tested as a means of extrapolating a numerical solution available on the outer boundary of a domain to the far field, and as a means of solving scattering problems by rigid surfaces in non-uniform flows. The results show that the error associated with the physical model deteriorates with increasing frequency and mean flow Mach number. However, the error is generated only in the domain where mean flow non-uniformities are significant and is constant in regions where the flow is uniform.

  7. Use of 3H/3He Ages to evaluate and improve groundwater flow models in a complex buried-valley aquifer

    USGS Publications Warehouse

    Sheets, Rodney A.; Bair, E. Scott; Rowe, Gary L.

    1998-01-01

    Combined use of the tritium/helium 3 (3H/3He) dating technique and particle-tracking analysis can improve flow-model calibration. As shown at two sites in the Great Miami buried-valley aquifer in southwestern Ohio, the combined use of 3H/3He age dating and particle tracking led to a lower mean absolute error between measured heads and simulated heads than in the original calibrated models and/or between simulated travel times and 3H/3He ages. Apparent groundwater ages were obtained for water samples collected from 44 wells at two locations where previously constructed finite difference models of groundwater flow were available (Mound Plant and Wright-Patterson Air Force Base (WPAFB)). The two-layer Mound Plant model covers 11 km2 within the buried-valley aquifer. The WPAFB model has three layers and covers 262 km2 within the buried-valley aquifer and adjacent bedrock uplands. Sampled wells were chosen along flow paths determined from potentiometric maps or particle-tracking analyses. Water samples were collected at various depths within the aquifer. In the Mound Plant area, samples used for comparison of 3H/3He ages with simulated travel times were from wells completed in the uppermost model layer. Simulated travel times agreed well with 3H/3He ages. The mean absolute error (MAE) was 3.5 years. Agreement in ages at WPAFB decreased with increasing depth in the system. The MAEs were 1.63, 17.2, and 255 years for model layers 1, 2, and 3, respectively. Discrepancies between the simulated travel times and 3H/3He ages were assumed to be due to improper conceptualization or incorrect parameterization of the flow models. Selected conceptual and parameter modifications to the models resulted in improved agreement between 3H/3He ages and simulated travel times and between measured and simulated heads and flows.

  8. Errors in fluid balance with pump control of continuous hemodialysis.

    PubMed

    Roberts, M; Winney, R J

    1992-02-01

    The use of pumps both proximal and distal to the dialyzer during continuous hemodialysis provides control of dialysate and ultrafiltration flow rates, thereby reducing nursing time. However, we had noted unexpected severe extracellular fluid depletion suggesting that errors in pump delivery may be responsible. We measured in vitro the operation of various pumps under conditions similar to continuous hemodialysis. Fluid delivery of peristaltic and roller pumps varied with how the tubing set was inserted in the pump. Piston and peristaltic pumps with dedicated pump segments were more accurate. Pumps should be calibrated and tested under conditions simulating continuous hemodialysis prior to in vivo use.

  9. Challenges and Opportunities of Long-Term Continuous Stream Metabolism Measurements at the National Ecological Observatory Network

    NASA Astrophysics Data System (ADS)

    Goodman, K. J.; Lunch, C. K.; Baxter, C.; Hall, R.; Holtgrieve, G. W.; Roberts, B. J.; Marcarelli, A. M.; Tank, J. L.

    2013-12-01

    Recent advances in dissolved oxygen sensing and modeling have made continuous measurements of whole-stream metabolism relatively easy to make, allowing ecologists to quantify and evaluate stream ecosystem health at expanded temporal and spatial scales. Long-term monitoring of continuous stream metabolism will enable a better understanding of the integrated and complex effects of anthropogenic change (e.g., land-use, climate, atmospheric deposition, invasive species, etc.) on stream ecosystem function. In addition to their value in the particular streams measured, information derived from long-term data will improve the ability to extrapolate from shorter-term data. With the need to better understand drivers and responses of whole-stream metabolism come difficulties in interpreting the results. Long-term trends will encompass physical changes in stream morphology and flow regime (e.g., variable flow conditions and changes in channel structure) combined with changes in biota. Additionally long-term data sets will require an organized database structure, careful quantification of errors and uncertainties, as well as propagation of error as a result of the calculation of metabolism metrics. Parsing of continuous data and the choice of modeling approaches can also have a large influence on results and on error estimation. The two main modeling challenges include 1) obtaining unbiased, low-error daily estimates of gross primary production (GPP) and ecosystem respiration (ER), and 2) interpreting GPP and ER measurements over extended time periods. The National Ecological Observatory Network (NEON), in partnership with academic and government scientists, has begun to tackle several of these challenges as it prepares for the collection and calculation of 30 years of continuous whole-stream metabolism data. NEON is a national-scale research platform that will use consistent procedures and protocols to standardize measurements across the United States, providing long-term, high-quality, open-access data from a connected network to address large-scale change. NEON infrastructure will support 36 aquatic sites across 19 ecoclimatic domains. Sites include core sites, which remain for 30 years, and relocatable sites, which move to capture regional gradients. NEON will measure continuous whole-stream metabolism in conjunction with aquatic, terrestrial and airborne observations, allowing researchers to link stream ecosystem function with landscape and climatic drivers encompassing short to long time periods (i.e., decades).

  10. Factors Influencing Pitot Probe Centerline Displacement in a Turbulent Supersonic Boundary Layer

    NASA Technical Reports Server (NTRS)

    Grosser, Wendy I.

    1997-01-01

    When a total pressure probe is used for measuring flows with transverse total pressure gradients, a displacement of the effective center of the probe is observed (designated Delta). While this phenomenon is well documented in incompressible flow and supersonic laminar flow, there is insufficient information concerning supersonic turbulent flow. In this study, three NASA Lewis Research Center Supersonic Wind Tunnels (SWT's) were used to investigate pitot probe centerline displacement in supersonic turbulent boundary layers. The relationship between test conditions and pitot probe centerline displacement error was to be determined. For this investigation, ten circular probes with diameter-to-boundary-layer-thickness ratios (D/delta) ranging from 0.015 to 0.256 were tested in the 10 ft x 10 ft SWT, the 15 cm x 15 cm SWT, and the 1 ft x 1 ft SWT. Reynolds numbers of 4.27 x 10(exp 6)/m, 6.00 x 10(exp 6)/m, 10.33 x 10(exp 6)/m, and 16.9 x 10(exp 6)/m were tested at nominal Mach numbers of 2.0 and 2.5. Boundary layer thicknesses for the three tunnels were approximately 200 mm, 13 mm, and 30 mm, respectively. Initial results indicate that boundary layer thickness, delta, and the probe diameter-to-thickness ratio, D/delta, play a minimal role in pitot probe centerline offset error, Delta/D. It appears that the Mach gradient, dM/dy, is an important factor, though the exact relationship has not yet been determined. More data are needed to fill the map before a conclusion can be drawn with any certainty. This research provides valuable supersonic, turbulent boundary layer data from three supersonic wind tunnels with three very different boundary layers. It will prove a valuable stepping stone for future research into the factors influencing pitot probe centerline offset error.

  11. Negative elliptic flow of J/ψ's: A qualitative signature for charm collectivity at RHIC

    NASA Astrophysics Data System (ADS)

    Krieg, D.; Bleicher, M.

    2009-01-01

    We discuss one of the most prominent features of the very recent preliminary elliptic flow data of J/ψ-mesons from the PHENIX Collaboration (PHENIX Collaboration (C. Silvestre), arXiv:0806.0475 [nucl-ex]). Even within the rather large error bars of the measured data a negative elliptic flow parameter (v_2) for J/ψ in the range of p_T = 0.5-2.5 GeV/c is visible. We argue that this negative elliptic flow at intermediate p_T is a clear and qualitative signature for the collectivity of charm quarks produced in nucleus-nucleus reactions at RHIC. Within a parton recombination approach we show that a negative elliptic flow puts a lower limit on the collective transverse velocity of heavy quarks. The numerical value of the transverse flow velocity β_T for charm quarks that is necessary to reproduce the data is β_T(charm) ≈ 0.55-0.6 c and therefore compatible with the flow of light quarks.

  12. Verification of a three-dimensional resin transfer molding process simulation model

    NASA Technical Reports Server (NTRS)

    Fingerson, John C.; Loos, Alfred C.; Dexter, H. Benson

    1995-01-01

    Experimental evidence was obtained to complete the verification of the parameters needed for input to a three-dimensional finite element model simulating the resin flow and cure through an orthotropic fabric preform. The material characterizations completed include resin kinetics and viscosity models, as well as preform permeability and compaction models. The steady-state and advancing front permeability measurement methods are compared. The results indicate that both methods yield similar permeabilities for a plain weave, bi-axial fiberglass fabric. Also, a method to determine principal directions and permeabilities is discussed and results are shown for a multi-axial warp knit preform. The flow of resin through a blade-stiffened preform was modeled and experiments were completed to verify the results. The predicted inlet pressure was approximately 65% of the measured value. A parametric study was performed to explain differences in measured and predicted flow front advancement and inlet pressures. Furthermore, PR-500 epoxy resin/IM7 8HS carbon fabric flat panels were fabricated by the Resin Transfer Molding process. Tests were completed utilizing both perimeter injection and center-port injection as resin inlet boundary conditions. The mold was instrumented with FDEMS sensors, pressure transducers, and thermocouples to monitor the process conditions. Results include a comparison of predicted and measured inlet pressures and flow front position. For the perimeter injection case, the measured inlet pressure and flow front results compared well to the predicted results. The results of the center-port injection case showed that the predicted inlet pressure was approximately 50% of the measured inlet pressure. Also, measured flow front position data did not agree well with the predicted results. Possible reasons for error include fiber deformation at the resin inlet and a lag in FDEMS sensor wet-out due to low mold pressures.

  13. An Automated Measurement of Ciliary Beating Frequency using a Combined Optical Flow and Peak Detection.

    PubMed

    Kim, Woojae; Han, Tae Hwa; Kim, Hyun Jun; Park, Man Young; Kim, Ku Sang; Park, Rae Woong

    2011-06-01

    The mucociliary transport system is a major defense mechanism of the respiratory tract. The performance of mucous transportation in the nasal cavity can be represented by a ciliary beating frequency (CBF). This study proposes a novel method to measure CBF by using optical flow. To obtain objective estimates of CBF from video images, an automated computer-based image processing technique is developed. This study proposes a new method based on optical flow for image processing and peak detection for signal processing. We compare the measuring accuracy of the method in various combinations of image processing (optical flow versus difference image) and signal processing (fast Fourier transform [FFT] vs. peak detection [PD]). The digital high-speed video method with a manual count of CBF in slow motion video play, is the gold-standard in CBF measurement. We obtained a total of fifty recorded ciliated sinonasal epithelium images to measure CBF from the Department of Otolaryngology. The ciliated sinonasal epithelium images were recorded at 50-100 frames per second using a charge coupled device camera with an inverted microscope at a magnification of ×1,000. The mean square errors and variance for each method were 1.24, 0.84 Hz; 11.8, 2.63 Hz; 3.22, 1.46 Hz; and 3.82, 1.53 Hz for optical flow (OF) + PD, OF + FFT, difference image [DI] + PD, and DI + FFT, respectively. Of the four methods, PD using optical flow showed the best performance for measuring the CBF of nasal mucosa. The proposed method was able to measure CBF more objectively and efficiently than what is currently possible.
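
    The peak-detection step amounts to counting maxima in a periodic intensity (or optical-flow magnitude) trace and dividing by the record length. The sketch below shows that step on a synthetic 10 Hz signal; the peak-spacing and prominence settings are illustrative assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import find_peaks

def ciliary_beat_frequency(signal, fps):
    """Estimate beat frequency (Hz) by counting peaks in a per-frame trace
    extracted from the video (e.g., mean optical-flow magnitude).
    Illustrative of the peak-detection step only."""
    peaks, _ = find_peaks(signal, distance=max(1, int(fps / 30)), prominence=0.5)
    duration_s = len(signal) / fps
    return len(peaks) / duration_s

# Synthetic 10 Hz oscillation sampled at 100 frames per second with mild noise
fps = 100
t = np.arange(0, 2.0, 1.0 / fps)
trace = np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.default_rng(4).standard_normal(t.size)
print(ciliary_beat_frequency(trace, fps))   # approximately 10 Hz
```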

  14. Lava effusion rate definition and measurement: a review

    USGS Publications Warehouse

    Calvari, Sonia; Dehn, Jonathan; Harris, A.

    2007-01-01

    Measurement of effusion rate is a primary objective for studies that model lava flow and magma system dynamics, as well as for monitoring efforts during on-going eruptions. However, its exact definition remains a source of confusion, and problems occur when comparing volume flux values that are averaged over different time periods or spatial scales, or measured using different approaches. Thus our aims are to: (1) define effusion rate terminology; and (2) assess the various measurement methods and their results. We first distinguish between instantaneous effusion rate, and time-averaged discharge rate. Eruption rate is next defined as the total volume of lava emplaced since the beginning of the eruption divided by the time since the eruption began. The ultimate extension of this is mean output rate, this being the final volume of erupted lava divided by total eruption duration. Whether these values are total values, i.e. the flux feeding all flow units across the entire flow field, or local, i.e. the flux feeding a single active unit within a flow field across which many units are active, also needs to be specified. No approach is without its problems, and all can have large error (up to ∼50%). However, good agreement between diverse approaches shows that reliable estimates can be made if each approach is applied carefully and takes into account the caveats we detail here. There are three important factors to consider and state when measuring, giving or using an effusion rate. First, the time-period over which the value was averaged; second, whether the measurement applies to the entire active flow field, or a single lava flow within that field; and third, the measurement technique and its accompanying assumptions.
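
    The terminology distinction reduces to which volume and which time window enter a simple ratio. The snippet below encodes the time-averaged discharge rate and mean output rate definitions from the abstract; the example volume and duration are hypothetical.

```python
def time_averaged_discharge_rate(volume_m3, seconds):
    """Volume flux averaged over a stated time window (m^3/s)."""
    return volume_m3 / seconds

def mean_output_rate(total_volume_m3, eruption_duration_s):
    """Final erupted volume divided by total eruption duration (m^3/s)."""
    return total_volume_m3 / eruption_duration_s

# Example: 1.2e6 m^3 of lava emplaced over 3 days gives ~4.6 m^3/s, which is a
# time-averaged discharge rate, not an instantaneous effusion rate.
print(time_averaged_discharge_rate(1.2e6, 3 * 24 * 3600))
```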

  15. Experimental measurement of structural power flow on an aircraft fuselage

    NASA Technical Reports Server (NTRS)

    Cuschieri, J. M.

    1991-01-01

    An experimental technique is used to measure structural intensity through an aircraft fuselage with an excitation load applied near one of the wing attachment locations. The fuselage is a relatively large structure, requiring a large number of measurement locations to analyze the whole of the structure. For the measurement of structural intensity, multiple point measurements are necessary at every location of interest. A tradeoff is therefore required between the number of measurement transducers, the mounting of these transducers, and the accuracy of the measurements. Using four transducers mounted on a bakelite platform, structural intensity vectors are measured at locations distributed throughout the fuselage. To minimize the errors associated with using the four transducer technique, the measurement locations are selected to be away from bulkheads and stiffeners. Furthermore, to eliminate phase errors between the four transducer measurements, two sets of data are collected for each position, with the orientation of the platform with the four transducers rotated by 180 degrees and an average taken between the two sets of data. The results of these measurements together with a discussion of the suitability of the approach for measuring structural intensity on a real structure are presented.

  16. Low-flow characteristics of streams in Virginia

    USGS Publications Warehouse

    Hayes, Donald C.

    1991-01-01

    Streamflow data were collected and low-flow characteristics computed for 715 gaged sites in Virginia. Annual minimum average 7-consecutive-day flows range from 0 to 2,195 cubic feet per second for a 2-year recurrence interval and from 0 to 1,423 cubic feet per second for a 10-year recurrence interval. Drainage areas range from 0.17 to 7,320 square miles. Existing and discontinued gaged sites are separated into three types: long-term continuous-record sites, short-term continuous-record sites, and partial-record sites. Low-flow characteristics for long-term continuous-record sites are determined from frequency curves of annual minimum average 7-consecutive-day flows. Low-flow characteristics for short-term continuous-record sites are estimated by relating daily mean base-flow discharge values at a short-term site to concurrent daily mean discharge values at nearby long-term continuous-record sites having similar basin characteristics. Low-flow characteristics for partial-record sites are estimated by relating base-flow measurements to daily mean discharge values at long-term continuous-record sites. Information from the continuous-record sites and partial-record sites in Virginia is used to develop two techniques for estimating low-flow characteristics at ungaged sites. A flow-routing method is developed to estimate low-flow values at ungaged sites on gaged streams. Regional regression equations are developed for estimating low-flow values at ungaged sites on ungaged streams. The flow-routing method consists of transferring low-flow characteristics from a gaged site, either upstream or downstream, to a desired ungaged site. A simple drainage-area proration is used to transfer values when there are no major tributaries between the gaged and ungaged sites. Standard errors of estimate for 108 test sites are 19 percent of the mean for estimates of low-flow characteristics having a 2-year recurrence interval and 52 percent of the mean for estimates of low-flow characteristics having a 10-year recurrence interval. A more complex transfer method must be used when major tributaries enter the stream between the gaged and ungaged sites. Twenty-four stream networks are analyzed, and predictions are made for 84 sites. Standard errors of estimate are 15 percent of the mean for estimates of low-flow characteristics having a 2-year recurrence interval and 22 percent of the mean for estimates of low-flow characteristics having a 10-year recurrence interval. Regional regression equations were developed for estimating low-flow values at ungaged sites on ungaged streams. The State was divided into eight regions on the basis of physiography and geographic grouping of the residuals computed in regression analyses. Basin characteristics that were significant in the regression analysis were drainage area, rock type, and strip-mined area. Standard errors of prediction range from 60 to 139 percent for estimates of low-flow characteristics having a 2-year recurrence interval and 90 to 172 percent for estimates of low-flow characteristics having a 10-year recurrence interval.
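
    The simple drainage-area proration mentioned above transfers a low-flow statistic in proportion to drainage area when no major tributaries intervene between the gaged and ungaged sites. A minimal sketch follows; the example discharge and areas are hypothetical.

```python
def prorate_low_flow(q_gaged, area_gaged_mi2, area_ungaged_mi2):
    """Transfer a low-flow characteristic from a gaged to an ungaged site on
    the same stream by simple drainage-area proration (valid only when no
    major tributaries enter between the two sites)."""
    return q_gaged * area_ungaged_mi2 / area_gaged_mi2

# Example: a 7-day, 2-year low flow of 12 ft^3/s at a 150 mi^2 gage,
# transferred to an ungaged site draining 95 mi^2.
print(prorate_low_flow(12.0, 150.0, 95.0))   # 7.6 ft^3/s
```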

  17. The effect of inflation on the morphology-derived rheological parameters of lava flows and its implications for interpreting remote sensing data - A case study on the 2014/2015 eruption at Holuhraun, Iceland

    NASA Astrophysics Data System (ADS)

    Kolzenburg, S.; Jaenicke, J.; Münzer, U.; Dingwell, D. B.

    2018-05-01

    Morphology-derived lava flow rheology is a frequently used tool in volcanology and planetary science to determine rheological parameters and deduce the composition of lavas on terrestrial planets and their moons. These calculations are usually based on physical equations incorporating 1) lava flow driving forces: gravity, slope, and flow rate, and 2) morphological data such as lava flow geometry: flow-width, -height or shape of the flow outline. All available methods assume that no geometrical changes occur after emplacement and that the measured flow geometry reflects the lava's apparent viscosity and/or yield strength during emplacement. It is, however, well established from terrestrial examples that lava flows may inflate significantly after the cessation of flow advance. This inflation affects, in turn, the width-to-height ratio upon which the rheological estimates are based and thus must result in uncertainties in the determination of flow rheology, as the flow height is one of the key parameters in the morphology-based deduction of flow properties. Previous studies have recognized this issue but, to date, no assessment of the magnitude of this error has been presented. This is likely due to a lack of digital elevation models (DEMs) at sufficiently high spatial and temporal resolution. The 2014/15 Holuhraun eruption in central Iceland represents one of the best-monitored large-volume (1.5 km3) lava flow fields (85 km2) to date. An abundance of scientific field and remote sensing data were collected during its emplacement. Moreover, inflation plays a key role in the emplacement dynamics of the late stage of the lava field. Here, we use a time series of high resolution DEMs acquired by the TanDEM-X satellite mission prior to, during, and after the eruption to evaluate the error associated with the most common methods of deriving lava flow rheology from morphological parameters used in planetary science. We can distinguish two dominant processes as sources of error in the determination of lava flow rheology from morphology: 1) wholesale inflation of lava channels and 2) post-halting inflation of individual lava toes. These result in a 2.4- to 17-fold overestimation of apparent viscosity and a 0.7- to 2.4-fold overestimation of yield strength. When applied in planetary sciences, this overestimation in rheological parameters translates directly to an overestimation of the respective lava's silica content. We conclude that, although qualitatively informative, morphological analysis is insufficient to discern lava rheology and composition. Instead, in-situ analysis together with high resolution remote sensing data is needed to properly constrain the compositions involved in planetary volcanism.
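
    Morphology-based rheology estimates of the kind evaluated here commonly use a Jeffreys-type relation for apparent viscosity and a thickness-slope relation for yield strength, both of which scale strongly with flow thickness, which is why post-emplacement inflation biases them upward. The sketch below shows one common form of these relations; the density, the factor n, and the example thicknesses, velocity, and slope are assumptions for illustration and are not the specific equations or values used in this study.

```python
import math

RHO = 2700.0      # assumed lava bulk density, kg/m^3
G = 9.81          # gravitational acceleration, m/s^2

def jeffreys_viscosity(thickness_m, velocity_m_s, slope_deg, n=3.0):
    """Jeffreys-type apparent viscosity for a broad sheet flow (Pa s)."""
    return RHO * G * math.sin(math.radians(slope_deg)) * thickness_m ** 2 / (n * velocity_m_s)

def yield_strength_from_thickness(thickness_m, slope_deg):
    """Yield strength estimated from flow thickness on a slope (Pa)."""
    return RHO * G * thickness_m * math.sin(math.radians(slope_deg))

# Doubling the measured thickness (e.g., by post-emplacement inflation)
# quadruples the viscosity estimate and doubles the yield-strength estimate.
for h in (2.0, 4.0):
    print(h, jeffreys_viscosity(h, 0.5, 2.0), yield_strength_from_thickness(h, 2.0))
```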

  18. [Feasibility Study on Digital Signal Processor and Gear Pump of Uroflowmeter Calibration Device].

    PubMed

    Yuan, Qing; Ji, Jun; Gao, Jiashuo; Wang, Lixin; Xiao, Hong

    2016-08-01

    If a uroflowmeter drifts out of calibration, it poses a hidden risk in clinical use. This paper introduces a uroflowmeter calibration device based on a digital signal processor (DSP) and a gear pump and examines its feasibility. Following the research plan, we built a test system and carried out experiments to analyze its stability, repeatability, and linearity. The flow test system is composed of a DSP, a gear pump, and other components. The test results showed that the system could produce a stable water flow at different flow rates with high repeatability of measurement. The system can calibrate urine flow rate well within the clinically significant range of 9-50 mL/s, and the flow error is less than 1%, which meets the technical requirements of a calibration apparatus. The proposed DSP and gear pump scheme for a uroflowmeter calibration device is therefore feasible.

  19. Breathing gas perfluorocarbon measurements using an absorber filled with zeolites.

    PubMed

    Proquitté, H; Rüdiger, M; Wauer, R R; Schmalisch, G

    2003-11-01

    Perfluorocarbon (PFC) has been widely used in the treatment of respiratory diseases; however, PFC content of the breathing gases remains unknown. Therefore, we developed an absorber using PFC selective zeolites for PFC measurement in gases and investigated its accuracy. To generate a breathing gas with different PFC contents a heated flask was rinsed with a constant air flow of 4 litre x min(-1) and 1, 5, 10, and 20 ml of PFC were infused over 20 min using an infusor. The absorber was placed on an electronic scale and the total PFC volume was calculated from the weight gain. Steady-state increase in weight was achieved 3.5 min after stopping the infusion. The calculated PFC volume was slightly underestimated but the measuring error did not exceed -1% for PFC less than 1 ml. The measurement error decreased with increasing PFC volume. This zeolite absorber is an accurate method to quantitatively determine PFC in breathing gases and can be used as a reference method to validate other PFC sensors.

  20. Digital-computer model of ground-water flow in Tooele Valley, Utah

    USGS Publications Warehouse

    Razem, Allan C.; Bartholoma, Scott D.

    1980-01-01

    A two-dimensional, finite-difference digital-computer model was used to simulate the ground-water flow in the principal artesian aquifer in Tooele Valley, Utah. The parameters used in the model were obtained through field measurements and tests, from historical records, and by trial-and-error adjustments. The model was calibrated against observed water-level changes that occurred during 1941-50, 1951-60, 1961-66, 1967-73, and 1974-78. The reliability of the predictions is good in most parts of the valley, as is shown by the ability of the model to match historical water-level changes.

  1. A study of disequilibrium between 220Rn and 216Po for 220Rn measurements using a flow-through Lucas scintillation cell.

    PubMed

    Sathyabama, N; Datta, D; Gaware, J J; Mayya, Y S; Tripathi, R M

    2014-01-01

    Lucas-type scintillation cells (LSCs) are commonly used for rapid measurements of (220)Rn concentrations in flow-through mode in the field and for calibration experiments in laboratories. However, in those measurements, equilibrium between (220)Rn and (216)Po is generally assumed and two alpha particles are considered to be emitted per (220)Rn decay because of the very short half-life of (216)Po. In this paper, a small yet significant disequilibrium between (220)Rn and (216)Po is examined, and it is shown that fewer than two alpha particles are actually emitted per (220)Rn decay in the cell when flow is maintained. A theoretical formula has been derived for the first time for a correction factor (CF) to be applied to the measured concentration to account for the disequilibrium. The existence of this disequilibrium has been verified experimentally and is found to increase with the ratio of flow rate to cell volume. The disequilibrium is attributed to the flushing out, by the flow, of (216)Po formed in the cell before it can decay. Uncertainties in measured concentrations have been estimated, and the estimated CF values have been found to be significant for flow rates above 5 dm(3) min(-1) for a cell of volume 0.125 dm(3). The calculated values of the CF are about 1.055 to 1.178 in the flow rate range of 4 to 15 dm(3) min(-1) for the cell of volume 0.125 dm(3), while the corresponding experimental values are 1.023 to 1.264. This is a systematic error in (220)Rn measurements using a flow-through LSC, which can be removed either by correct formulation or by proper design of the measurement set-up.
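    The abstract does not reproduce the derived correction-factor formula. The sketch below assumes a simple well-mixed flow-through cell in which a (216)Po atom either decays (decay constant λ) or is flushed out at rate Q/V; under that assumption CF = 2 / (1 + λ/(λ + Q/V)), which happens to reproduce the reported magnitudes. This is an illustrative model, not the authors' derivation.

```python
import math

# Hedged sketch: assumed well-mixed cell model for the 220Rn/216Po correction
# factor. A 216Po atom either decays (rate LAM) or is flushed out (rate Q/V),
# so the fraction decaying inside the cell is LAM / (LAM + Q/V) and fewer than
# two alphas are counted per 220Rn decay.

T_HALF_PO216 = 0.145                  # s, half-life of 216Po
LAM = math.log(2) / T_HALF_PO216      # decay constant, 1/s

def correction_factor(flow_lpm, cell_volume_l=0.125):
    """CF to apply to the measured 220Rn concentration (assumed model)."""
    q_over_v = (flow_lpm / 60.0) / cell_volume_l   # flushing rate, 1/s
    frac_decaying = LAM / (LAM + q_over_v)
    return 2.0 / (1.0 + frac_decaying)

for q in (4, 10, 15):                 # dm3/min
    print(q, round(correction_factor(q), 3))
# CF rises with flow rate, in line with the ~1.05-1.18 range reported
# for 4-15 dm3/min and a 0.125 dm3 cell.
```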

  2. Reliability and relative weighting of visual and nonvisual information for perceiving direction of self-motion during walking

    PubMed Central

    Saunders, Jeffrey A.

    2014-01-01

    Direction of self-motion during walking is indicated by multiple cues, including optic flow, nonvisual sensory cues, and motor prediction. I measured the reliability of perceived heading from visual and nonvisual cues during walking, and whether cues are weighted in an optimal manner. I used a heading alignment task to measure perceived heading during walking. Observers walked toward a target in a virtual environment with and without global optic flow. The target was simulated to be infinitely far away, so that it did not provide direct feedback about direction of self-motion. Variability in heading direction was low even without optic flow, with average RMS error of 2.4°. Global optic flow reduced variability to 1.9°–2.1°, depending on the structure of the environment. The small amount of variance reduction was consistent with optimal use of visual information. The relative contribution of visual and nonvisual information was also measured using cue conflict conditions. Optic flow specified a conflicting heading direction (±5°), and bias in walking direction was used to infer relative weighting. Visual feedback influenced heading direction by 16%–34% depending on scene structure, with more effect with dense motion parallax. The weighting of visual feedback was close to the predictions of an optimal integration model given the observed variability measures. PMID:24648194
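    The optimal-integration comparison above follows the standard reliability-weighted cue-combination model. The sketch below back-calculates the implied visual-cue reliability and the optimal visual weight from the variances quoted in the abstract; the function name and exact input values are illustrative.

```python
# Hedged sketch of reliability-weighted (maximum-likelihood) cue integration.
# Sigma values come from the abstract (nonvisual-only ~2.4 deg, combined
# ~1.9-2.1 deg); the inferred visual-only reliability is a back-calculation.

def optimal_visual_weight(sigma_nonvisual, sigma_combined):
    """Infer the visual-cue reliability from the variance reduction and
    return the optimal weight on the visual cue."""
    inv_var_combined = 1.0 / sigma_combined ** 2
    inv_var_nonvisual = 1.0 / sigma_nonvisual ** 2
    inv_var_visual = inv_var_combined - inv_var_nonvisual
    if inv_var_visual <= 0:
        raise ValueError("combined sigma must be smaller than nonvisual sigma")
    return inv_var_visual / (inv_var_visual + inv_var_nonvisual)

print(round(optimal_visual_weight(2.4, 2.0), 2))  # ~0.31, within the reported 16%-34%
```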

  3. Experimental investigation on mass flow rate measurements using fibre Bragg grating sensors

    NASA Astrophysics Data System (ADS)

    Thekkethil, S. R.; Thomas, R. J.; Neumann, H.; Ramalingam, R.

    2017-02-01

    Flow measurement and control of cryogens is one of the major requirements of systems such as superconducting magnets for fusion reactors, MRI magnets, etc. Flow measurements can act as an early diagnostic tool for detecting faults and for ensuring correct distribution of the cooling load, while also assessing the thermal performance of the devices. Fibre Bragg grating (FBG) sensors provide compact and accurate measurement systems with added advantages such as immunity to electrical and magnetic interference, low attenuation losses, and remote sensing. This paper summarizes the initial experimental investigations and calibration of a novel FBG-based mass flow meter. The design uses the viscous drag of the flow to induce a bending strain on the fibre. The strain experienced by the fibre is proportional to the flow rate and can be measured in terms of the Bragg wavelength shift. The flowmeter was initially tested at atmospheric conditions using helium. The results are summarized and the performance parameters of the sensor are estimated. The results were also compared to a numerical model, and further results for liquid helium are also reported. An overall sensitivity of 29 pm per g/s was obtained for helium flow, with a resolution of 0.2 g/s. A hysteresis error of 8 pm was also observed during load-unload cycles. The sensor is suitable for further tests using cryogens.
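    Using the reported sensitivity, converting a wavelength shift to a mass flow rate is a one-line calculation; the sketch below is a first-order illustration only and ignores temperature cross-sensitivity and any nonlinearity a full calibration would capture.

```python
# Minimal sketch: Bragg-wavelength shift to mass flow rate using the reported
# linear sensitivity (29 pm per g/s for helium). Real calibration would also
# correct for temperature-induced shifts; this is only the first-order relation.

SENSITIVITY_PM_PER_GS = 29.0   # pm / (g/s), reported calibration
RESOLUTION_GS = 0.2            # g/s, reported resolution

def mass_flow_from_shift(delta_lambda_pm):
    """Estimate mass flow (g/s) from a Bragg wavelength shift (pm)."""
    return delta_lambda_pm / SENSITIVITY_PM_PER_GS

shift = 145.0                  # pm, example reading
print(f"{mass_flow_from_shift(shift):.1f} g/s (+/- {RESOLUTION_GS} g/s)")  # -> 5.0 g/s
```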

  4. Explosion Source Modeling, Seismic Waveform Prediction and Yield Verification Research

    DTIC Science & Technology

    1976-05-01

    Quarterly Technical Report, Feb. 1, 1976 ... Description of the technique and the constitutive models may be found in Cherry, et al. (1975). KASSERI was detonated in ash flow tuff at Area 20 ... With these theoretical records we can reduce the measurement errors to nearly vanishing. Rather than measuring by eye, a parabola is fit to the ...

  5. Lunar heat flow experiments: Science objectives and a strategy for minimizing the effects of lander-induced perturbations

    NASA Astrophysics Data System (ADS)

    Kiefer, Walter S.

    2012-01-01

    Reliable measurements of the Moon's global heat flow would serve as an important diagnostic test for models of lunar thermal evolution and would also help to constrain the Moon's bulk abundance of radioactive elements and its differentiation history. The two existing measurements of lunar heat flow are unlikely to be representative of the global heat flow. For these reasons, obtaining additional heat flow measurements has been recognized as a high priority lunar science objective. In making such measurements, it is essential that the design and deployment of the heat flow probe and of the parent spacecraft do not inadvertently modify the near-surface thermal structure of the lunar regolith and thus perturb the measured heat flow. One type of spacecraft-related perturbation is the shadow cast by the spacecraft and by thermal blankets on some instruments. The thermal effects of these shadows propagate by conduction both downward and outward from the spacecraft into the lunar regolith. Shadows cast by the spacecraft superstructure move over the surface with time and only perturb the regolith temperature in the upper 0.8 m. Permanent shadows, such as from thermal blankets covering a seismometer or other instruments, can modify the temperature to greater depth. Finite element simulations using measured values of the thermal diffusivity of lunar regolith show that the limiting factor for temperature perturbations is the need to measure the annual thermal wave for 2 or more years to measure the thermal diffusivity. The error induced by permanent spacecraft thermal shadows can be kept below 8% of the annual wave amplitude at 1 m depth if the heat flow probe is deployed at least 2.5 m away from any permanent spacecraft shadow. Deploying the heat flow probe 2 m from permanent shadows permits measuring the annual thermal wave for only one year and should be considered the science floor for a heat flow experiment on the Moon. One way to meet this separation requirement would be to deploy the heat flow and seismology experiments on opposite sides of the spacecraft. This result should be incorporated in the design of future lunar geophysics spacecraft experiments. Differences in the thermal environments of the Moon and Mars result in less restrictive separation requirements for heat flow experiments on Mars.

  6. Three Component Velocity and Acceleration Measurement Using FLEET

    NASA Technical Reports Server (NTRS)

    Danehy, Paul M.; Bathel, Brett F.; Calvert, Nathan; Dogariu, Arthur; Miles, Richard P.

    2014-01-01

    The femtosecond laser electronic excitation and tagging (FLEET) method has been used to measure three components of velocity and acceleration for the first time. A jet of pure N2 issuing into atmospheric pressure air was probed by the FLEET system. The femtosecond laser was focused down to a point to create a small measurement volume in the flow. The long lifetime of the resulting fluorescence was used to measure the location of the tagged gas at different times. Simultaneous images of the flow were taken from two orthogonal views using a mirror assembly and a single intensified CCD camera, allowing two components of velocity to be measured in each view. These different velocity components were combined to determine three orthogonal velocity components. The differences between subsequent velocity components could be used to measure the acceleration. Velocity accuracy and precision were roughly estimated to be +/-4 m/s and +/-10 m/s, respectively. These errors were small compared to the approx. 100 m/s velocity of the subsonic jet studied.
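    The velocity and acceleration follow from finite differences of the tagged-spot positions at successive delays; the sketch below illustrates the arithmetic with invented positions and an invented frame interval.

```python
import numpy as np

# Hedged sketch: positions of the tagged spot at successive delays, with
# velocity and acceleration from finite differences. All numbers are invented.

dt = 20e-6                                     # s, interframe delay (example)
positions = np.array([[0.0, 0.0, 0.0],         # m, spot centroid at t0
                      [2.0e-3, 0.1e-3, 0.0],   # at t0 + dt
                      [3.9e-3, 0.2e-3, 0.0]])  # at t0 + 2*dt

velocities = np.diff(positions, axis=0) / dt           # two velocity vectors
acceleration = np.diff(velocities, axis=0)[0] / dt     # one acceleration vector

print(velocities)      # ~100 m/s in x, comparable to the subsonic jet studied
print(acceleration)    # deceleration along x in this made-up example
```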

  7. Assessment of the pseudo-tracking approach for the calculation of material acceleration and pressure fields from time-resolved PIV: part II. Spatio-temporal filtering

    NASA Astrophysics Data System (ADS)

    van Gent, P. L.; Schrijer, F. F. J.; van Oudheusden, B. W.

    2018-04-01

    The present study characterises the spatio-temporal filtering associated with pseudo-tracking. A combined theoretical and numerical assessment is performed that uses the relatively simple flow case of a two-dimensional Taylor vortex as analytical test case. An additional experimental assessment considers the more complex flow of a low-speed axisymmetric base flow, for which time-resolved tomographic PIV measurements and microphone measurements were obtained. The results of these assessments show how filtering along Lagrangian tracks leads to amplitude modulation of flow structures. A cut-off track length and spatial resolution are specified to support future applications of the pseudo-tracking approach. The experimental results show a fair agreement between PIV and microphone pressure data in terms of fluctuation levels and pressure frequency spectra. The coherence and correlation between microphone and PIV pressure measurements were found to be substantial and almost independent of the track length, indicating that the low-frequency behaviour of the flow could be reproduced regardless of the track length. It is suggested that a spectral analysis can be used to inform the selection of a suitable track length and to estimate the local error margin of reconstructed pressure values.

  8. Patient identification error among prostate needle core biopsy specimens--are we ready for a DNA time-out?

    PubMed

    Suba, Eric J; Pfeifer, John D; Raab, Stephen S

    2007-10-01

    Patient identification errors in surgical pathology often involve switches of prostate or breast needle core biopsy specimens among patients. We assessed strategies for decreasing the occurrence of these uncommon and yet potentially catastrophic events. Root cause analyses were performed following 3 cases of patient identification error involving prostate needle core biopsy specimens. Patient identification errors in surgical pathology result from slips and lapses of automatic human action that may occur at numerous steps during pre-laboratory, laboratory and post-laboratory work flow processes. Patient identification errors among prostate needle biopsies may be difficult to entirely prevent through the optimization of work flow processes. A DNA time-out, whereby DNA polymorphic microsatellite analysis is used to confirm patient identification before radiation therapy or radical surgery, may eliminate patient identification errors among needle biopsies.

  9. Analysis of surface-water data network in Kansas for effectiveness in providing regional streamflow information

    USGS Publications Warehouse

    Medina, K.D.; Tasker, Gary D.

    1985-01-01

    The surface water data network in Kansas was analyzed using generalized least squares regression for its effectiveness in providing regional streamflow information. The correlation and time-sampling error of the streamflow characteristic are considered in the generalized least squares method. Unregulated medium-flow, low-flow and high-flow characteristics were selected to be representative of the regional information that can be obtained from streamflow gaging station records for use in evaluating the effectiveness of continuing the present network stations, discontinuing some stations, and/or adding new stations. The analysis used streamflow records for all currently operated stations that were not affected by regulation and discontinued stations for which unregulated flow characteristics, as well as physical and climatic characteristics, were available. The state was divided into three network areas, western, northeastern, and southeastern Kansas, and analysis was made for three streamflow characteristics in each area, using three planning horizons. The analysis showed that the maximum reduction of sampling mean square error for each cost level could be obtained by adding new stations and discontinuing some of the present network stations. Large reductions in sampling mean square error for low-flow information could be accomplished in all three network areas, with western Kansas having the most dramatic reduction. The addition of new stations would be most beneficial for medium-flow information in western Kansas, and to lesser degrees in the other two areas. The reduction of sampling mean square error for high-flow information would benefit most from the addition of new stations in western Kansas, and the effect diminishes to lesser degrees in the other two areas. Southeastern Kansas showed the smallest error reduction in high-flow information. A comparison among all three network areas indicated that funding resources could be most effectively used by discontinuing more stations in northeastern and southeastern Kansas and establishing more new stations in western Kansas. (Author's abstract)

  10. [Failure modes and effects analysis in the prescription, validation and dispensing process].

    PubMed

    Delgado Silveira, E; Alvarez Díaz, A; Pérez Menéndez-Conde, C; Serna Pérez, J; Rodríguez Sagrado, M A; Bermejo Vicedo, T

    2012-01-01

    To apply a failure modes and effects analysis to the prescription, validation and dispensing process for hospitalised patients. A work group analysed all of the stages included in the process from prescription to dispensing, identifying the most critical errors and establishing potential failure modes which could produce a mistake. The possible causes, their potential effects, and the existing control systems were analysed to try to stop them from developing. The Hazard Score was calculated and failure modes with a score ≥ 8 were chosen; failure modes with a Severity Index of 4 were selected regardless of their Hazard Score. Corrective measures and an implementation plan were proposed. A flow diagram that describes the whole process was obtained. A risk analysis was conducted of the chosen critical points, indicating: failure mode, cause, effect, severity, probability, Hazard Score, suggested preventive measure and the strategy to achieve it. Failure modes chosen: prescription on the nurse's form; progress or treatment order (paper); prescription to the incorrect patient; transcription error by nursing staff and pharmacist; error preparing the trolley. By applying a failure modes and effects analysis to the prescription, validation and dispensing process, we have been able to identify critical aspects, the stages in which errors may occur and their causes. It has allowed us to analyse the effects on the safety of the process, and establish measures to prevent or reduce them. Copyright © 2010 SEFH. Published by Elsevier Espana. All rights reserved.

  11. Design and performance of a dynamic gas flux chamber.

    PubMed

    Reichman, Rivka; Rolston, Dennis E

    2002-01-01

    Chambers are commonly used to measure the emission of many trace gases and chemicals from soil. An aerodynamic (flow through) chamber was designed and fabricated to accurately measure the surface flux of trace gases. Flow through the chamber was controlled with a small vacuum at the outlet. Due to the design using fans, a partition plate, and aerodynamic ends, air is forced to sweep parallel and uniform over the entire soil surface. A fraction of the air flowing inside the chamber is sampled in the outlet. The air velocity inside the chamber is controlled by fan speed and outlet suction flow rate. The chamber design resulted in a uniform distribution of air velocity at the soil surface. Steady state flux was attained within 5 min when the outlet air suction rate was 20 L/min or higher. For expected flux rates, the presence of the chamber did not affect the measured fluxes at outlet suction rates of around 20 L/min, except that the chamber caused some cooling of the surface in field experiments. Sensitive measurements of the pressure deficit across the soil layer in conjunction with measured fluxes in the source box and chamber outlet show that the outflow rate must be controlled carefully to minimize errors in the flux measurements. Both over- and underestimation of the fluxes are possible if the outlet flow rate is not controlled carefully. For this design, the chamber accurately measured steady flux at outlet air suction rates of approximately 20 L/min when the pressure deficit within the chamber with respect to the ambient atmosphere ranged between 0.46 and 0.79 Pa.

  12. Zernike ultrasonic tomography for fluid velocity imaging based on pipeline intrusive time-of-flight measurements.

    PubMed

    Besic, Nikola; Vasile, Gabriel; Anghel, Andrei; Petrut, Teodor-Ion; Ioana, Cornel; Stankovic, Srdjan; Girard, Alexandre; d'Urso, Guy

    2014-11-01

    In this paper, we propose a novel ultrasonic tomography method for pipeline flow field imaging, based on the Zernike polynomial series. Having intrusive multipath time-of-flight ultrasonic measurements (difference in flight time and speed of ultrasound) at the input, we provide at the output tomograms of the fluid velocity components (axial, radial, and orthoradial velocity). Principally, by representing these velocities as Zernike polynomial series, we reduce the tomography problem to an ill-posed problem of finding the coefficients of the series, relying on the acquired ultrasonic measurements. Thereupon, this problem is treated by applying and comparing Tikhonov regularization and quadratically constrained ℓ1 minimization. To enhance the comparative analysis, we additionally introduce sparsity, by employing SVD-based filtering in selecting Zernike polynomials which are to be included in the series. The first approach, Tikhonov regularization without filtering, is used because it is the most suitable method. The performances are quantitatively tested by considering a residual norm and by estimating the flow using the axial velocity tomogram. Finally, the obtained results show the relative residual norm and the error in flow estimation, respectively, ~0.3% and ~1.6% for the less turbulent flow and ~0.5% and ~1.8% for the turbulent flow. Additionally, a qualitative validation is performed by proximate matching of the derived tomograms with a flow physical model.
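    The Tikhonov step referred to above amounts to a ridge-regularized linear solve for the Zernike coefficients. The sketch below shows only that step, with a random stand-in measurement matrix; building the true matrix from the acoustic path geometry, and the ℓ1 variant, are beyond the abstract.

```python
import numpy as np

# Hedged sketch of the Tikhonov step: time-of-flight measurements are modelled
# as a linear system A @ c = b, where columns of A would be path integrals of
# Zernike basis functions and c are the sought coefficients. A and b below are
# random stand-ins, not a real acoustic geometry.

rng = np.random.default_rng(0)
n_meas, n_coeff = 32, 15
A = rng.normal(size=(n_meas, n_coeff))              # stand-in measurement matrix
c_true = rng.normal(size=n_coeff)
b = A @ c_true + 0.01 * rng.normal(size=n_meas)     # noisy measurements

def tikhonov(A, b, lam):
    """Solve min ||A c - b||^2 + lam * ||c||^2 in closed form."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

c_hat = tikhonov(A, b, lam=1e-2)
residual = np.linalg.norm(A @ c_hat - b) / np.linalg.norm(b)
print(f"relative residual norm: {100 * residual:.2f} %")
```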

  13. Techniques and equipment required for precise stream gaging in tide-affected fresh-water reaches of the Sacramento River, California

    USGS Publications Warehouse

    Smith, Winchell

    1971-01-01

    Current-meter measurements of high accuracy will be required for calibration of an acoustic flow-metering system proposed for installation in the Sacramento River at Chipps Island in California. This report presents an analysis of the problem of making continuous accurate current-meter measurements in this channel where the flow regime is changing constantly in response to tidal action. Gaging-system requirements are delineated, and a brief description is given of the several applicable techniques that have been developed by others. None of these techniques provides the accuracies required for the flowmeter calibration. A new system is described--one which has been assembled and tested in prototype and which will provide the matrix of data needed for accurate continuous current-meter measurements. Analysis of a large quantity of data on the velocity distribution in the channel of the Sacramento River at Chipps Island shows that adequate definition of the velocity can be made during the dominant flow periods--that is, at times other than slack-water periods--by use of current meters suspended at elevations 0.2 and 0.8 of the depth below the water surface. However, additional velocity surveys will be necessary to determine whether or not small systematic corrections need be applied during periods of rapidly changing flow. In the proposed system all gaged parameters, including velocities, depths, position in the stream, and related times, are monitored continuously as a boat moves across the river on the selected cross section. Data are recorded photographically and transferred later onto punchcards for computer processing. Computer programs have been written to permit computation of instantaneous discharges at any selected time interval throughout the period of the current meter measurement program. It is anticipated that current-meter traverses will be made at intervals of about one-half hour over periods of several days. Capability of performance for protracted periods was, consequently, one of the important elements in system design. Analysis of error sources in the proposed system indicates that errors in individual computed discharges can be kept smaller than 1.5 percent if the expected precision in all measured parameters is maintained.
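    The 0.2/0.8-depth sampling mentioned above is the standard two-point method: the mean velocity in a vertical is the average of the two readings, and discharge is summed over subsections. The sketch below illustrates the arithmetic with invented values.

```python
# Hedged sketch of the two-point (0.2/0.8-depth) current-meter method and a
# simple subsection summation of discharge. Values are illustrative only.

verticals = [
    # (subsection width ft, depth ft, velocity at 0.2d ft/s, velocity at 0.8d ft/s)
    (20.0, 6.5, 2.4, 1.8),
    (20.0, 8.0, 2.9, 2.1),
    (20.0, 7.2, 2.6, 2.0),
]

discharge = 0.0
for width, depth, v02, v08 in verticals:
    v_mean = 0.5 * (v02 + v08)           # two-point mean vertical velocity
    discharge += v_mean * depth * width  # subsection discharge, ft3/s

print(f"total discharge: {discharge:.0f} ft3/s")
```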

  14. Modeling Water Temperature in the Yakima River, Washington, from Roza Diversion Dam to Prosser Dam, 2005-06

    USGS Publications Warehouse

    Voss, Frank D.; Curran, Christopher A.; Mastin, Mark C.

    2008-01-01

    A mechanistic water-temperature model was constructed by the U.S. Geological Survey for use by the Bureau of Reclamation for studying the effect of potential water management decisions on water temperature in the Yakima River between Roza and Prosser, Washington. Flow and water temperature data for model input were obtained from the Bureau of Reclamation Hydromet database and from measurements collected by the U.S. Geological Survey during field trips in autumn 2005. Shading data for the model were collected by the U.S. Geological Survey in autumn 2006. The model was calibrated with data collected from April 1 through October 31, 2005, and tested with data collected from April 1 through October 31, 2006. Sensitivity analysis results showed that for the parameters tested, daily maximum water temperature was most sensitive to changes in air temperature and solar radiation. Root mean squared error for the five sites used for model calibration ranged from 1.3 to 1.9 degrees Celsius (°C) and mean error ranged from -1.3 to 1.6°C. The root mean squared error for the five sites used for testing simulation ranged from 1.6 to 2.2°C and mean error ranged from 0.1 to 1.3°C. The accuracy of the stream temperatures estimated by the model is limited by four errors (model error, data error, parameter error, and user error).

  15. Performance Data Errors in Air Carrier Operations: Causes and Countermeasures

    NASA Technical Reports Server (NTRS)

    Berman, Benjamin A.; Dismukes, R Key; Jobe, Kimberly K.

    2012-01-01

    Several airline accidents have occurred in recent years as the result of erroneous weight or performance data used to calculate V-speeds, flap/trim settings, required runway lengths, and/or required climb gradients. In this report we consider 4 recent studies of performance data error, report our own study of ASRS-reported incidents, and provide countermeasures that can reduce vulnerability to accidents caused by performance data errors. Performance data are generated through a lengthy process involving several employee groups and computer and/or paper-based systems. Although much of the airline industry's concern has focused on errors pilots make in entering FMS data, we determined that errors occur at every stage of the process and that errors by ground personnel are probably at least as frequent and certainly as consequential as errors by pilots. Most of the errors we examined could in principle have been trapped by effective use of existing procedures or technology; however, the fact that they were not trapped anywhere indicates the need for better countermeasures. Existing procedures are often inadequately designed to mesh with the ways humans process information. Because procedures often do not take into account the ways in which information flows in actual flight ops and the time pressures and interruptions experienced by pilots and ground personnel, vulnerability to error is greater. Some aspects of NextGen operations may exacerbate this vulnerability. We identify measures to reduce the number of errors and to help catch the errors that occur.

  16. Flow-Centric, Back-in-Time Debugging

    NASA Astrophysics Data System (ADS)

    Lienhard, Adrian; Fierz, Julien; Nierstrasz, Oscar

    Conventional debugging tools present developers with means to explore the run-time context in which an error has occurred. In many cases this is enough to help the developer discover the faulty source code and correct it. However, rather often errors occur due to code that has executed in the past, leaving certain objects in an inconsistent state. The actual run-time error only occurs when these inconsistent objects are used later in the program. So-called back-in-time debuggers help developers step back through earlier states of the program and explore execution contexts not available to conventional debuggers. Nevertheless, even Back-in-Time Debuggers do not help answer the question, “Where did this object come from?” The Object-Flow Virtual Machine, which we have proposed in previous work, tracks the flow of objects to answer precisely such questions, but this VM does not provide dedicated debugging support to explore faulty programs. In this paper we present a novel debugger, called Compass, to navigate between conventional run-time stack-oriented control flow views and object flows. Compass enables a developer to effectively navigate from an object contributing to an error back-in-time through all the code that has touched the object. We present the design and implementation of Compass, and we demonstrate how flow-centric, back-in-time debugging can be used to effectively locate the source of hard-to-find bugs.

  17. Performance of a restrictive flow device and an electronic syringe driver for continuous subcutaneous infusion.

    PubMed

    Capes, D; Martin, K; Underwood, R

    1997-10-01

    The aim of this study was to investigate the flow performance of the mechanical Springfusor 30 short model and the electronic Graseby MS16A. Flow rate was measured gravimetrically in a temperature-controlled cabinet. There was no statistically significant difference between the Graseby and Springfusor syringe drivers in the flow rate error at 25 degrees C. The percentage of flow rates within +/-20% accuracy during a 35-min period at 25 degrees C was significantly less with the Graseby, being 91.9% compared with 100% for the Springfusor. Only 58.2% of flow rates with the Graseby were within the manufacturer's claimed accuracy of +/-5%. The flow rate of the Springfusor was affected by temperature; at 30 degrees C the mean flow rate was 10.8% greater than at 25 degrees C. These results indicate that the Springfusor 30 had less flow rate variation than the Graseby MS16A. However, this would not be expected to cause noticeable clinical effects when used for opioid infusion in palliative care.

  18. Automated measurement and classification of pulmonary blood-flow velocity patterns using phase-contrast MRI and correlation analysis.

    PubMed

    van Amerom, Joshua F P; Kellenberger, Christian J; Yoo, Shi-Joon; Macgowan, Christopher K

    2009-01-01

    An automated method was evaluated to detect blood flow in small pulmonary arteries and classify each as artery or vein, based on a temporal correlation analysis of their blood-flow velocity patterns. The method was evaluated using velocity-sensitive phase-contrast magnetic resonance data collected in vitro with a pulsatile flow phantom and in vivo in 11 human volunteers. The accuracy of the method was validated in vitro, showing relative velocity errors of 12% at low spatial resolution (four voxels per diameter) that were reduced to 5% at increased spatial resolution (16 voxels per diameter). The performance of the method was evaluated in vivo according to its reproducibility and agreement with manual velocity measurements by an experienced radiologist. In all volunteers, the correlation analysis was able to detect and segment peripheral pulmonary vessels and distinguish arterial from venous velocity patterns. The intrasubject variability of repeated measurements was approximately 10% of peak velocity, or 2.8 cm/s root-mean-variance, demonstrating the high reproducibility of the method. Excellent agreement was obtained between the correlation analysis and radiologist measurements of pulmonary velocities, with a correlation of R2=0.98 (P<.001) and a slope of 0.99+/-0.01.

  19. Gas ultrasonic flow rate measurement through genetic-ant colony optimization based on the ultrasonic pulse received signal model

    NASA Astrophysics Data System (ADS)

    Hou, Huirang; Zheng, Dandan; Nie, Laixiao

    2015-04-01

    For gas ultrasonic flowmeters, the signals received by ultrasonic sensors are susceptible to noise interference. If signals are mingled with noise, a large error in flow measurement can be caused by mistaken triggering with the traditional double-threshold method. To solve this problem, genetic-ant colony optimization (GACO) based on the ultrasonic pulse received-signal model is proposed. Furthermore, in consideration of the real-time performance of the flow measurement system, processing only the first three cycles of the received signals rather than the whole signal is proposed. Simulation results show that the GACO algorithm has the best estimation accuracy and anti-noise ability compared with the genetic algorithm, ant colony optimization, double-threshold and enveloped zero-crossing methods. Local convergence does not appear with the GACO algorithm until -10 dB. For the GACO algorithm, the converging accuracy, converging speed, and the amount of computation are further improved when using the first three cycles (called GACO-3cycles). Experimental results involving actual received signals show that the accuracy of single-gas ultrasonic flow rate measurement can reach 0.5% with GACO-3cycles, which is better than with the double-threshold method.

  20. Peak-flow characteristics of Wyoming streams

    USGS Publications Warehouse

    Miller, Kirk A.

    2003-01-01

    Peak-flow characteristics for unregulated streams in Wyoming are described in this report. Frequency relations for annual peak flows through water year 2000 at 364 streamflow-gaging stations in and near Wyoming were evaluated and revised or updated as needed. Analyses of historical floods, temporal trends, and generalized skew were included in the evaluation. Physical and climatic basin characteristics were determined for each gaging station using a geographic information system. Gaging stations with similar peak-flow and basin characteristics were grouped into six hydrologic regions. Regional statistical relations between peak-flow and basin characteristics were explored using multiple-regression techniques. Generalized least squares regression equations for estimating magnitudes of annual peak flows with selected recurrence intervals from 1.5 to 500 years were developed for each region. Average standard errors of estimate range from 34 to 131 percent. Average standard errors of prediction range from 35 to 135 percent. Several statistics for evaluating and comparing the errors in these estimates are described. Limitations of the equations are described. Methods for applying the regional equations for various circumstances are listed and examples are given.

  1. Error estimation for CFD aeroheating prediction under rarefied flow condition

    NASA Astrophysics Data System (ADS)

    Jiang, Yazhong; Gao, Zhenxun; Jiang, Chongwen; Lee, Chunhian

    2014-12-01

    Both direct simulation Monte Carlo (DSMC) and Computational Fluid Dynamics (CFD) methods have become widely used for aerodynamic prediction when reentry vehicles experience different flow regimes during flight. The implementation of slip boundary conditions in the traditional CFD method under the Navier-Stokes-Fourier (NSF) framework can extend the validity of this approach further into the transitional regime, with the benefit that much less computational cost is demanded compared to DSMC simulation. Correspondingly, an increasing error arises in aeroheating calculation as the flow becomes more rarefied. To estimate the relative error of heat flux when applying this method to a rarefied flow in the transitional regime, a theoretical derivation is conducted and a dimensionless parameter ɛ is proposed by approximately analyzing the ratio of the second-order term to the first-order term in the heat flux expression in the Burnett equation. DSMC simulation of hypersonic flow over a cylinder in the transitional regime is performed to test the performance of the parameter ɛ, compared with two other parameters, Knρ and Ma·Knρ.

  2. Steady-state low thermal resistance characterization apparatus: The bulk thermal tester

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Burg, Brian R.; Kolly, Manuel; Blasakis, Nicolas

    The reliability of microelectronic devices is largely dependent on electronic packaging, which includes heat removal. The appropriate packaging design therefore necessitates precise knowledge of the relevant material properties, including thermal resistance and thermal conductivity. Thin materials and high conductivity layers make their thermal characterization challenging. A steady state measurement technique is presented and evaluated with the purpose of characterizing samples with a thermal resistance below 100 mm² K/W. It is based on the heat flow meter bar approach, made up of two copper blocks, and relies exclusively on temperature measurements from thermocouples. The importance of thermocouple calibration is emphasized in order to obtain accurate temperature readings. An in-depth error analysis, based on Gaussian error propagation, is carried out. An error sensitivity analysis highlights the importance of precise knowledge of the thermal interface materials required for the measurements. Reference measurements on Mo samples reveal a measurement uncertainty in the range of 5%, and the most accurate measurements are obtained at high heat fluxes. Measurement techniques for homogeneous bulk samples, layered materials, and protruding cavity samples are discussed. Ultimately, a comprehensive overview of a steady state thermal characterization technique is provided, evaluating the accuracy of sample measurements with thermal resistances well below state-of-the-art setups. Accurate characterization of materials used in heat removal applications, such as electronic packaging, will enable more efficient designs and ultimately contribute to energy savings.
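    The heat-flow-meter-bar evaluation can be pictured as follows: the heat flux comes from the temperature gradient measured in each copper block, the sample-face temperatures from linear extrapolation of those gradients, and the uncertainty from Gaussian propagation of the thermocouple errors. The sketch below assumes a two-thermocouples-per-bar layout with illustrative numbers; it is not the authors' apparatus.

```python
import numpy as np

# Hedged sketch of a heat-flow-meter-bar evaluation with numerical Gaussian
# error propagation. Geometry, temperatures and k_Cu are assumptions.

K_CU = 390.0            # W/(m K), copper conductivity (assumed)
SIGMA_TC = 0.05         # K, per-thermocouple uncertainty after calibration

def resistance(readings):
    """Area-specific thermal resistance (m^2 K / W) of the sample.

    readings = [T_hot_far, T_hot_near, T_cold_near, T_cold_far], with the
    'near' thermocouples 5 mm and the 'far' ones 15 mm from the sample faces.
    """
    t_hf, t_hn, t_cn, t_cf = readings
    grad_hot = (t_hf - t_hn) / 0.010          # K/m over the 10 mm spacing
    grad_cold = (t_cn - t_cf) / 0.010
    q = K_CU * 0.5 * (grad_hot + grad_cold)   # mean heat flux, W/m^2
    t_face_hot = t_hn - grad_hot * 0.005      # extrapolate 5 mm to each face
    t_face_cold = t_cn + grad_cold * 0.005
    return (t_face_hot - t_face_cold) / q

readings = np.array([61.0, 56.0, 48.0, 43.0])  # deg C, example values
r0 = resistance(readings)

# numerical Gaussian propagation: sensitivities combined in quadrature
var = 0.0
for i in range(4):
    d = np.zeros(4)
    d[i] = 1e-4
    var += ((resistance(readings + d) - r0) / 1e-4 * SIGMA_TC) ** 2

print(f"R = {r0 * 1e6:.1f} +/- {np.sqrt(var) * 1e6:.1f} mm^2 K/W")
```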

  3. Improved accuracy of solar energy system testing and measurements

    NASA Astrophysics Data System (ADS)

    Waterman, R. E.

    1984-12-01

    A real world example is provided of recovery of data on the performance of a solar collector system in the field. Kalman filters were devised to reconstruct data from sensors which had functioned only intermittently over the 3-day trial period designed to quantify phenomena in the collector loop, i.e., hot water delivered to storage. The filter was configured to account for errors in data on the heat exchanger coil differential temperature and mass flow rate. Data were then generated based on a matrix of state equations, taking into account the presence of time delays due to tank stratification and convective flows. Good correlations were obtained with data from other sensors for the flow rate, system temperatures and the energy delivered to storage.

  4. The Effect of Large Angles of Yaw on the Accuracy of Wing-Tip Yawmeters

    NASA Technical Reports Server (NTRS)

    Golden, Jacob

    1942-01-01

    The present method used by the NACA for the measurement of sideslip angles in flight involves the use of a device called the yawmeter. The operation of this instrument depends on the motion of a free-swinging vane which, mounted ahead of the wing tip, alines itself with the local wind direction. Because of the flow pattern about the airplane, the local wind direction at the yaw vane may be slightly different from the direction of the relative wind and the yaw-vane readings may be in error. This error is corrected by using half the difference between the readings of two vanes, one on each wing, for unyawed flight as a calibration constant. It is possible, however, that, because of the change in location of the vane with respect to the flow pattern at large angles of yaw, the constant obtained for unyawed flight may not apply. The present report covers power-off tests made in the free-flight tunnel to check the validity of this method.

  5. Risk management and measuring productivity with POAS--point of act system.

    PubMed

    Akiyama, Masanori; Kondo, Tatsuya

    2007-01-01

    The concept of our system is not only to manage material flows, but also to provide an integrated management resource, a means of correcting errors in medical treatment, and applications to EBM through the data mining of medical records. Prior to the development of this system, electronic processing systems in hospitals did a poor job of accurately grasping medical practice and medical material flows. With POAS (Point of Act System), hospital managers can solve the so-called "man, money, material, and information" issues inherent in the costs of healthcare. The POAS system synchronizes with each department system, from finance and accounting, to pharmacy, to imaging, and allows information exchange. The system thus provides complete management of man, material, money, and information. Our analysis has shown that this system has a remarkable investment effect - saving over four million dollars per year - through cost savings in logistics and business process efficiencies. In addition, the quality of care has been improved dramatically while error rates have been reduced - nearly to zero in some cases.

  6. The validity of flow approximations when simulating catchment-integrated flash floods

    NASA Astrophysics Data System (ADS)

    Bout, B.; Jetten, V. G.

    2018-01-01

    Within hydrological models, flow approximations are commonly used to reduce computation time. The validity of these approximations is strongly determined by flow height, flow velocity and the spatial resolution of the model. In this presentation, the validity and performance of the kinematic, diffusive and dynamic flow approximations are investigated for use in a catchment-based flood model. In particular, the validity during flood events and for varying spatial resolutions is investigated. The OpenLISEM hydrological model is extended to implement both these flow approximations and channel flooding based on dynamic flow. The flow approximations are used to recreate measured discharge in three catchments, including the hydrograph of the 2003 flood event in the Fella river basin. Furthermore, spatial resolutions are varied for the flood simulation in order to investigate the influence of spatial resolution on these flow approximations. Results show that the kinematic, diffusive and dynamic flow approximations provide least to highest accuracy, respectively, in recreating measured discharge. Kinematic flow, which is commonly used in hydrological modelling, substantially over-estimates hydrological connectivity in the simulations with a spatial resolution of below 30 m. Since spatial resolutions of models have strongly increased over the past decades, usage of routed kinematic flow should be reconsidered. The combination of diffusive or dynamic overland flow and dynamic channel flooding provides high accuracy in recreating the 2003 Fella river flood event. Finally, in the case of flood events, spatial modelling of kinematic flow substantially over-estimates hydrological connectivity and flow concentration since pressure forces are removed, leading to significant errors.
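    As a reference for the simplest of the three approximations, the sketch below routes a toy 1-D kinematic wave (continuity plus a power-law rating) with an explicit upwind scheme; it is not the OpenLISEM implementation and all parameters are invented.

```python
import numpy as np

# Hedged sketch of the kinematic-wave approximation: dA/dt + dQ/dx = q_lat
# with the rating Q = alpha * A**m, solved with an explicit upwind scheme.
# Toy 1-D routing with invented parameters, not the OpenLISEM implementation.

alpha, m = 1.5, 5.0 / 3.0              # rating parameters (illustrative)
dx, dt, nx, nt = 100.0, 5.0, 200, 720  # 20 km reach, 1 hour of 5 s steps
q_lat = 1e-4                           # m^2/s lateral inflow per unit length

A = np.full(nx, 0.5)                   # wetted area, m^2
inflow_area = 0.5 + 1.5 * np.exp(-((np.arange(nt) * dt - 1200.0) / 300.0) ** 2)

for n in range(nt):
    Q = alpha * A ** m
    A_new = A.copy()
    A_new[1:] = A[1:] - dt / dx * (Q[1:] - Q[:-1]) + dt * q_lat
    A_new[0] = inflow_area[n]          # upstream boundary condition
    A = np.maximum(A_new, 1e-6)

    # stability check: kinematic celerity c = dQ/dA must satisfy c*dt <= dx
    c_max = (alpha * m * A ** (m - 1.0)).max()
    assert c_max * dt <= dx, "CFL condition violated"

print(f"outlet discharge after 1 h: {alpha * A[-1] ** m:.2f} m^3/s")
```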

  7. Regional Regression Equations to Estimate Flow-Duration Statistics at Ungaged Stream Sites in Connecticut

    USGS Publications Warehouse

    Ahearn, Elizabeth A.

    2010-01-01

    Multiple linear regression equations for determining flow-duration statistics were developed to estimate select flow exceedances ranging from 25- to 99-percent for six 'bioperiods'-Salmonid Spawning (November), Overwinter (December-February), Habitat Forming (March-April), Clupeid Spawning (May), Resident Spawning (June), and Rearing and Growth (July-October)-in Connecticut. Regression equations also were developed to estimate the 25- and 99-percent flow exceedances without reference to a bioperiod. In total, 32 equations were developed. The predictive equations were based on regression analyses relating flow statistics from streamgages to GIS-determined basin and climatic characteristics for the drainage areas of those streamgages. Thirty-nine streamgages (and an additional 6 short-term streamgages and 28 partial-record sites for the non-bioperiod 99-percent exceedance) in Connecticut and adjacent areas of neighboring States were used in the regression analysis. Weighted least squares regression analysis was used to determine the predictive equations; weights were assigned based on record length. The basin characteristics-drainage area, percentage of area with coarse-grained stratified deposits, percentage of area with wetlands, mean monthly precipitation (November), mean seasonal precipitation (December, January, and February), and mean basin elevation-are used as explanatory variables in the equations. Standard errors of estimate of the 32 equations ranged from 10.7 to 156 percent with medians of 19.2 and 55.4 percent to predict the 25- and 99-percent exceedances, respectively. Regression equations to estimate high and median flows (25- to 75-percent exceedances) are better predictors (smaller variability of the residual values around the regression line) than the equations to estimate low flows (less than 75-percent exceedance). The Habitat Forming (March-April) bioperiod had the smallest standard errors of estimate, ranging from 10.7 to 20.9 percent. In contrast, the Rearing and Growth (July-October) bioperiod had the largest standard errors, ranging from 30.9 to 156 percent. The adjusted coefficient of determination of the equations ranged from 77.5 to 99.4 percent with medians of 98.5 and 90.6 percent to predict the 25- and 99-percent exceedances, respectively. Descriptive information on the streamgages used in the regression, measured basin and climatic characteristics, and estimated flow-duration statistics are provided in this report. Flow-duration statistics and the 32 regression equations for estimating flow-duration statistics in Connecticut are stored on the U.S. Geological Survey World Wide Web application "StreamStats" (http://water.usgs.gov/osw/streamstats/index.html). The regression equations developed in this report can be used to produce unbiased estimates of select flow exceedances statewide.
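    The weighted-least-squares step described above (weights based on record length) can be written in a few lines; the sketch below uses synthetic stand-in data rather than the Connecticut dataset.

```python
import numpy as np

# Hedged sketch of weighted least squares on log-transformed flow statistics
# and basin characteristics, with record length as the weight. All data below
# are synthetic stand-ins.

rng = np.random.default_rng(1)
n = 39
drainage_area = rng.uniform(2, 300, n)          # mi^2
pct_stratified = rng.uniform(1, 40, n)          # percent coarse stratified deposits
record_years = rng.integers(10, 80, n)          # record length -> weights

# synthetic "true" relation plus noise that shrinks with record length
log_q = (0.2 + 1.0 * np.log10(drainage_area) + 0.4 * np.log10(pct_stratified)
         + rng.normal(0, 1.0 / np.sqrt(record_years)))

X = np.column_stack([np.ones(n), np.log10(drainage_area), np.log10(pct_stratified)])
W = np.diag(record_years.astype(float))          # weight matrix

beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ log_q)
resid = log_q - X @ beta
rmse = np.sqrt(resid @ W @ resid / W.trace())
print("coefficients:", np.round(beta, 3))
print("weighted RMSE (log10 units):", round(float(rmse), 3))
```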

  8. Methods for estimating selected spring and fall low-flow frequency statistics for ungaged stream sites in Iowa, based on data through June 2014

    USGS Publications Warehouse

    Eash, David A.; Barnes, Kimberlee K.; O'Shea, Padraic S.

    2016-09-19

    A statewide study was conducted to develop regression equations for estimating three selected spring and three selected fall low-flow frequency statistics for ungaged stream sites in Iowa. The estimation equations developed for the six low-flow frequency statistics include spring (April through June) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years and fall (October through December) 1-, 7-, and 30-day mean low flows for a recurrence interval of 10 years. Estimates of the three selected spring statistics are provided for 241 U.S. Geological Survey continuous-record streamgages, and estimates of the three selected fall statistics are provided for 238 of these streamgages, using data through June 2014. Because only 9 years of fall streamflow record were available, three streamgages included in the development of the spring regression equations were not included in the development of the fall regression equations. Because of regulation, diversion, or urbanization, 30 of the 241 streamgages were not included in the development of the regression equations. The study area includes Iowa and adjacent areas within 50 miles of the Iowa border. Because trend analyses indicated statistically significant positive trends when considering the period of record for most of the streamgages, the longest, most recent period of record without a significant trend was determined for each streamgage for use in the study. Geographic information system software was used to measure 63 selected basin characteristics for each of the 211 streamgages used to develop the regional regression equations. The study area was divided into three low-flow regions that were defined in a previous study for the development of regional regression equations. Because several streamgages included in the development of regional regression equations have estimates of zero flow calculated from observed streamflow for selected spring and fall low-flow frequency statistics, the final equations for the three low-flow regions were developed using two types of regression analyses—left-censored and generalized-least-squares regression analyses. A total of 211 streamgages were included in the development of nine spring regression equations—three equations for each of the three low-flow regions. A total of 208 streamgages were included in the development of nine fall regression equations—three equations for each of the three low-flow regions. A censoring threshold was used to develop 15 left-censored regression equations to estimate the three fall low-flow frequency statistics for each of the three low-flow regions and to estimate the three spring low-flow frequency statistics for the southern and northwest regions. For the northeast region, generalized-least-squares regression was used to develop three equations to estimate the three spring low-flow frequency statistics. For the northeast region, average standard errors of prediction range from 32.4 to 48.4 percent for the spring equations and average standard errors of estimate range from 56.4 to 73.8 percent for the fall equations. For the northwest region, average standard errors of estimate range from 58.9 to 62.1 percent for the spring equations and from 83.2 to 109.4 percent for the fall equations.
For the southern region, average standard errors of estimate range from 43.2 to 64.0 percent for the spring equations and from 78.1 to 78.7 percent for the fall equations. The regression equations are applicable only to stream sites in Iowa with low flows not substantially affected by regulation, diversion, or urbanization and with basin characteristics within the range of those used to develop the equations. The regression equations will be implemented within the U.S. Geological Survey StreamStats Web-based geographic information system application. StreamStats allows users to click on any ungaged stream site and compute estimates of the six selected spring and fall low-flow statistics; in addition, 90-percent prediction intervals and the measured basin characteristics for the ungaged site are provided. StreamStats also allows users to click on any Iowa streamgage to obtain computed estimates for the six selected spring and fall low-flow statistics.

  9. A noninvasive method for measuring the velocity of diffuse hydrothermal flow by tracking moving refractive index anomalies

    NASA Astrophysics Data System (ADS)

    Mittelstaedt, Eric; Davaille, Anne; van Keken, Peter E.; Gracias, Nuno; Escartin, Javier

    2010-10-01

    Diffuse flow velocimetry (DFV) is introduced as a new, noninvasive, optical technique for measuring the velocity of diffuse hydrothermal flow. The technique uses images of a motionless, random medium (e.g., rocks) obtained through the lens of a moving refraction index anomaly (e.g., a hot upwelling). The method works in two stages. First, the changes in apparent background deformation are calculated using particle image velocimetry (PIV). The deformation vectors are determined by a cross correlation of pixel intensities across consecutive images. Second, the 2-D velocity field is calculated by cross correlating the deformation vectors between consecutive PIV calculations. The accuracy of the method is tested with laboratory and numerical experiments of a laminar, axisymmetric plume in fluids with both constant and temperature-dependent viscosity. Results show that average RMS errors are ~5%-7%, and that the method is most accurate in regions of pervasive apparent background deformation, which is commonly encountered in regions of diffuse hydrothermal flow. The method is applied to a 25 s video sequence of diffuse flow from a small fracture captured during the Bathyluck'09 cruise to the Lucky Strike hydrothermal field (September 2009). The velocities of the ~10°C-15°C effluent reach ~5.5 cm/s, in strong agreement with previous measurements of diffuse flow. DFV is found to be most accurate for approximately 2-D flows where background objects have a small spatial scale, such as sand or gravel.
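    Both DFV stages rest on the same cross-correlation operation used in PIV; the sketch below recovers the integer-pixel shift between two synthetic frames from the peak of their FFT-based cross-correlation.

```python
import numpy as np

# Hedged sketch of the cross-correlation step shared by PIV and DFV: the
# apparent displacement of an interrogation window is the location of the
# peak of the cross-correlation between consecutive frames. Frames are
# synthetic; subpixel refinement and windowing are omitted.

rng = np.random.default_rng(2)
frame0 = rng.random((64, 64))                    # random "rock" texture
true_shift = (3, -2)                             # rows, cols
frame1 = np.roll(frame0, true_shift, axis=(0, 1))

def xcorr_shift(a, b):
    """Integer-pixel shift of b relative to a from the FFT cross-correlation peak."""
    a = a - a.mean()
    b = b - b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a).conj() * np.fft.fft2(b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # map peak indices to signed shifts (wrap-around convention)
    return tuple(p - s if p > s // 2 else p for p, s in zip(peak, corr.shape))

print(xcorr_shift(frame0, frame1))   # -> (3, -2)
```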

  10. Optimal estimation of suspended-sediment concentrations in streams

    USGS Publications Warehouse

    Holtschlag, D.J.

    2001-01-01

    Optimal estimators are developed for computation of suspended-sediment concentrations in streams. The estimators are a function of parameters, computed by use of generalized least squares, which simultaneously account for effects of streamflow, seasonal variations in average sediment concentrations, a dynamic error component, and the uncertainty in concentration measurements. The parameters are used in a Kalman filter for on-line estimation and an associated smoother for off-line estimation of suspended-sediment concentrations. The accuracies of the optimal estimators are compared with alternative time-averaging interpolators and flow-weighting regression estimators by use of long-term daily-mean suspended-sediment concentration and streamflow data from 10 sites within the United States. For sampling intervals from 3 to 48 days, the standard errors of on-line and off-line optimal estimators ranged from 52.7 to 107%, and from 39.5 to 93.0%, respectively. The corresponding standard errors of linear and cubic-spline interpolators ranged from 48.8 to 158%, and from 50.6 to 176%, respectively. The standard errors of simple and multiple regression estimators, which did not vary with the sampling interval, were 124 and 105%, respectively. Thus, the optimal off-line estimator (Kalman smoother) had the lowest error characteristics of those evaluated. Because suspended-sediment concentrations are typically measured at less than 3-day intervals, use of optimal estimators will likely result in significant improvements in the accuracy of continuous suspended-sediment concentration records. Additional research on the integration of direct suspended-sediment concentration measurements and optimal estimators applied at hourly or shorter intervals is needed.
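    The on-line/off-line distinction above is the Kalman filter versus the fixed-interval smoother. The sketch below runs both on a synthetic, sparsely observed random-walk series; the actual estimators also include streamflow and seasonal regression terms, which are omitted here.

```python
import numpy as np

# Hedged 1-D sketch: a forward Kalman filter (on-line estimate) over daily
# log-concentrations modelled as a random walk, followed by a fixed-interval
# (RTS) smoother (off-line estimate). All numbers are synthetic.

rng = np.random.default_rng(3)
n_days = 120
truth = np.cumsum(rng.normal(0, 0.05, n_days)) + 2.0    # latent log-concentration
obs = np.full(n_days, np.nan)
obs[::7] = truth[::7] + rng.normal(0, 0.15, len(truth[::7]))  # weekly samples

q, r = 0.05 ** 2, 0.15 ** 2      # process and measurement variances
x, p = 2.0, 1.0                  # initial state and variance
xf, pf, xp, pp = [], [], [], []  # filtered and predicted means/variances

for z in obs:                    # forward Kalman filter
    x_pred, p_pred = x, p + q
    xp.append(x_pred)
    pp.append(p_pred)
    if not np.isnan(z):
        k = p_pred / (p_pred + r)
        x = x_pred + k * (z - x_pred)
        p = (1 - k) * p_pred
    else:
        x, p = x_pred, p_pred
    xf.append(x)
    pf.append(p)

xs = xf.copy()                   # RTS smoother, backward pass
for t in range(n_days - 2, -1, -1):
    c = pf[t] / pp[t + 1]
    xs[t] = xf[t] + c * (xs[t + 1] - xp[t + 1])

rmse_f = np.sqrt(np.mean((np.array(xf) - truth) ** 2))
rmse_s = np.sqrt(np.mean((np.array(xs) - truth) ** 2))
print(f"filter RMSE {rmse_f:.3f}  smoother RMSE {rmse_s:.3f}")  # smoother typically lower
```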

  11. Methods of automatic nucleotide-sequence analysis. Multicomponent spectrophotometric analysis of mixtures of nucleic acid components by a least-squares procedure

    PubMed Central

    Lee, Sheila; McMullen, D.; Brown, G. L.; Stokes, A. R.

    1965-01-01

    1. A theoretical analysis of the errors in multicomponent spectrophotometric analysis of nucleoside mixtures, by a least-squares procedure, has been made to obtain an expression for the error coefficient, relating the error in calculated concentration to the error in extinction measurements. 2. The error coefficients, which depend only on the `library' of spectra used to fit the experimental curves, have been computed for a number of `libraries' containing the following nucleosides found in s-RNA: adenosine, guanosine, cytidine, uridine, 5-ribosyluracil, 7-methylguanosine, 6-dimethylaminopurine riboside, 6-methylaminopurine riboside and thymine riboside. 3. The error coefficients have been used to determine the best conditions for maximum accuracy in the determination of the compositions of nucleoside mixtures. 4. Experimental determinations of the compositions of nucleoside mixtures have been made and the errors found to be consistent with those predicted by the theoretical analysis. 5. It has been demonstrated that, with certain precautions, the multicomponent spectrophotometric method described is suitable as a basis for automatic nucleotide-composition analysis of oligonucleotides containing nine nucleotides. Used in conjunction with continuous chromatography and flow chemical techniques, this method can be applied to the study of the sequence of s-RNA. PMID:14346087
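    The least-squares decomposition and the error coefficients can be illustrated with a small synthetic library: the pseudo-inverse both yields the concentrations and shows how extinction errors propagate into each concentration. The Gaussian bands below are stand-ins, not the nucleoside spectra.

```python
import numpy as np

# Hedged sketch of multicomponent least-squares analysis. A measured extinction
# spectrum is fit as a linear combination of library component spectra; the
# rows of the pseudo-inverse show how extinction errors map into concentration
# errors. The "library" is three synthetic Gaussian bands.

wavelengths = np.linspace(230, 300, 71)          # nm

def band(center, width):
    return np.exp(-0.5 * ((wavelengths - center) / width) ** 2)

library = np.column_stack([band(250, 8), band(262, 9), band(275, 10)])
true_conc = np.array([0.8, 1.2, 0.5])

rng = np.random.default_rng(4)
measured = library @ true_conc + rng.normal(0, 0.003, len(wavelengths))

pinv = np.linalg.pinv(library)                   # least-squares solution operator
conc_hat = pinv @ measured

# error coefficients: sigma_conc_i ~ coeff_i * sigma_extinction for
# independent random extinction errors (row norms of the pseudo-inverse)
error_coeff = np.sqrt((pinv ** 2).sum(axis=1))

print("estimated concentrations:", np.round(conc_hat, 3))
print("error coefficients:", np.round(error_coeff, 3))
```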

  12. Daily values flow comparison and estimates using program HYCOMP, version 1.0

    USGS Publications Warehouse

    Sanders, Curtis L.

    2002-01-01

    A method used by the U.S. Geological Survey for quality control in computing daily value flow records is to compare hydrographs of computed flows at a station under review to hydrographs of computed flows at a selected index station. The hydrographs are placed on top of each other (as hydrograph overlays) on a light table, compared, and missing daily flow data estimated. This method, however, is subjective and can produce inconsistent results, because hydrographers can differ when calculating acceptable limits of deviation between observed and estimated flows. Selection of appropriate index stations also is judgmental, giving no consideration to the mathematical correlation between the review station and the index station(s). To address the limitation of the hydrograph overlay method, a set of software programs, written in the SAS macro language, was developed and designated Program HYCOMP. The program automatically selects statistically comparable index stations by correlation and regression, and performs hydrographic comparisons and estimates of missing data by regressing daily mean flows at the review station against -8 to +8 lagged flows at one or two index stations and day-of-week. Another advantage that HYCOMP has over the graphical method is that estimated flows, the criteria for determining the quality of the data, and the selection of index stations are determined statistically, and are reproducible from one user to another. HYCOMP will load the most-correlated index stations into another file containing the "best index stations," but will not overwrite stations already in the file. A knowledgeable user should delete unsuitable index stations from this file based on standard error of estimate, hydrologic similarity of candidate index stations to the review station, and knowledge of the individual station characteristics. Also, the user can add index stations not selected by HYCOMP, if desired. Once the file of best-index stations is created, a user may do hydrographic comparison and data estimates by entering the number of the review station, selecting an index station, and specifying the periods to be used for regression and plotting. For example, the user can restrict the regression to ice-free periods of the year to exclude flows estimated during iced conditions. However, the regression could still be used to estimate flow during iced conditions. HYCOMP produces the standard error of estimate as a measure of the central scatter of the regression and R-square (coefficient of determination) for evaluating the accuracy of the regression. Output from HYCOMP includes plots of percent residuals against (1) time within the regression and plot periods, (2) month and day of the year for evaluating seasonal bias in the regression, and (3) the magnitude of flow. For hydrographic comparisons, it plots 2-month segments of hydrographs over the selected plot period showing the observed flows, the regressed flows, the 95 percent confidence limit flows, flow measurements, and regression limits. If the observed flows at the review station remain outside the 95 percent confidence limits for a prolonged period, there may be some error in the flows at the review station or at the index station(s). In addition, daily minimum and maximum temperatures and daily rainfall are shown on the hydrographs, if available, to help indicate whether an apparent change in flow may result from rainfall or from changes in backwater from melting ice or freezing water.
    HYCOMP statistically smooths estimated flows from non-missing flows at the edges of the gaps in data into regressed flows at the center of the gaps using the Kalman smoothing algorithm. Missing flows are automatically estimated by HYCOMP, but the user also can specify that periods of erroneous, but nonmissing, flows be estimated by the program.
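
    A minimal sketch of the underlying idea (not the SAS program itself): review-station daily flows are regressed on lagged index-station flows and the fit is used to estimate flows across a gap. For brevity the sketch uses lags -2 to +2 and omits the day-of-week term and the Kalman smoothing at gap edges; the station data are synthetic.

```python
# Minimal sketch of hydrographic comparison by lagged regression: review-station
# daily flows regressed on shifted index-station flows, then used to fill a gap.
import numpy as np

def lagged_design(x, lags):
    """Design matrix of shifted index-station flows (wrapped edge rows dropped later)."""
    cols = [np.roll(x, lag) for lag in lags]     # lag > 0 looks back, lag < 0 looks ahead
    return np.column_stack([np.ones(len(x))] + cols)

rng = np.random.default_rng(2)
n = 400
index_flow = 50 + 20 * np.sin(np.arange(n) / 30.0) + rng.normal(0, 2, n)
review_flow = 0.8 * index_flow + 5 + rng.normal(0, 1.5, n)
review_flow[150:180] = np.nan                    # a gap to be estimated

lags = range(-2, 3)                              # the report uses -8 to +8 plus day-of-week
X = lagged_design(np.log(index_flow), lags)
y = np.log(review_flow)

fit_rows = ~np.isnan(y)
fit_rows[:2] = False                             # drop rows affected by wrap-around
fit_rows[-2:] = False

beta, *_ = np.linalg.lstsq(X[fit_rows], y[fit_rows], rcond=None)
estimated_gap = np.exp(X[150:180] @ beta)        # filled daily flows for the gap
resid = y[fit_rows] - X[fit_rows] @ beta
print(f"approximate standard error of estimate: {100 * np.expm1(resid.std()):.1f}%")
```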

  13. Technical Development for S-CO2 Advanced Energy Conversion

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Anderson, Mark; Ranjan, Devesh; Hassan, Yassin

    This report is divided into four parts. The first part describes the methods used to measure and model the flow of supercritical carbon dioxide (S-CO2) through annuli and straight-through labyrinth seals. The effects of shaft eccentricity in small-diameter annuli were observed for length-to-hydraulic-diameter (L/D) ratios of 6, 12, 143, and 235. Flow rates through tooth-cavity labyrinth seals were measured for inlet pressures of 7.7, 10, and 11 MPa with corresponding inlet densities of 325, 475, and 630 kg/m3. Various leakage models were compared to these results to assess their applicability to supercritical carbon dioxide. Flow-rate measurements were also made with varying tooth number for labyrinth seals of the same total length. The second part describes the computational study performed to understand leakage through the labyrinth seals using the open-source CFD package OpenFOAM. A Fluid Property Interpolation Tables (FIT) program was implemented in OpenFOAM to accurately model the properties of CO2 required to solve the governing equations. To predict the flow behavior in the two-phase dome, the Homogeneous Equilibrium Model (HEM) is assumed to be valid. Experimental results for a plain orifice (L/D ~ 5) were used to show the capabilities of the FIT model implemented in OpenFOAM. Error analysis indicated that OpenFOAM is capable of predicting the experimental data within ±10% error, with the majority of data close to ±5% error. Following the validation of the computational model, effects of geometrical parameters and operating conditions were isolated from each other and a parametric study was performed in two parts to understand their effects on leakage flow. The third part provides the details of the constructed heat-exchanger test facility and presents the experimental results obtained to investigate the effects of buoyancy on heat-transfer characteristics of supercritical carbon dioxide in heating mode. Turbulent flows with Reynolds numbers up to 60,000, at operating pressures of 7.5, 8.1, and 10.2 MPa, were tested in a round tube. Local heat-transfer coefficients were obtained from measured wall temperatures over a large set of experimental parameters that varied inlet temperature from 20 °C to 55 °C, mass flux from 150 to 350 kg/m2s, and heat flux up to a maximum of 65 kW/m2. Horizontal, upward, and downward flows were tested to investigate the unusual heat-transfer characteristics attributable to buoyancy and flow acceleration caused by large variations in density. The final part presents a simplified analysis of the possibility of using a wet cooling tower to reject heat from the supercritical carbon dioxide Brayton cycle power converter for the AFR-100 and ABR-1000 plants. A code was developed to estimate the tower dimensions, power and water consumption, and to perform economic analysis; it was verified by comparing its calculations to a vendor quote. The effect of ambient air and water conditions on the sizing and construction of the cooling tower as well as the cooler is studied. Finally, a cost-based optimization technique is used to estimate the optimum water conditions that will improve the plant economics.

  14. Testing FlowTracker2 Performance and Wading Rod Flow Disturbance in Laboratory Tow Tanks

    NASA Astrophysics Data System (ADS)

    Fan, X.; Wagenaar, D.

    2016-12-01

    The FlowTracker2 was released in February 2016 by SonTek (Xylem) as a more feature-rich and technologically advanced replacement for the Original FlowTracker ADV. These instruments are Acoustic Doppler Velocimeters (ADVs) used for taking high-precision wading discharge and velocity measurements. The accuracy of the FlowTracker2 probe was tested in tow tanks at three different facilities: the USGS Hydrologic Instrumentation Facility (HIF), the Swiss Federal Institute for Metrology (METAS), and the SonTek Research and Development facility. Multiple mounting configurations were examined, including mounting the ADV probe directly to the tow carts and incorporating the two most-used wading rods for the FlowTracker (round and hex). Tow speeds ranged from 5 cm/s to 1.5 m/s, and different tow-tank seeding schemes and wait times were examined. In addition, the performance of the FlowTracker2 probe in low Signal-to-Noise Ratio (SNR) environments was compared to the Original FlowTracker ADV. Results confirmed that the FlowTracker2 probe itself performed well within the advertised 1% + 0.25 cm/s accuracy specification. Tows using the wading rods reduced the measured velocity by 1.3% of the expected velocity due to flow disturbance, a result similar to the Original FlowTracker ADV despite the change in the FlowTracker2 probe design. Finally, due to improvements in its electronics, the FlowTracker2's performance in low SNR tests exceeded that of the Original FlowTracker ADV, showing less standard error in these conditions than its predecessor.
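
    For reference, the quoted specification combines a relative and an absolute term. The snippet below shows one way such a tolerance check could be written; the 1% + 0.25 cm/s figures come from the abstract, while the example readings are invented.

```python
# Small sketch: checking a tow-cart comparison against an accuracy specification
# of 1% of reading + 0.25 cm/s. The example readings are illustrative only.
def within_spec(measured_cm_s, reference_cm_s, pct=0.01, floor_cm_s=0.25):
    tolerance = pct * abs(reference_cm_s) + floor_cm_s
    return abs(measured_cm_s - reference_cm_s) <= tolerance

print(within_spec(measured_cm_s=100.9, reference_cm_s=100.0))  # True  (error 0.9 <= 1.25)
print(within_spec(measured_cm_s=6.0,   reference_cm_s=5.0))    # False (error 1.0 >  0.30)
```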

  15. Physiologically assessed hot flashes and endothelial function among midlife women.

    PubMed

    Thurston, Rebecca C; Chang, Yuefang; Barinas-Mitchell, Emma; Jennings, J Richard; von Känel, Roland; Landsittel, Doug P; Matthews, Karen A

    2017-08-01

    Hot flashes are experienced by most midlife women. Emerging data indicate that they may be associated with endothelial dysfunction. No studies have tested whether hot flashes are associated with endothelial function using physiologic measures of hot flashes. We tested whether physiologically assessed hot flashes were associated with poorer endothelial function. We also considered whether age modified associations. Two hundred seventy-two nonsmoking women reporting either daily hot flashes or no hot flashes, aged 40 to 60 years, and free of clinical cardiovascular disease, underwent ambulatory physiologic hot flash and diary hot flash monitoring; a blood draw; and ultrasound measurement of brachial artery flow-mediated dilation to assess endothelial function. Associations between hot flashes and flow-mediated dilation were tested in linear regression models controlling for lumen diameter, demographics, cardiovascular disease risk factors, and estradiol. In multivariable models incorporating cardiovascular disease risk factors, significant interactions by age (P < 0.05) indicated that among the younger tertile of women in the sample (age 40-53 years), the presence of hot flashes (beta [standard error] = -2.07 [0.79], P = 0.01), and more frequent physiologic hot flashes (for each hot flash: beta [standard error] = -0.10 [0.05], P = 0.03, multivariable) were associated with lower flow-mediated dilation. Associations were not accounted for by estradiol. Associations were not observed among the older women (age 54-60 years) or for self-reported hot flash frequency, severity, or bother. Among the younger women, hot flashes explained more variance in flow-mediated dilation than standard cardiovascular disease risk factors or estradiol. Among younger midlife women, frequent hot flashes were associated with poorer endothelial function and may provide information about women's vascular status beyond cardiovascular disease risk factors and estradiol.

  16. Evaluation of the Lattice-Boltzmann Equation Solver PowerFLOW for Aerodynamic Applications

    NASA Technical Reports Server (NTRS)

    Lockard, David P.; Luo, Li-Shi; Singer, Bart A.; Bushnell, Dennis M. (Technical Monitor)

    2000-01-01

    A careful comparison of the performance of a commercially available Lattice-Boltzmann Equation solver (PowerFLOW) was made with a conventional, block-structured computational fluid-dynamics code (CFL3D) for the flow over a two-dimensional NACA-0012 airfoil. The results suggest that the version of PowerFLOW used in the investigation produced solutions with large errors in the computed flow field; these errors are attributed to inadequate resolution of the boundary layer for reasons related to grid resolution and primitive turbulence modeling. The requirement of square grid cells in the PowerFLOW calculations limited the number of points that could be used to span the boundary layer on the wing and still keep the computation size small enough to fit on the available computers. Although not discussed in detail, disappointing results were also obtained with PowerFLOW for a cavity flow and for the flow around a generic helicopter configuration.

  17. River flow modeling using artificial neural networks in Kapuas river, West Kalimantan, Indonesia

    NASA Astrophysics Data System (ADS)

    Herawati, Henny; Suripin, Suharyanto

    2017-11-01

    The Kapuas River is located in the province of West Kalimantan. It is 1,086 km long and its basin covers about 100,000 km2. River flow data for such a long river with a very wide catchment are difficult to obtain, yet flow data are essential for planning waterworks. Predicting the flow in the catchment area requires many hydrological coefficients, so it is very difficult to obtain results close to the real conditions. This paper demonstrates that an artificial neural network (ANN) can be used to predict the flow. The ANN technique is used to predict the water discharge of the Kapuas River from rainfall and evaporation data. Training the artificial neural network model on the available data gave a mean square error (MSE) of 0.00007. River flow predictions could then be carried out after the training. The results showed differences between measured and predicted water discharge of about 4%.
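
    A minimal sketch of the kind of model described, under assumptions of my own (synthetic data, a single small hidden layer, scikit-learn's MLPRegressor): daily rainfall and evaporation are mapped to discharge and the training MSE is reported on normalised values, as in the paper.

```python
# Minimal sketch (synthetic data, illustrative network size): an MLP mapping
# daily rainfall and evaporation to river discharge, reporting training MSE
# on normalised discharge values.
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(3)
n = 1000
rain = rng.gamma(2.0, 5.0, n)                  # mm/day
evap = rng.normal(4.0, 1.0, n)                 # mm/day
# Synthetic "discharge" responding to effective rainfall with a short memory
flow = np.convolve(np.clip(rain - evap, 0, None), np.ones(5) / 5, mode="same") + 2

X = np.column_stack([rain, evap])
y = (flow - flow.min()) / (flow.max() - flow.min())    # normalise to [0, 1]

model = MLPRegressor(hidden_layer_sizes=(8,), max_iter=5000, random_state=0)
model.fit(X, y)
mse = mean_squared_error(y, model.predict(X))
print(f"training MSE on normalised discharge: {mse:.5f}")
```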

  18. Peak flow estimation in ungauged basins by means of water level data analysis

    NASA Astrophysics Data System (ADS)

    Corato, G.; Moramarco, T.; Tucciarelli, T.

    2009-04-01

    Discharge hydrograph estimation in rivers is usually carried out by means of water level measurements and the use of a water depth - discharge relationship. The water depth - discharge curve is obtained by integrating local velocities measured in a given section at specified water depth values. Building up such a curve is very expensive, and very often the highest points, used for the peak flow estimation, are the result of rough extrapolation of points corresponding to much lower water depths. Recently, discharge estimation methodologies based only on the analysis of synchronous water level data recorded in two different river sections located some kilometers apart have been developed. These methodologies are based only on the analysis of the water levels, the knowledge of the river bed elevations within the two sections, and the use of a diffusive flow routing numerical model. The bed roughness estimation, in terms of average Manning coefficient, is carried out along with the discharge hydrograph estimation. The 1D flow routing model is given by the following Saint-Venant equations, simplified according to the diffusive hypothesis: $\partial \sigma / \partial t + \partial q / \partial x = 0$ (1) and $\partial h / \partial x + (S_f - S_0) = 0$ (2), where $q(x,t)$ is the discharge, $h(x,t)$ is the water depth, $\sigma(x,t)$ is the wetted cross-sectional area of the river section, $S_f$ is the energy slope and $S_0$ is the bed slope. The energy slope is related to the average Manning coefficient $n$ by the Chezy relationship: $S_f = q^2 n^2 / (\sigma^2 \Re^{4/3})$ (3), where $\Re$ is the hydraulic radius. The upstream boundary condition of the flow routing model is given by the measured upstream water level hydrograph. The computational domain is extended some kilometers downstream of the second measurement section and the downstream boundary condition is properly approximated. This avoids the use of the downstream measured data for the solution of the system (1)-(3) and limits the model error even in the case of subcritical flow. The optimal average Manning coefficient is obtained by fitting the water level data available in the downstream measurement section with the computed ones. The optimal discharge hydrograph estimated in the upstream measurement section is given by the function q(0,t) computed in the first section (where x = 0) using the optimal Manning coefficient. Two different fitting quality criteria are compared and their practical implications are discussed; the first one is the equality of the computed and the measured time peak lag between the first and the second measurement section; the second one is the minimization of the total square error between the measured and the computed downstream water level hydrographs. The uniqueness and identifiability properties of the associated inverse problem are analyzed, and a model error analysis is carried out addressing the most relevant sources of error arising from the adopted approximations. Three case studies previously used for the validation of the proposed methodology are reviewed. The first two are water level hydrographs collected in two sections of the Arno river (Tuscany, Italy) and the Tiber river (Umbria, Italy). Water level and discharge hydrographs recorded during many storm events were available in both cases. The optimal average Manning coefficient has been estimated in both cases using the data of a single event, properly selected among all the available ones. In the third case, concerning historical data collected in a small tributary of the Tanagro river (Campania, Italy), three water level hydrographs were measured in three different sections of the channel.
    This made it possible to carry out the discharge estimation using the data collected in only two of the three sections, with the data of the third one used for validation. The results obtained in the three test cases highlight the advantages and the limits of the adopted analysis. The advantage is the simplicity of the hardware required for the data acquisition, which can easily be operated continuously in time, also during very bad weather conditions and under long-distance control. A first limit is the assumption of negligible inflow between the two measurement sections. Because the distance between the two sections must be large enough to measure the time lag between the two hydrographs, this limit can result in a difficult selection of the measurement sections. A second limit is the real heterogeneity of the bed roughness, which produces a water level hydrograph shape different from the computed one. Preliminary results of a new, multiparametric data analysis are finally presented.
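
    To make equations (2)-(3) concrete, the sketch below inverts them for a wide rectangular channel: the friction slope is approximated from synchronous levels at two sections and converted into a discharge hydrograph for a given Manning coefficient. In the methodology above, n itself is calibrated by matching computed and measured downstream levels with a diffusive routing model; here n, the geometry, and the level hydrographs are illustrative values only.

```python
# Minimal sketch of equations (2)-(3) for a wide rectangular channel: the
# friction slope is approximated from synchronous levels at two sections,
# S_f = S_0 - dh/dx, and inverted through Manning/Chezy,
# q = (sigma / n) * R**(2/3) * sqrt(S_f), to give a discharge hydrograph.
# All geometry and hydrograph values are illustrative.
import numpy as np

def discharge_from_levels(h_up, h_dn, reach_length, bed_drop, width, n_manning):
    """h_up, h_dn: water depths (m) at the upstream/downstream sections;
    reach_length: distance between sections (m); bed_drop: bed elevation drop (m)."""
    S0 = bed_drop / reach_length
    dhdx = (h_dn - h_up) / reach_length
    Sf = np.clip(S0 - dhdx, 1e-8, None)           # eq. (2), kept positive
    sigma = width * h_up                           # flow area, rectangular section
    R = sigma / (width + 2 * h_up)                 # hydraulic radius
    return (sigma / n_manning) * R ** (2.0 / 3.0) * np.sqrt(Sf)   # eq. (3) inverted

t = np.arange(0, 48, 0.5)                          # hours
h_up = 1.0 + 1.5 * np.exp(-((t - 18) / 6.0) ** 2)  # synthetic upstream flood wave
h_dn = 1.0 + 1.4 * np.exp(-((t - 20) / 6.5) ** 2)  # attenuated, lagged downstream wave
q = discharge_from_levels(h_up, h_dn, reach_length=8000.0, bed_drop=4.0,
                          width=40.0, n_manning=0.035)
```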

  19. Increasing the reliability of solution exchanges by monitoring solenoid valve actuation.

    PubMed

    Auzmendi, Jerónimo Andrés; Moffatt, Luciano

    2010-01-15

    Solenoid valves are a core component of most solution perfusion systems used in neuroscience research. As they open and close, they control the flow of solution through each perfusion line, thereby modulating the timing and sequence of chemical stimulation. The valves feature a ferromagnetic plunger that moves due to the magnetization of the solenoid and returns to its initial position with the aid of a spring. The delays between the time of voltage application or removal and the actual opening or closing of the valve are difficult to predict beforehand and have to be measured experimentally. Here we propose a simple method for monitoring whether and when the solenoid valve opens and closes. The proposed method detects the movement of the plunger as it generates a measurable signal on the solenoid that surrounds it. Using this plunger signal, we detected the opening and closing of diaphragm and pinch solenoid valves with a systematic error of less than 2ms. After this systematic error is subtracted, the trial-to-trial error was below 0.2ms.

  20. Interrupted infusion of echocardiographic contrast as a basis for accurate measurement of myocardial perfusion: ex vivo validation and analysis procedures.

    PubMed

    Toledo, Eran; Collins, Keith A; Williams, Ursula; Lammertin, Georgeanne; Bolotin, Gil; Raman, Jai; Lang, Roberto M; Mor-Avi, Victor

    2005-12-01

    Echocardiographic quantification of myocardial perfusion is based on analysis of contrast replenishment after destructive high-energy ultrasound impulses (flash-echo). This technique is limited by nonuniform microbubble destruction and the dependency on exponential fitting of a small number of noisy time points. We hypothesized that brief interruptions of contrast infusion (ICI) would result in uniform contrast clearance followed by slow replenishment and, thus, would allow analysis from multiple data points without exponential fitting. Electrocardiographically triggered images were acquired in 14 isolated rabbit hearts (Langendorff) at 3 levels of coronary flow (baseline, 50%, and 15%) during contrast infusion (Definity) with flash-echo and with a 20-second infusion interruption. Myocardial videointensity was measured over time from flash-echo sequences, from which the characteristic constant beta was calculated using an exponential fit. Peak contrast inflow rate was calculated from ICI data using analysis of local time derivatives. Computer simulations were used to investigate the effects of noise on the accuracy of peak contrast inflow rate and beta calculations. ICI resulted in uniform contrast clearance and baseline replenishment times of 15 to 25 cardiac cycles. Calculated peak contrast inflow rate followed the changes in coronary flow in all hearts at both levels of reduced flow (P < .05) and had a low intermeasurement variability of 7 +/- 6%. With flash-echo, contrast clearance was less uniform and baseline replenishment times were only 4 to 6 cardiac cycles. Beta decreased significantly only at 15% flow, and had intermeasurement variability of 42 +/- 33%. Computer simulations showed that measurement errors in both perfusion indices increased with noise, but beta had larger errors at higher rates of contrast inflow. ICI provides the basis for accurate and reproducible quantification of myocardial perfusion using fast and robust numeric analysis, and may constitute an alternative to the currently used techniques.
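
    The two perfusion indices can be illustrated on synthetic replenishment data: beta from an exponential fit (flash-echo style) and the peak contrast inflow rate from smoothed local time derivatives (ICI style). The curve shape, noise level, and scipy-based fit below are assumptions for illustration, not the study's analysis code.

```python
# Minimal sketch of the two perfusion indices on synthetic replenishment data:
# beta from an exponential fit y = A(1 - exp(-beta t)), and the peak contrast
# inflow rate as the maximum smoothed time derivative.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(4)
t = np.arange(0, 25.0, 1.0)                      # cardiac cycles
A_true, beta_true = 100.0, 0.25
y = A_true * (1 - np.exp(-beta_true * t)) + rng.normal(0, 3.0, t.size)

# Flash-echo style index: exponential fit for beta
def replenishment(t, A, beta):
    return A * (1 - np.exp(-beta * t))

(A_hat, beta_hat), _ = curve_fit(replenishment, t, y, p0=(80.0, 0.1))

# ICI style index: peak inflow rate from smoothed local derivatives
kernel = np.ones(5) / 5
dy_dt = np.gradient(np.convolve(y, kernel, mode="same"), t)
peak_inflow_rate = dy_dt.max()

print(f"beta = {beta_hat:.3f}, peak inflow rate = {peak_inflow_rate:.2f} a.u./cycle")
```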

  1. Gauging Through the Crowd: A Crowd-Sourcing Approach to Urban Rainfall Measurement and Storm Water Modeling Implications

    NASA Astrophysics Data System (ADS)

    Yang, Pan; Ng, Tze Ling

    2017-11-01

    Accurate rainfall measurement at high spatial and temporal resolutions is critical for the modeling and management of urban storm water. In this study, we conduct computer simulation experiments to test the potential of a crowd-sourcing approach, where smartphones, surveillance cameras, and other devices act as precipitation sensors, as an alternative to the traditional approach of using rain gauges to monitor urban rainfall. The crowd-sourcing approach is promising as it has the potential to provide high-density measurements, albeit with relatively large individual errors. We explore the potential of this approach for urban rainfall monitoring and the subsequent implications for storm water modeling through a series of simulation experiments involving synthetically generated crowd-sourced rainfall data and a storm water model. The results show that even under conservative assumptions, crowd-sourced rainfall data lead to more accurate modeling of storm water flows as compared to rain gauge data. We observe the relative superiority of the crowd-sourcing approach to vary depending on crowd participation rate, measurement accuracy, drainage area, choice of performance statistic, and crowd-sourced observation type. A possible reason for our findings is the differences between the error structures of crowd-sourced and rain gauge rainfall fields resulting from the differences between the errors and densities of the raw measurement data underlying the two field types.
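
    The trade-off described (many noisy sensors versus a few accurate gauges) can be reproduced in a toy Monte Carlo experiment. Everything in the sketch below, sensor counts, error magnitudes, and rainfall statistics, is an invented illustration of the sampling argument, not the study's simulation framework.

```python
# Toy simulation (illustrative numbers only): areal-mean rainfall estimated from
# many noisy crowd-sourced readings versus a few accurate rain gauges, repeated
# over storms to compare the RMSE of the two monitoring approaches.
import numpy as np

rng = np.random.default_rng(5)
n_storms = 500
n_crowd, crowd_sigma = 200, 2.0       # many observers, large individual error (mm)
n_gauge, gauge_sigma = 3, 0.2         # few gauges, small error (mm)

true_mean = rng.gamma(2.0, 5.0, n_storms)         # areal-mean rainfall per storm (mm)
spatial_sigma = 3.0                                # between-point variability (mm)

def areal_estimate(n_sensors, sensor_sigma):
    point_rain = true_mean[:, None] + rng.normal(0, spatial_sigma, (n_storms, n_sensors))
    readings = point_rain + rng.normal(0, sensor_sigma, (n_storms, n_sensors))
    return readings.mean(axis=1)

rmse_crowd = np.sqrt(np.mean((areal_estimate(n_crowd, crowd_sigma) - true_mean) ** 2))
rmse_gauge = np.sqrt(np.mean((areal_estimate(n_gauge, gauge_sigma) - true_mean) ** 2))
print(f"crowd RMSE {rmse_crowd:.2f} mm vs gauge RMSE {rmse_gauge:.2f} mm")
```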

  2. Evaporation, precipitation, and associated salinity changes at a humid, subtropical estuary

    USGS Publications Warehouse

    Sumner, D.M.; Belaineh, G.

    2005-01-01

    The distilling effect of evaporation and the diluting effect of precipitation on salinity at two estuarine sites in the humid subtropical setting of the Indian River Lagoon, Florida, were evaluated based on daily evaporation computed with an energy-budget method and measured precipitation. Despite the larger magnitude of evaporation (about 1,580 mm yr-1) compared to precipitation (about 1,180 mm yr-1) between February 2002 and January 2004, the variability of monthly precipitation-induced salinity changes was more than twice the variability of evaporation-induced changes. Use of a constant, mean value of evaporation, along with measured values of daily precipitation, was sufficient to produce simulated salinity changes that contained little monthly (root-mean-square error = 0.33‰ mo-1 and 0.52‰ mo-1 at the two sites) or cumulative error (<1‰ yr-1) compared to simulations that used computed daily values of evaporation. This result indicates that measuring the temporal variability in evaporation may not be critical to simulation of salinity within the lagoon. Comparison of evaporation- and precipitation-induced salinity changes with measured salinity changes indicates that evaporation and precipitation explained only 4% of the changes in salinity within a flow-through area of the lagoon; surface water and ocean inflows probably accounted for most of the variability in salinity at this site. Evaporation- and precipitation-induced salinity changes explained 61% of the variability in salinity at a flow-restricted part of the lagoon. © 2005 Estuarine Research Federation.

  3. Repeatability of Doppler ultrasound measurements of hindlimb blood flow in halothane anaesthetised horses.

    PubMed

    Raisis, A L; Young, L E; Meire, H; Walsh, K; Taylor, P M; Lekeux, P

    2000-05-01

    The purpose of this study was to determine the repeatability of femoral blood flow recorded using Doppler ultrasound in anaesthetised horses. Doppler ultrasound of the femoral artery and vein was performed in 6 horses anaesthetised with halothane and positioned in left lateral recumbency. Velocity spectra, recorded using low pulse repetition frequency, were used to calculate time-averaged mean velocity (TAV), velocity of component a (TaVa), velocity of component b (TaVb), volumetric flow, early diastolic deceleration slope (EDDS) and pulsatility index (PI). Within-patient variability was determined for sequential Doppler measurements recorded during a single standardised anaesthetic episode. Within-patient variability was also determined for Doppler and cardiovascular measurements recorded during 4 separate standardised anaesthetic episodes performed at intervals of at least one month. Within-patient variation during a single anaesthetic episode was small. Coefficients of variation (cv) were <12.5% for arterial measurements and <17% for venous measurements. Intraclass correlation coefficient was >0.75 for all measurements. No significant change was observed in measurements of cardiovascular function, suggesting that within-patient variation observed during a single anaesthetic episode was due to measurement error. In contrast, within-patient variation during 4 separate anaesthetic episodes was marked (cv>17%) for most Doppler measurements obtained from arteries and veins. Variation in measurements of cardiovascular function was also marked (cv>20%), suggesting that there is marked biological variation in the central and peripheral variables observed. Further studies are warranted to determine the ability of this technique to detect differences in blood flow during administration of different anaesthetic agents.

  4. An approach to measure parameter sensitivity in watershed ...

    EPA Pesticide Factsheets

    Hydrologic responses vary spatially and temporally according to watershed characteristics. In this study, the hydrologic models that we developed earlier for the Little Miami River (LMR) and Las Vegas Wash (LVW) watersheds were used for detailed sensitivity analyses. To compare the relative sensitivities of the hydrologic parameters of these two models, we used the Normalized Root Mean Square Error (NRMSE). By combining the NRMSE index with flow duration curve analysis, we derived an approach to measure parameter sensitivities under different flow regimes. Results show that the parameters related to groundwater are highly sensitive in the LMR watershed, whereas the LVW watershed is primarily sensitive to near-surface and impervious parameters. The high and medium flows are more impacted by most of the parameters. The low flow regime was highly sensitive to groundwater-related parameters. Moreover, our approach is found to be useful in facilitating model development and calibration. This journal article describes hydrological modeling of the effects of climate change and land use change on stream hydrology, and elucidates the importance of hydrological model construction in generating valid modeling results.
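
    A minimal sketch of the metric combination described, under the assumption that NRMSE is RMSE normalised by the observed range (one common convention; the study may normalise differently): sensitivities are evaluated separately for high, medium, and low flow regimes defined from the flow-duration curve. The flow series and thresholds are synthetic.

```python
# Minimal sketch: NRMSE between baseline and perturbed-parameter simulations,
# evaluated separately for high, medium and low flow regimes defined from the
# flow-duration curve. NRMSE here is RMSE / (max - min) of the baseline series.
import numpy as np

def nrmse(sim, base):
    rmse = np.sqrt(np.mean((sim - base) ** 2))
    return rmse / (base.max() - base.min())

rng = np.random.default_rng(6)
base_flow = rng.lognormal(2.0, 1.0, 1000)                  # synthetic daily flows
perturbed = base_flow * (1 + rng.normal(0, 0.1, base_flow.size))

# Flow-duration based regimes: exceedance probability of each day's flow
exceed = np.argsort(np.argsort(-base_flow)) / base_flow.size
regimes = {"high (<20%)": exceed < 0.2,
           "medium (20-70%)": (exceed >= 0.2) & (exceed < 0.7),
           "low (>=70%)": exceed >= 0.7}
for name, mask in regimes.items():
    print(name, round(nrmse(perturbed[mask], base_flow[mask]), 3))
```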

  5. [Study of high temperature water vapor concentration measurement method based on absorption spectroscopy].

    PubMed

    Chen, Jiu-ying; Liu, Jian-guo; He, Jun-feng; He, Ya-bai; Zhang, Guang-le; Xu, Zhen-yu; Gang, Qiang; Wang, Liao; Yao, Lu; Yuan, Song; Ruan, Jun; Dai, Yun-hai; Kan, Rui-feng

    2014-12-01

    Tunable diode laser absorption spectroscopy (TDLAS) has been developed to realize the real-time and dynamic measurement of the combustion temperature, gas component concentration, velocity and other flow parameters, owing to its high sensitivity, fast time response, non-invasive character and robust nature. In order to obtain accurate water vapor concentration at high temperature, several absorption spectra of water vapor near 1.39 μm from 773 to 1273 K under ordinary pressure were recorded in a high temperature experiment setup using a narrow band diode laser. The absorbance of high temperature absorption spectra was calculated by combined multi-line nonlinear least squares fitting method. Two water vapor absorption lines near 7154.35 and 7157.73 cm(-1) were selected for measurement of water vapor at high temperature. A model method for high temperature water vapor concentration was first proposed. Water vapor concentration from the model method at high temperature is in accordance with theoretical reasoning, concentration measurement standard error is less than 0.2%, and the relative error is less than 6%. The feasibility of this measuring method is verified by experiment.

  6. A fine-wire thermocouple probe for measurement of stagnation temperatures in real gas hypersonic flows of nitrogen

    NASA Technical Reports Server (NTRS)

    Hollis, Brian R.; Griffith, Wayland C.; Yanta, William J.

    1991-01-01

    A fine-wire thermocouple probe was used to determine freestream stagnation temperatures in hypersonic flows. Data were gathered in an N2 blowdown wind tunnel with runtimes of 1-5 s. Tests were made at supply pressures between 30 and 1400 atm and supply temperatures between 700 and 1900 K, with Mach numbers of 14 to 16. An iterative procedure requiring thermocouple data, pitot pressure measurements, and supply conditions was used to determine test cell stagnation temperatures. Probe conduction and radiation losses, as well as real gas behavior of N2, were accounted for during analysis. Temperature measurement error was found to be 5 to 10 percent. A correlation was drawn between thermocouple diameter Reynolds number and temperature recovery ratio. Transient probe behavior was studied and was found to be adequate in temperature gradients up to 1000 K/s.

  7. Parallel pulse processing and data acquisition for high speed, low error flow cytometry

    DOEpatents

    van den Engh, Gerrit J.; Stokdijk, Willem

    1992-01-01

    A digitally synchronized parallel pulse processing and data acquisition system for a flow cytometer has multiple parallel input channels with independent pulse digitization and FIFO storage buffer. A trigger circuit controls the pulse digitization on all channels. After an event has been stored in each FIFO, a bus controller moves the oldest entry from each FIFO buffer onto a common data bus. The trigger circuit generates an ID number for each FIFO entry, which is checked by an error detection circuit. The system has high speed and low error rate.

  8. A photogrammetric technique for generation of an accurate multispectral optical flow dataset

    NASA Astrophysics Data System (ADS)

    Kniaz, V. V.

    2017-06-01

    The presence of an accurate dataset is the key requirement for the successful development of an optical flow estimation algorithm. A large number of freely available optical flow datasets were developed in recent years and gave rise to many powerful algorithms. However, most of the datasets include only images captured in the visible spectrum. This paper is focused on the creation of a multispectral optical flow dataset with an accurate ground truth. The generation of an accurate ground truth optical flow is a rather complex problem, as no device for error-free optical flow measurement has been developed to date. Existing methods for ground truth optical flow estimation are based on hidden textures, 3D modelling or laser scanning. Such techniques either work only with synthetic optical flow or provide only a sparse ground truth optical flow. In this paper a new photogrammetric method for generation of an accurate ground truth optical flow is proposed. The method combines the benefits of the accuracy and density of synthetic optical flow datasets with the flexibility of laser-scanning-based techniques. A multispectral dataset including various image sequences was generated using the developed method. The dataset is freely available on the accompanying web site.

  9. Estimating the State of Aerodynamic Flows in the Presence of Modeling Errors

    NASA Astrophysics Data System (ADS)

    da Silva, Andre F. C.; Colonius, Tim

    2017-11-01

    The ensemble Kalman filter (EnKF) has proven successful in fields such as meteorology, in which high-dimensional nonlinear systems render classical estimation techniques impractical. When the model used to forecast state evolution misrepresents important aspects of the true dynamics, estimator performance may degrade. In this work, parametrization and state augmentation are used to track misspecified boundary conditions (e.g., free stream perturbations). The resolution error is modeled as a Gaussian-distributed random variable with the mean (bias) and variance to be determined. The dynamics of the flow past a NACA 0009 airfoil at high angles of attack and moderate Reynolds number is represented by a Navier-Stokes equation solver with immersed-boundary capabilities. The pressure distribution on the airfoil or the velocity field in the wake, both randomized by synthetic noise, are sampled as measurement data and incorporated into the estimated state and bias following Kalman's analysis scheme. Insights about how to specify the modeling error covariance matrix and its impact on the estimator performance are conveyed. This work has been supported in part by a Grant from AFOSR (FA9550-14-1-0328) with Dr. Douglas Smith as program manager, and by a Science without Borders scholarship from the Ministry of Education of Brazil (Capes Foundation - BEX 12966/13-4).
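
    The state-augmentation idea can be shown on a toy linear problem: the ensemble state vector is extended with an unknown model bias, and the ensemble Kalman filter estimates both from noisy measurements of the state alone. The sketch below illustrates only the augmentation mechanics on a scalar AR(1) system, not the Navier-Stokes application; dynamics and noise levels are arbitrary.

```python
# Toy sketch of state augmentation in an ensemble Kalman filter: a scalar AR(1)
# "flow" state is augmented with an unknown constant bias in the forecast model,
# and the EnKF estimates both from noisy measurements of the state.
import numpy as np

rng = np.random.default_rng(7)
n_steps, n_ens = 200, 50
a, q, r = 0.95, 0.1, 0.2             # dynamics, process var, measurement var
true_bias = 0.8                       # model error the filter must learn

# Truth and measurements
x_true = np.zeros(n_steps)
for k in range(1, n_steps):
    x_true[k] = a * x_true[k - 1] + true_bias + rng.normal(0, np.sqrt(q))
y_meas = x_true + rng.normal(0, np.sqrt(r), n_steps)

# Augmented ensemble: column 0 = state, column 1 = bias estimate
ens = np.column_stack([rng.normal(0, 1, n_ens), rng.normal(0, 1, n_ens)])
H = np.array([1.0, 0.0])              # only the state is observed

for k in range(1, n_steps):
    # Forecast: biased model plus each member's own bias estimate
    ens[:, 0] = a * ens[:, 0] + ens[:, 1] + rng.normal(0, np.sqrt(q), n_ens)
    # Analysis with perturbed observations
    y_pert = y_meas[k] + rng.normal(0, np.sqrt(r), n_ens)
    P = np.cov(ens, rowvar=False)
    K = P @ H / (H @ P @ H + r)       # Kalman gain for a scalar observation
    ens += np.outer(y_pert - ens @ H, K)

print(f"estimated bias {ens[:, 1].mean():.2f} (true {true_bias})")
```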

  10. Skin friction measurements by a new nonintrusive double-laser-beam oil viscosity balance technique

    NASA Technical Reports Server (NTRS)

    Monson, D. J.; Higuchi, H.

    1980-01-01

    A portable dual-laser-beam interferometer that nonintrusively measures skin friction by monitoring the thickness change of an oil film subject to shear stress is described. The method is an advance over past versions in that the troublesome and error-introducing need to measure the distance to the oil leading edge and the starting time for the oil flow has been eliminated. The validity of the method was verified by measuring oil viscosity in the laboratory, and then using those results to measure skin friction beneath the turbulent boundary layer in a low-speed wind tunnel. The dual-laser-beam skin friction measurements are compared with Preston tube measurements, with mean velocity profile data in a 'law-of-the-wall' coordinate system, and with computations based on turbulent boundary-layer theory. Excellent agreement is found in all cases. This validation and the aforementioned improvements appear to make the present form of the instrument usable to measure skin friction reliably and nonintrusively in a wide range of flow situations in which previous methods are not practical.

  11. Skin Friction Measurements by a Dual-Laser-Beam Interferometer Technique

    NASA Technical Reports Server (NTRS)

    Monson, D. J.; Higuchi, H.

    1981-01-01

    A portable dual-laser-beam interferometer that nonintrusively measures skin friction by monitoring the thickness change of an oil film subject to shear stress is described. The method is an advance over past versions in that the troublesome and error-introducing need to measure the distance to the oil leading edge and the starting time for the oil flow has been eliminated. The validity of the method was verified by measuring oil viscosity in the laboratory, and then using those results to measure skin friction beneath the turbulent boundary layer in a low-speed wind tunnel. The dual-laser-beam skin friction measurements are compared with Preston tube measurements, with mean velocity profile data in a "law-of-the-wall" coordinate system, and with computations based on turbulent boundary-layer theory. Excellent agreement is found in all cases. (This validation and the aforementioned improvements appear to make the present form of the instrument usable to measure skin friction reliably and nonintrusively in a wide range of flow situations in which previous methods are not practical.)

  12. Creating drag and lift curves from soccer trajectories

    NASA Astrophysics Data System (ADS)

    Goff, John Eric; Kelley, John; Hobson, Chad M.; Seo, Kazuya; Asai, Takeshi; Choppin, S. B.

    2017-07-01

    Trajectory analysis is an alternative to using wind tunnels to measure a soccer ball’s aerodynamic properties. It has advantages over wind tunnel testing such as being more representative of game play. However, previous work has not presented a method that produces complete, speed-dependent drag and lift coefficients. Four high-speed cameras in stereo-calibrated pairs were used to measure the spatial co-ordinates for 29 separate soccer trajectories. Those trajectories span a range of launch speeds from 9.3 to 29.9 m s-1. That range encompasses low-speed laminar flow of air over a soccer ball, through the drag crises where air flow is both laminar and turbulent, and up to high-speed turbulent air flow. Results from trajectory analysis were combined to give speed-dependent drag and lift coefficient curves for the entire range of speeds found in the 29 trajectories. The average root mean square error between the measured and modelled trajectory was 0.028 m horizontally and 0.034 m vertically. The drag and lift crises can be observed in the plots of drag and lift coefficients respectively.

  13. Error Control with Perfectly Matched Layer or Damping Layer Treatments for Computational Aeroacoustics with Jet Flows

    NASA Technical Reports Server (NTRS)

    Goodrich, John W.

    2009-01-01

    In this paper we show by means of numerical experiments that the error introduced in a numerical domain because of a Perfectly Matched Layer or Damping Layer boundary treatment can be controlled. These experimental demonstrations are for acoustic propagation with the Linearized Euler Equations with both uniform and steady jet flows. The propagating signal is driven by a time harmonic pressure source. Combinations of Perfectly Matched and Damping Layers are used with different damping profiles. These layer and profile combinations allow the relative error introduced by a layer to be kept as small as desired, in principle. Tradeoffs between error and cost are explored.

  14. A fast non-contact imaging photoplethysmography method using a tissue-like model

    NASA Astrophysics Data System (ADS)

    McDuff, Daniel J.; Blackford, Ethan B.; Estepp, Justin R.; Nishidate, Izumi

    2018-02-01

    Imaging photoplethysmography (iPPG) allows non-contact, concomitant measurement and visualization of peripheral blood flow using just an RGB camera. Most iPPG methods require a window of temporal data and complex computation; this makes real-time measurement and spatial visualization impossible. We present a fast, "window-less", non-contact imaging photoplethysmography method, based on a tissue-like model of the skin, that allows accurate measurement of heart rate and heart rate variability parameters. The error in heart rate estimates is comparable to that of state-of-the-art techniques, and computation is much faster.

  15. Study of Periodical Flow Heat Transfer in an Internal Combustion Engine

    NASA Astrophysics Data System (ADS)

    Luo, Xi

    In-cylinder heat transfer is one of the most critical physical behaviors with a direct influence on engine-out emissions and thermal efficiency for IC engines. In-cylinder wall temperature has to be precisely controlled to achieve high efficiency and low emissions, but this cannot be done without knowing the gas-to-wall heat flux. This study reports on the development of a technique suitable for engine in-cylinder surface temperature measurement, because the measurement location is "hard to reach" for traditional methods. A laser-induced phosphorescence technique was used to study in-cylinder wall temperature effects on engine-out unburned hydrocarbons during the engine transitional period (warm-up). A linear correlation was found between the cylinder wall surface temperature and the unburned hydrocarbons at moderate and high charge densities. At low charge density, no clear correlation was observed because of misfire events. A new auto-background-correction infrared (IR) diagnostic was developed to measure the instantaneous in-cylinder surface temperature at 0.1 CAD resolution. A numerical mechanism was designed to suppress relatively low-frequency background noise and provide accurate in-cylinder surface temperature measurements with an error of less than 1.4% inside the IC engine. In addition, a proposed optical coating reduced time-delay errors by 50% compared to more conventional thermocouple techniques. A new cycle-averaged Res number was developed for IC engines to capture the characteristics of engine flow. Comparison and scaling between different engine flow parameters are possible by matching the averaged Res number. From experimental results, the engine flow motion was classified as intermittently turbulent, which differs from the fully developed turbulent assumption previously used in almost all engine simulations. The intermittent turbulence could have a great impact on engine heat transfer because of the transitional turbulence effect. A 3D engine CFD model further supports the existence of transitional turbulent flow. A new multi-zone heat transfer model is proposed for IC engines only. The model includes pressure-work effects and improved heat transfer prediction compared to the standard law-of-the-wall model.

  16. Validation of an ultrasound dilution technology for cardiac output measurement and shunt detection in infants and children.

    PubMed

    Lindberg, Lars; Johansson, Sune; Perez-de-Sa, Valeria

    2014-02-01

    To validate cardiac output measurements by ultrasound dilution technology (COstatus monitor) against those obtained by a transit-time ultrasound technology with a perivascular flow probe, and to investigate the ability of ultrasound dilution to estimate the pulmonary to systemic blood flow ratio in children. Prospective observational clinical trial. Pediatric cardiac operating theater in a university hospital. In 21 children (6.1 ± 2.6 kg, mean ± SD) undergoing heart surgery, cardiac output was simultaneously recorded by ultrasound dilution (extracorporeal arteriovenous loop connected to existing arterial and central venous catheters) and a transit-time ultrasound probe applied to the ascending aorta, and when possible, the main pulmonary artery. The pulmonary to systemic blood flow ratio estimated from ultrasound dilution curve analysis was compared with that estimated from transit-time ultrasound technology. Bland-Altman analysis of the whole cohort (90 pairs, before and after surgery) showed a bias between transit-time ultrasound (1.01 ± 0.47 L/min) and ultrasound dilution technology (1.03 ± 0.51 L/min) of -0.02 L/min, limits of agreement -0.3 to 0.3 L/min, and percentage error of 31%. In children with no residual shunts, the bias was -0.04 L/min, limits of agreement -0.28 to 0.2 L/min, and percentage error 19%. The pooled coefficient of variation for the whole cohort was 3.5% (transit-time ultrasound) and 6.3% (ultrasound dilution); in children without shunts, it was 2.9% (transit-time ultrasound) and 4% (ultrasound dilution), respectively. Ultrasound dilution identified the presence of shunts (pulmonary to systemic blood flow ≠ 1) with a sensitivity of 100% and a specificity of 92%. Mean pulmonary to systemic blood flow ratio by transit-time ultrasound was 2.6 ± 1.0 and by ultrasound dilution 2.2 ± 0.7 (not significant). The COstatus monitor is a reliable technique to measure cardiac output in children with high sensitivity and specificity for detecting the presence of shunts.
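
    For reference, the Bland-Altman quantities cited (bias, limits of agreement, percentage error) can be computed as in the sketch below. The paired cardiac output values are synthetic; only the formulas, with percentage error taken as 1.96 SD of the differences divided by the mean cardiac output, follow the usual convention.

```python
# Minimal sketch of a Bland-Altman comparison: bias, 95% limits of agreement,
# and percentage error for paired cardiac output measurements. Data are synthetic.
import numpy as np

rng = np.random.default_rng(8)
co_reference = rng.normal(1.0, 0.4, 90)                   # L/min, reference method
co_test = co_reference + rng.normal(-0.02, 0.15, 90)      # L/min, method under test

diff = co_test - co_reference
bias = diff.mean()
loa_low = bias - 1.96 * diff.std(ddof=1)
loa_high = bias + 1.96 * diff.std(ddof=1)
pct_error = 100 * 1.96 * diff.std(ddof=1) / np.mean((co_test + co_reference) / 2)

print(f"bias {bias:+.2f} L/min, limits of agreement {loa_low:.2f} to {loa_high:.2f} L/min, "
      f"percentage error {pct_error:.0f}%")
```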

  17. Use of streamflow data to estimate base flow/ground-water recharge for Wisconsin

    USGS Publications Warehouse

    Gebert, W.A.; Radloff, M.J.; Considine, E.J.; Kennedy, J.L.

    2007-01-01

    The average annual base flow/recharge was determined for streamflow-gaging stations throughout Wisconsin by base-flow separation. A map of the State was prepared that shows the average annual base flow for the period 1970-99 for watersheds at 118 gaging stations. Trend analysis was performed on 22 of the 118 streamflow-gaging stations that had long-term records, unregulated flow, and provided areal coverage of the State. The analysis found that a statistically significant increasing trend was occurring for watersheds where the primary land use was agriculture. Most gaging stations where the land cover was forest had no significant trend. A method to estimate the average annual base flow at ungaged sites was developed by multiple-regression analysis using basin characteristics. The equation with the lowest standard error of estimate, 9.5%, has drainage area, soil infiltration and base flow factor as independent variables. To determine the average annual base flow for smaller watersheds, estimates were made at low-flow partial-record stations in 3 of the 12 major river basins in Wisconsin. Regression equations were developed for each of the three major river basins using basin characteristics. Drainage area, soil infiltration, basin storage and base-flow factor were the independent variables in the regression equations with the lowest standard error of estimate. The standard error of estimate ranged from 17% to 52% for the three river basins. © 2007 American Water Resources Association.

  18. Time-resolved flow reconstruction with indirect measurements using regression models and Kalman-filtered POD ROM

    NASA Astrophysics Data System (ADS)

    Leroux, Romain; Chatellier, Ludovic; David, Laurent

    2018-01-01

    This article is devoted to the estimation of time-resolved particle image velocimetry (TR-PIV) flow fields using time-resolved point measurements of a voltage signal obtained by hot-film anemometry. A multiple linear regression model is first defined to map the TR-PIV flow fields onto the voltage signal. Due to the high temporal resolution of the signal acquired by the hot-film sensor, the estimates of the TR-PIV flow fields are obtained with a multiple linear regression method called orthonormalized partial least squares regression (OPLSR). Subsequently, this model is incorporated as the observation equation in an ensemble Kalman filter (EnKF) applied on a proper orthogonal decomposition reduced-order model to stabilize it while reducing the effects of the hot-film sensor noise. This method is assessed for the reconstruction of the flow around a NACA0012 airfoil at a Reynolds number of 1000 and an angle of attack of 20°. Comparisons with multi-time delay-modified linear stochastic estimation show that both the OPLSR and EnKF combined with OPLSR are more accurate as they produce a much lower relative estimation error, and provide a faithful reconstruction of the time evolution of the velocity flow fields.

  19. Magnetic Fluctuation-Driven Intrinsic Flow in a Toroidal Plasma

    NASA Astrophysics Data System (ADS)

    Brower, D. L.; Ding, W. X.; Lin, L.; Almagri, A. F.; den Hartog, D. J.; Sarff, J. S.

    2012-10-01

    Magnetic fluctuations have long been observed in various magnetic confinement configurations. These perturbations may arise naturally from plasma instabilities such as tearing modes and energetic-particle-driven modes, but they can also be externally imposed by error fields or external magnetic coils. It is commonly observed that large MHD modes lead to plasma locking (no rotation) due to torque produced by eddy currents on the wall, and it is predicted that a stochastic field induces flow damping where the radial electric field is reduced. Flow generation is of great importance to fusion plasma research, especially for low-torque devices like ITER, as it can act to improve performance. Here we describe new measurements in the MST reversed field pinch (RFP) showing that the coherent interaction of magnetic and particle density fluctuations can produce a turbulent fluctuation-induced kinetic force, which acts to drive intrinsic plasma rotation. Key observations include: (1) the average kinetic force resulting from density fluctuations, ~0.5 N/m^3, is comparable to the intrinsic flow acceleration, and (2) between sawtooth crashes, the spatial distribution of the kinetic force is directed to create a sheared parallel flow profile that is consistent with the measured flow profile in direction and amplitude, suggesting the kinetic force is responsible for intrinsic plasma rotation.

  20. Improving estimates of streamflow characteristics by using Landsat-1 imagery

    USGS Publications Warehouse

    Hollyday, Este F.

    1976-01-01

    Imagery from the first Earth Resources Technology Satellite (renamed Landsat-1) was used to discriminate physical features of drainage basins in an effort to improve equations used to estimate streamflow characteristics at gaged and ungaged sites. Records of 20 gaged basins in the Delmarva Peninsula of Maryland, Delaware, and Virginia were analyzed for 40 statistical streamflow characteristics. Equations relating these characteristics to basin characteristics were obtained by a technique of multiple linear regression. A control group of equations contains basin characteristics derived from maps. An experimental group of equations contains basin characteristics derived from maps and imagery. Characteristics from imagery were forest, riparian (streambank) vegetation, water, and combined agricultural and urban land use. These basin characteristics were isolated photographically by techniques of film-density discrimination. The area of each characteristic in each basin was measured photometrically. Comparison of equations in the control group with corresponding equations in the experimental group reveals that for 12 out of 40 equations the standard error of estimate was reduced by more than 10 percent. As an example, the standard error of estimate of the equation for the 5-year recurrence-interval flood peak was reduced from 46 to 32 percent. Similarly, the standard error of the equation for the mean monthly flow for September was reduced from 32 to 24 percent, the standard error for the 7-day, 2-year recurrence low flow was reduced from 136 to 102 percent, and the standard error for the 3-day, 2-year flood volume was reduced from 30 to 12 percent. It is concluded that data from Landsat imagery can substantially improve the accuracy of estimates of some streamflow characteristics at sites in the Delmarva Peninsula.

  1. Spatially-resolved temperature diagnostic for supersonic flow using cross-beam Doppler-limited laser saturation spectroscopy

    NASA Astrophysics Data System (ADS)

    Phillips, Grady T.

    Optical techniques for measuring the temperature in three-dimensional supersonic reactive flows have typically depended on lineshape measurements using single-beam laser absorption spectroscopy. However, absorption over extended path lengths in flows with symmetric, turbulent eddies can lead to systematically high extracted temperatures due to Doppler shifts resulting from flow along the absorption path. To eliminate these problems and provide full three-dimensional spatial resolution, two variants of laser saturation spectroscopy have been developed and demonstrated, for the first time, which utilize two crossed and nearly copropagating laser beams. Individual rotational lines in the visible I2 X 1Σ(0g+) → B 3Π(0u+) transition were used to develop the two diagnostics to support research on the Chemical Oxygen-Iodine Laser (COIL), the weapon aboard the USAF Airborne Laser. Cross-Beam Saturation Absorption Spectroscopy (CBSAS) and Cross-Beam Inter-Modulated Fluorescence (CBIMF) were demonstrated as viable methods for recording the spectral signal of an I2 ro-vibrational line in a small three-dimensional volume using a tunable CW dye laser. Temperature is extracted by fitting the recorded signal with a theoretical signal constructed from the Doppler-broadened hyperfine components of the ro-vibrational line. The CBIMF technique proved successful for extracting the temperature of an I2-seeded Ar gas flow within a small, Mach 2, Laval nozzle where the overlap volume of the two 1 mm diameter laser beams was 2.4 mm3. At a test point downstream of the nozzle throat, the average temperature of 146 K +/- 1.5 K extracted from measurements of the I2 P(46) 17-1 spectral line compared favorably with the 138 K temperature calculated from isentropic, one-dimensional flow theory. CBIMF provides sufficient accuracy for characterizing the temperature of the gas flow in a COIL device, and could be applied to other areas of flow-field characterization and nozzle design. In contrast, the CBSAS signal was not sufficiently strong for reliable temperature extraction from the 2.4 mm3 overlap volume required in the nozzle experiments. Otherwise, the CBSAS technique could have greater success for application in flow-field test environments that allow the use of a larger overlap volume. CBIMF and CBSAS measurements were also made in a static cell at 293 K. At 50 mTorr of I2, the standard error in temperature from CBIMF measurements of the I2 P(46) 17-1 line was approximately 0.5 K. For CBSAS, the standard error in temperature was approximately 3 K at 50 mTorr of I2. Accuracy improved with increasing I2 pressure. In addition, the spatial-resolution capability of CBIMF and CBSAS was demonstrated in a static cell with an applied temperature gradient ranging from 300 to 365 K. Extracted temperatures were compared to thermocouple measurements at multiple positions in the gradient. Agreement between extracted temperatures and thermocouple measurements was better at the lower temperatures. Doppler-free measurements of several I2 hyperfine spectra were also performed to support development of the theoretical model. Saturation Absorption Spectroscopy was used to obtain Ar pressure broadening rates of 8.29 +/- 0.30 MHz/Torr for the I2 P(70) 17-1 hyperfine spectrum, and 10.70 +/- 0.41 MHz/Torr for the I2 P(10) 17-1 hyperfine spectrum.

  2. PRESAGE: Protecting Structured Address Generation against Soft Errors

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sharma, Vishal C.; Gopalakrishnan, Ganesh; Krishnamoorthy, Sriram

    Modern computer scaling trends in pursuit of larger component counts and power efficiency have, unfortunately, led to less reliable hardware and consequently soft errors escaping into application data ("silent data corruptions"). Techniques to enhance system resilience hinge on the availability of efficient error detectors that have high detection rates, low false positive rates, and low computational overhead. Unfortunately, efficient detectors to detect faults during address generation (to index large arrays) have not been widely researched. We present a novel lightweight compiler-driven technique called PRESAGE for detecting bit-flips affecting structured address computations. A key insight underlying PRESAGE is that any address computation scheme that flows an already incurred error is better than a scheme that corrupts one particular array access but otherwise (falsely) appears to compute perfectly. Enabling the flow of errors allows one to situate detectors at loop exit points, and helps turn silent corruptions into easily detectable error situations. Our experiments using the PolyBench benchmark suite indicate that PRESAGE-based error detectors have a high error-detection rate while incurring low overheads.

  3. Method for estimating low-flow characteristics of ungaged streams in Indiana

    USGS Publications Warehouse

    Arihood, Leslie D.; Glatfelter, Dale R.

    1991-01-01

    Equations for estimating the 7-day, 2-year and 7-day, 10-year low flows at sites on ungaged streams are presented. Regression analysis was used to develop equations relating basin characteristics and low-flow characteristics at 82 gaging stations. Significant basin characteristics in the equations are contributing drainage area and flow-duration ratio, which is the 20-percent flow duration divided by the 90-percent flow duration. Flow-duration ratio has been regionalized for Indiana on a plate. Ratios for use in the equations are obtained from the plate. Drainage areas are determined from maps or are obtained from reports. The predictive capability of the method was determined by tests of the equations and of the flow-duration ratios on the plate. The accuracy of the equations alone was tested by estimating the low-flow characteristics at 82 gaging stations where flow-duration ratio is already known. In this case, the standard errors of estimate for 7-day, 2-year and 7-day, 10-year low flows are 19 and 28 percent. When flow-duration ratios for the 82 gaging stations are obtained from the map, the standard errors are 46 and 61 percent. However, when stations having drainage areas of less than 10 square miles are excluded from the test, the standard errors decrease to 38 and 49 percent. Standard errors increase when stations with small basins are included, probably because some of the flow-duration ratios obtained for these small basins are incorrect. Local geology and its effect on the ratio are not adequately reflected on the plate, which shows the regional variation in flow-duration ratio. In all the tests, no bias is apparent areally, with increasing drainage area or with increasing ratio. Guidelines and limitations should be considered when using the method. The method can be applied only at sites in the northern and central physiographic zones of the State. Low-flow characteristics cannot be estimated for regulated streams unless the amount of regulation is known so that the estimated low-flow characteristic can be adjusted. The method is most accurate for sites having drainage areas ranging from 10 to 1,000 square miles and for predictions of 7-day, 10-year low flows ranging from 0.5 to 340 cubic feet per second.
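
    For illustration, the flow-duration ratio used in the regression could be computed from a daily streamflow record as below, taking the 20-percent flow duration as the flow exceeded 20 percent of the time and the 90-percent flow duration as the flow exceeded 90 percent of the time; the synthetic record and variable names are hypothetical.

```python
import numpy as np

def flow_duration_ratio(daily_flows):
    """Ratio of the 20-percent to the 90-percent flow duration (exceedance flows)."""
    q20 = np.percentile(daily_flows, 80)   # flow exceeded 20% of the time
    q90 = np.percentile(daily_flows, 10)   # flow exceeded 90% of the time
    return q20 / q90

# Hypothetical ten-year record of log-normally distributed daily flows (cubic feet per second)
rng = np.random.default_rng(1)
flows = rng.lognormal(mean=3.0, sigma=1.0, size=3650)
print(f"flow-duration ratio: {flow_duration_ratio(flows):.2f}")
```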

  4. Method for estimating low-flow characteristics of ungaged streams in Indiana

    USGS Publications Warehouse

    Arihood, L.D.; Glatfelter, D.R.

    1986-01-01

    Equations for estimating the 7-day, 2-yr and 7-day, 10-yr low flows at sites on ungaged streams are presented. Regression analysis was used to develop equations relating basin characteristics and low flow characteristics at 82 gaging stations. Significant basin characteristics in the equations are contributing drainage area and flow duration ratio, which is the 20% flow duration divided by the 90% flow duration. Flow duration ratio has been regionalized for Indiana on a plate. Ratios for use in the equations are obtained from this plate. Drainage areas are determined from maps or are obtained from reports. The predictive capability of the method was determined by tests of the equations and of the flow duration ratios on the plate. The accuracy of the equations alone was tested by estimating the low flow characteristics at 82 gaging stations where flow duration ratio is already known. In this case, the standard errors of estimate for 7-day, 2-yr and 7-day, 10-yr low flows are 19% and 28%. When flow duration ratios for the 82 gaging stations are obtained from the map, the standard errors are 46% and 61%. However, when stations with drainage areas < 10 sq mi are excluded from the test, the standard errors reduce to 38% and 49%. Standard errors increase when stations with small basins are included, probably because some of the flow duration ratios obtained for these small basins are incorrect. Local geology and its effect on the ratio are not adequately reflected on the plate, which shows the regional variation in flow duration ratio. In all the tests, no bias is apparent areally, with increasing drainage area or with increasing ratio. Guidelines and limitations should be considered when using the method. The method can be applied only at sites in the northern and the central physiographic zones of the state. Low flow characteristics cannot be estimated for regulated streams unless the amount of regulation is known so that the estimated low flow characteristic can be adjusted. The method is most accurate for sites with drainage areas ranging from 10 to 1,000 sq mi and for predictions of 7-day, 10-yr low flows ranging from 0.5 to 340 cu ft/sec. (Author's abstract)

  5. Field, laboratory and numerical approaches to studying flow through mangrove pneumatophores

    NASA Astrophysics Data System (ADS)

    Chua, V. P.

    2014-12-01

    The circulation of water in riverine mangrove swamps is expected to be influenced by mangrove roots, which in turn affect nutrient, pollutant and sediment transport in these systems. Field studies were carried out in mangrove areas along the coastline of Singapore where Avicennia marina and Sonneratia alba pneumatophore species are found. Geometrical properties, such as height, diameter and spatial density of the mangrove roots, were assessed through the use of photogrammetric methods. Samples of these roots were harvested from mangrove swamps and their material properties, such as bending strength and Young's modulus, were determined in the laboratory. It was found that the pneumatophores under hydrodynamic loadings in a mangrove environment could be regarded as fairly rigid. Artificial root models of pneumatophores were fabricated by downscaling based on field observations of mangroves. Flume experiments were performed and measurements of mean flow velocities, Reynolds stress and turbulent kinetic energy were made. The boundary layer formed over the vegetation patch is fully developed after x = 6 m with a linear mean velocity profile. High shear stresses and turbulent kinetic energy were observed at the interface between the top portion of the roots and the upper flow. The experimental data were employed to calibrate and validate three-dimensional simulations of flow in pneumatophores. The simulations were performed with the Delft3D-FLOW model, where the vegetation effect is introduced by adding a depth-distributed resistance force and modifying the k-ɛ turbulence model. The model-predicted profiles for mean velocity, turbulent kinetic energy and concentration were compared with experimental data. The model calibration is performed by adjusting the horizontal and vertical eddy viscosities and diffusivities. A skill assessment of the model is performed using statistical measures that include the Pearson correlation coefficient (r), the mean absolute error (MAE), and the root-mean-squared error (RMSE).
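
    The skill measures named above are simple to reproduce; a minimal sketch with hypothetical observed and modelled velocity series follows (the numbers are illustrative, not data from the flume experiments).

```python
import numpy as np

def skill(observed, modelled):
    """Pearson r, mean absolute error and root-mean-squared error."""
    observed, modelled = np.asarray(observed), np.asarray(modelled)
    r = np.corrcoef(observed, modelled)[0, 1]
    mae = np.mean(np.abs(modelled - observed))
    rmse = np.sqrt(np.mean((modelled - observed) ** 2))
    return r, mae, rmse

obs = [0.12, 0.15, 0.18, 0.21, 0.19]   # hypothetical measured velocities, m/s
mod = [0.11, 0.16, 0.17, 0.22, 0.20]   # hypothetical model output, m/s
print("r = %.3f, MAE = %.3f m/s, RMSE = %.3f m/s" % skill(obs, mod))
```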

  6. Digital Analysis and Sorting of Fluorescence Lifetime by Flow Cytometry

    PubMed Central

    Houston, Jessica P.; Naivar, Mark A.; Freyer, James P.

    2010-01-01

    Frequency-domain flow cytometry techniques are combined with modifications to the digital signal processing capabilities of the Open Reconfigurable Cytometric Acquisition System (ORCAS) to analyze fluorescence decay lifetimes and control sorting. Real-time fluorescence lifetime analysis is accomplished by rapidly digitizing correlated, radiofrequency modulated detector signals, implementing Fourier analysis programming with ORCAS’ digital signal processor (DSP) and converting the processed data into standard cytometric list mode data. To systematically test the capabilities of the ORCAS 50 MS/sec analog-to-digital converter (ADC) and our DSP programming, an error analysis was performed using simulated light scatter and fluorescence waveforms (0.5–25 ns simulated lifetime), pulse widths ranging from 2 to 15 µs, and modulation frequencies from 2.5 to 16.667 MHz. The standard deviations of digitally acquired lifetime values ranged from 0.112 to >2 ns, corresponding to errors in actual phase shifts from 0.0142° to 1.6°. The lowest coefficients of variation (<1%) were found for 10-MHz modulated waveforms having pulse widths of 6 µs and simulated lifetimes of 4 ns. Direct comparison of the digital analysis system to a previous analog phase-sensitive flow cytometer demonstrated similar precision and accuracy on measurements of a range of fluorescent microspheres, unstained cells and cells stained with three common fluorophores. Sorting based on fluorescence lifetime was accomplished by adding analog outputs to ORCAS and interfacing with a commercial cell sorter with a radiofrequency modulated solid-state laser. Two populations of fluorescent microspheres with overlapping fluorescence intensities but different lifetimes (2 and 7 ns) were separated to ~98% purity. Overall, the digital signal acquisition and processing methods we introduce present a simple yet robust approach to phase-sensitive measurements in flow cytometry. The ability to simply and inexpensively implement this system on a commercial flow sorter will both allow better dissemination of this technology and better exploit the traditionally underutilized parameter of fluorescence lifetime. PMID:20662090
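
    For a single-exponential decay, frequency-domain phase-sensitive detection relates the measured phase shift to the lifetime through tan(φ) = ωτ. The sketch below shows only this conversion step, with an illustrative phase and modulation frequency; the ORCAS processing itself performs Fourier analysis of the digitized waveforms.

```python
import numpy as np

def lifetime_from_phase(phase_deg, mod_freq_hz):
    """Single-exponential fluorescence lifetime from a frequency-domain phase shift."""
    omega = 2 * np.pi * mod_freq_hz
    return np.tan(np.radians(phase_deg)) / omega

# A 10 MHz modulated signal delayed by about 14.1 degrees corresponds to roughly 4 ns
print(f"{lifetime_from_phase(14.1, 10e6) * 1e9:.2f} ns")
```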

  7. Low Reynolds number wind tunnel measurements - The importance of being earnest

    NASA Technical Reports Server (NTRS)

    Mueller, Thomas J.; Batill, Stephen M.; Brendel, Michael; Perry, Mark L.; Bloch, Diane R.

    1986-01-01

    A method for obtaining two-dimensional aerodynamic force coefficients at low Reynolds numbers using a three-component external platform balance is presented. Regardless of method, however, the importance of understanding the possible influence of the test facility and instrumentation on the final results cannot be overstated. There is an uncertainty in the ability of the facility to simulate a two-dimensional flow environment due to the confinement effect of the wind tunnel and the method used to mount the airfoil. Additionally, the ability of the instrumentation to accurately measure forces and pressures has an associated uncertainty. This paper focuses on efforts taken to understand the errors introduced by the techniques and apparatus used at the University of Notre Dame, and the importance of making an earnest estimate of the uncertainty. Although quantitative estimates of facility-induced errors are difficult to obtain, the uncertainty in measured results can be handled in a straightforward manner and provide the experimentalist, and others, with a basis to evaluate experimental results.
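
    One common way to make such an "earnest estimate" concrete is a root-sum-square combination of the individual fractional error sources, in the spirit of standard uncertainty analysis; the sources and magnitudes below are hypothetical placeholders, not values from the Notre Dame facility.

```python
import numpy as np

# Hypothetical fractional uncertainties contributing to a force-coefficient measurement
sources = {
    "balance calibration": 0.010,
    "dynamic pressure":    0.015,
    "model geometry":      0.005,
    "flow nonuniformity":  0.020,
}
total = np.sqrt(sum(u ** 2 for u in sources.values()))   # root-sum-square combination
print(f"combined relative uncertainty: {total:.1%}")
```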

  8. A comparative study of artificial neural network, adaptive neuro fuzzy inference system and support vector machine for forecasting river flow in the semiarid mountain region

    NASA Astrophysics Data System (ADS)

    He, Zhibin; Wen, Xiaohu; Liu, Hu; Du, Jun

    2014-02-01

    Data driven models are very useful for river flow forecasting when the underlying physical relationships are not fully understood, but it is not clear whether these data driven models still perform well in the small river basins of semiarid mountain regions, which have complicated topography. In this study, the potential of three different data driven methods, artificial neural network (ANN), adaptive neuro fuzzy inference system (ANFIS) and support vector machine (SVM), was evaluated for forecasting river flow in a semiarid mountain region of northwestern China. The models were tested with different combinations of antecedent river flow values, and the appropriate input vector was selected based on an analysis of residuals. The performance of the ANN, ANFIS and SVM models in the training and validation sets was compared with the observed data. The model consisting of three antecedent values of flow was selected as the best fit model for river flow forecasting. To obtain a more accurate evaluation of the results of the ANN, ANFIS and SVM models, four quantitative standard statistical performance evaluation measures, the coefficient of correlation (R), root mean squared error (RMSE), Nash-Sutcliffe efficiency coefficient (NS) and mean absolute relative error (MARE), were employed to evaluate the performances of the various models developed. The results indicate that the performance obtained by ANN, ANFIS and SVM in terms of the different evaluation criteria during the training and validation periods does not vary substantially; the performance of the ANN, ANFIS and SVM models in river flow forecasting was satisfactory. A detailed comparison of the overall performance indicated that the SVM model performed better than ANN and ANFIS in river flow forecasting for the validation data sets. The results also suggest that the ANN, ANFIS and SVM methods can be successfully applied to establish river flow forecasting models in semiarid mountain regions with complicated topography.
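
    As a rough sketch of the SVM variant, the example below builds input vectors from three antecedent flow values, fits a support vector regressor and scores it with the Nash-Sutcliffe coefficient; the synthetic flow series and hyperparameters are illustrative, not those of the study.

```python
import numpy as np
from sklearn.svm import SVR

def lagged_matrix(flow, lags=3):
    """Rows of [Q(t-3), Q(t-2), Q(t-1)] paired with target Q(t)."""
    X = np.column_stack([flow[i:len(flow) - lags + i] for i in range(lags)])
    y = flow[lags:]
    return X, y

def nash_sutcliffe(obs, sim):
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum((obs - obs.mean()) ** 2)

rng = np.random.default_rng(2)
flow = 10 + 5 * np.sin(np.linspace(0, 20, 600)) + rng.normal(0, 0.5, 600)  # synthetic daily flow
X, y = lagged_matrix(flow, lags=3)
split = 450                                              # training / validation split
model = SVR(C=10.0, epsilon=0.1).fit(X[:split], y[:split])
print("validation NS:", round(nash_sutcliffe(y[split:], model.predict(X[split:])), 3))
```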

  9. An experimental verification of laser-velocimeter sampling bias and its correction

    NASA Technical Reports Server (NTRS)

    Johnson, D. A.; Modarress, D.; Owen, F. K.

    1982-01-01

    The existence of 'sampling bias' in individual-realization laser velocimeter measurements is experimentally verified and shown to be independent of sample rate. The experiments were performed in a simple two-stream mixing shear flow with the standard for comparison being laser-velocimeter results obtained under continuous-wave conditions. It is also demonstrated that the errors resulting from sampling bias can be removed by a proper interpretation of the sampling statistics. In addition, data obtained in a shock-induced separated flow and in the near-wake of airfoils are presented, both bias-corrected and uncorrected, to illustrate the effects of sampling bias in the extreme.
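
    A typical velocity-bias correction reweights each individual realization so that faster particles, which cross the probe volume more often, do not dominate the mean. The sketch below uses inverse-velocity-magnitude weights as one common choice; it is an illustration of the general idea rather than the exact estimator applied in the paper.

```python
import numpy as np

def bias_corrected_mean(velocities):
    """Weight each individual realization by 1/|u| to counter velocity bias."""
    u = np.asarray(velocities, dtype=float)
    w = 1.0 / np.abs(u)
    return np.sum(w * u) / np.sum(w)

rng = np.random.default_rng(3)
true_u = rng.normal(10.0, 3.0, 20000)                 # hypothetical turbulent velocities, m/s
p_detect = np.abs(true_u) / np.abs(true_u).max()      # arrival rate proportional to speed
sampled = true_u[rng.random(true_u.size) < p_detect]  # individual-realization (biased) samples
print("biased mean:", sampled.mean(), " corrected mean:", bias_corrected_mean(sampled))
```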

  10. Estimation of Blood Flow Rates in Large Microvascular Networks

    PubMed Central

    Fry, Brendan C.; Lee, Jack; Smith, Nicolas P.; Secomb, Timothy W.

    2012-01-01

    Objective Recent methods for imaging microvascular structures provide geometrical data on networks containing thousands of segments. Prediction of functional properties, such as solute transport, requires information on blood flow rates also, but experimental measurement of many individual flows is difficult. Here, a method is presented for estimating flow rates in a microvascular network based on incomplete information on the flows in the boundary segments that feed and drain the network. Methods With incomplete boundary data, the equations governing blood flow form an underdetermined linear system. An algorithm was developed that uses independent information about the distribution of wall shear stresses and pressures in microvessels to resolve this indeterminacy, by minimizing the deviation of pressures and wall shear stresses from target values. Results The algorithm was tested using previously obtained experimental flow data from four microvascular networks in the rat mesentery. With two or three prescribed boundary conditions, predicted flows showed relatively small errors in most segments and fewer than 10% incorrect flow directions on average. Conclusions The proposed method can be used to estimate flow rates in microvascular networks, based on incomplete boundary data and provides a basis for deducing functional properties of microvessel networks. PMID:22506980
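
    The resolution of the underdetermined system can be illustrated with a weighted least-squares formulation: hard equations (mass conservation and known boundary flows) are stacked with weakly weighted equations pulling the solution toward target values. The single-node toy network, weights and targets below are purely illustrative and far simpler than the wall-shear-stress and pressure formulation of the paper.

```python
import numpy as np

# Toy network: segments 0,1 feed one node; segments 2,3 drain it (flows in nl/min).
# Conservation at the node: q0 + q1 - q2 - q3 = 0. Only one boundary flow is known.
A_hard = np.array([[1.0, 1.0, -1.0, -1.0],     # mass conservation
                   [1.0, 0.0,  0.0,  0.0]])    # known boundary inflow q0
b_hard = np.array([0.0, 5.0])

q_target = np.array([5.0, 3.0, 4.0, 4.0])      # targets standing in for shear/pressure priors
lam = 0.1                                      # weight of the soft (target) equations

A = np.vstack([A_hard, lam * np.eye(4)])
b = np.concatenate([b_hard, lam * q_target])
q, *_ = np.linalg.lstsq(A, b, rcond=None)      # minimizes deviation from targets subject (softly) to the hard equations
print("estimated segment flows:", np.round(q, 2))
```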

  11. A High-Pressure Bi-Directional Cycloid Rotor Flowmeter

    PubMed Central

    Liu, Shuo; Ding, Fan; Ding, Chuan; Man, Zaipeng

    2014-01-01

    The measurement of the flow rate of various liquids and gases is critical in industrial automation. Rotary positive displacement meters (rotary PD meters) are highly accurate flowmeters that are widely employed in engineering applications, especially in custody transfer operations and hydraulic control systems. This paper presents a high pressure rotary PD meter containing a pair of internal cycloid rotors. It has the advantages of concise structure, low pressure loss, high accuracy and low noise. The curve of the internal rotor is designed as an equidistant curtate epicycloid curve with the external rotor curve as its conjugate. The calculation method used to determine the displacement of the cycloid rotor flowmeter is discussed. A prototype was fabricated, and experiments were performed to confirm measurements over a flow range of 1–100 L/min with relative errors of less than ±0.5%. The pressure loss through the flowmeter was about 3 bar at a flow rate of 100 L/min. PMID:25196162

  12. [Effect of physical properties of respiratory gas on pneumotachographic measurement of ventilation in newborn infants].

    PubMed

    Foitzik, B; Schmalisch, G; Wauer, R R

    1994-04-01

    The measurement of ventilation in neonates has a number of specific characteristics; in contrast to lung function testing in adults, the inspiratory gas for neonates is often conditioned. In pneumotachographs (PNT) based on Hagen-Poiseuille's law, changes in the physical characteristics of the respiratory gas (temperature, humidity, pressure and oxygen fraction [FiO2]) produce a volume change as calculated with the ideal gas equation p*V/T = const; in addition, the viscosity of the gas is also changed, thus leading to measuring errors. In clinical practice, the effect of viscosity on volume measurement is often ignored. The accuracy of these empirical laws was investigated in a size 0 Fleisch-PNT using a flow-through technique and variously processed respiratory gas. Spontaneous breathing was simulated with the aid of a calibration syringe (20 ml) and a rate of 30 min-1. The largest change in viscosity (11.6% at 22 degrees C and dry gas) is found with an increase in FiO2 (21...100%). A rise in temperature from 24 to 35 degrees C (dry air) produced an increase in viscosity of 5.2%. An increase of humidity (0...90%, 35 degrees C) decreased the viscosity by 3%. A partial compensation of these viscosity errors is thus possible. Pressure change (0...50 mbar, under ambient conditions) caused no measurable viscosity error. With the exception of temperature, the measurements showed good agreement between the measured volume errors and those calculated from viscosity changes. If the respiratory gas differs from ambient air (e.g. elevated FiO2) or if the PNT is calibrated under BTPS conditions, changes in viscosity must not be neglected when performing accurate ventilation measurements. On the basis of the well-known physical laws of Dalton, Thiesen and Sutherland, a numerical correction of adequate accuracy is possible.
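
    Because a Hagen-Poiseuille pneumotachograph infers flow from the pressure drop divided by viscosity, a respiratory-gas viscosity that differs from the calibration gas scales the reported volume by the viscosity ratio. A minimal sketch of such a correction is shown below; the 11.6% figure is taken from the text, while the function name and calibration reference are illustrative assumptions.

```python
def corrected_volume(reported_volume, eta_calibration, eta_respiratory_gas):
    """Correct a Hagen-Poiseuille pneumotachograph reading for gas viscosity.

    The device converts its pressure drop to flow assuming the calibration-gas
    viscosity, so the reported volume is too high by eta_gas/eta_cal when the
    respiratory gas is more viscous than the calibration gas.
    """
    return reported_volume * eta_calibration / eta_respiratory_gas

# Illustrative numbers: calibration with dry air, measurement in pure O2
# (about 11.6% more viscous, as quoted above), 20 ml syringe stroke.
v_true = corrected_volume(reported_volume=20.0, eta_calibration=1.0, eta_respiratory_gas=1.116)
print(f"corrected volume: {v_true:.1f} ml")
```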

  13. Predicting areas of sustainable error growth in quasigeostrophic flows using perturbation alignment properties

    NASA Astrophysics Data System (ADS)

    Rivière, G.; Hua, B. L.

    2004-10-01

    A new perturbation initialization method is used to quantify error growth due to inaccuracies of the forecast model initial conditions in a quasigeostrophic box ocean model describing a wind-driven double gyre circulation. This method is based on recent analytical results on Lagrangian alignment dynamics of the perturbation velocity vector in quasigeostrophic flows. More specifically, it consists in initializing a unique perturbation, from the sole knowledge of the control flow properties at the initial time of the forecast, whose velocity vector orientation satisfies a Lagrangian equilibrium criterion. This Alignment-based Initialization method is hereafter denoted as the AI method. In terms of the spatial distribution of the errors, the AI error forecast compares favorably with the mean error obtained with a Monte-Carlo ensemble prediction. It is shown that the AI forecast is on average as efficient as the error forecast initialized with the leading singular vector for the palenstrophy norm, and significantly more efficient than that for total energy and enstrophy norms. Furthermore, a more precise examination shows that the AI forecast is systematically relevant for all control flows whereas the palenstrophy singular vector forecast leads sometimes to very good scores and sometimes to very bad ones. A principal component analysis at the final time of the forecast shows that the AI mode spatial structure is comparable to that of the first eigenvector of the error covariance matrix for a "bred mode" ensemble. Furthermore, the kinetic energy of the AI mode grows at the same constant rate as that of the "bred modes" from the initial time to the final time of the forecast and is therefore characterized by a sustained phase of error growth. In this sense, the AI mode based on Lagrangian dynamics of the perturbation velocity orientation provides a rationale of the "bred mode" behavior.

  14. An automated system for performing continuous viscosity versus temperature measurements of fluids using an Ostwald viscometer

    NASA Astrophysics Data System (ADS)

    Beaulieu, L. Y.; Logan, E. R.; Gering, K. L.; Dahn, J. R.

    2017-09-01

    An automated system was developed to measure the viscosity of fluids as a function of temperature using image analysis tracking software. An Ostwald viscometer was placed in a three-wall dewar in which ethylene glycol was circulated using a thermal bath. The system collected continuous measurements during both heating and cooling cycles exhibiting no hysteresis. The use of video tracking analysis software greatly reduced the measurement errors associated with measuring the time required for the meniscus to pass through the markings on the viscometer. The stability of the system was assessed by performing 38 consecutive measurements of water at 42.50 ± 0.05 °C giving an average flow time of 87.7 ± 0.3 s. A device was also implemented to repeatedly deliver a constant volume of liquid of 11.00 ± 0.03 ml leading to an average error in the viscosity of 0.04%. As an application, the system was used to measure the viscosity of two Li-ion battery electrolyte solvents from approximately 10 to 40 °C with results showing excellent agreement with viscosity values calculated using Gering's Advanced Electrolyte Model (AEM).
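
    For an Ostwald viscometer the kinematic viscosity is proportional to the efflux time, ν = K·t, with the constant K obtained from a reference fluid. The sketch below uses the 87.7 s water flow time quoted above as the calibration point; the water viscosity value is approximate and the electrolyte flow time is hypothetical.

```python
# Calibration: water at 42.5 degC with the 87.7 s average flow time reported above.
NU_WATER = 0.63e-6            # approximate kinematic viscosity of water at 42.5 degC, m^2/s
T_WATER = 87.7                # measured efflux time of water, s
K = NU_WATER / T_WATER        # viscometer constant, (m^2/s) per second of efflux time

def kinematic_viscosity(flow_time_s):
    """Kinematic viscosity from an Ostwald-viscometer efflux time, nu = K * t."""
    return K * flow_time_s

# Hypothetical electrolyte-solvent measurement with a 250 s efflux time
print(f"{kinematic_viscosity(250.0) * 1e6:.2f} mm^2/s")
```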

  15. Evaluation of Event-Based Algorithms for Optical Flow with Ground-Truth from Inertial Measurement Sensor

    PubMed Central

    Rueckauer, Bodo; Delbruck, Tobi

    2016-01-01

    In this study we compare nine optical flow algorithms that locally measure the flow normal to edges according to accuracy and computation cost. In contrast to conventional, frame-based motion flow algorithms, our open-source implementations compute optical flow based on address-events from a neuromorphic Dynamic Vision Sensor (DVS). For this benchmarking we created a dataset of two synthesized and three real samples recorded from a 240 × 180 pixel Dynamic and Active-pixel Vision Sensor (DAVIS). This dataset contains events from the DVS as well as conventional frames to support testing state-of-the-art frame-based methods. We introduce a new source for the ground truth: In the special case that the perceived motion stems solely from a rotation of the vision sensor around its three camera axes, the true optical flow can be estimated using gyro data from the inertial measurement unit integrated with the DAVIS camera. This provides a ground-truth to which we can compare algorithms that measure optical flow by means of motion cues. An analysis of error sources led to the use of a refractory period, more accurate numerical derivatives and a Savitzky-Golay filter to achieve significant improvements in accuracy. Our pure Java implementations of two recently published algorithms reduce computational cost by up to 29% compared to the original implementations. Two of the algorithms introduced in this paper further speed up processing by a factor of 10 compared with the original implementations, at equal or better accuracy. On a desktop PC, they run in real-time on dense natural input recorded by a DAVIS camera. PMID:27199639
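
    One of the accuracy improvements mentioned above is a Savitzky-Golay filter applied when differentiating the sensor signals; a minimal sketch of smoothing and differentiating a noisy gyro trace with scipy follows, where the window length, polynomial order and synthetic data are illustrative.

```python
import numpy as np
from scipy.signal import savgol_filter

dt = 1e-3                                         # IMU sample period, s
t = np.arange(0, 2, dt)
rng = np.random.default_rng(4)
gyro = np.sin(2 * np.pi * 1.5 * t) + 0.05 * rng.standard_normal(t.size)  # noisy rate, rad/s

smoothed = savgol_filter(gyro, window_length=31, polyorder=3)             # denoised rate
angular_accel = savgol_filter(gyro, window_length=31, polyorder=3,
                              deriv=1, delta=dt)                          # smoothed derivative
print(smoothed[:3], angular_accel[:3])
```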

  16. A multi points ultrasonic detection method for material flow of belt conveyor

    NASA Astrophysics Data System (ADS)

    Zhang, Li; He, Rongjun

    2018-03-01

    Because single-point ultrasonic ranging produces large detection errors when the coal on a belt conveyor is unevenly distributed or consists of large lumps, a material flow detection method for belt conveyors was designed based on multi-point ultrasonic ranging. The method estimates the approximate cross-sectional area of the material by ranging multiple points on the surfaces of the material and the belt, and obtains the material flow from this area and the running speed of the belt conveyor. Test results show that the method has a smaller detection error than single-point ultrasonic ranging under the condition of large, unevenly distributed coal.
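
    In essence, the multi-point approach integrates the measured height profile across the belt and multiplies the resulting cross-sectional area by the belt speed. The sketch below uses trapezoidal integration as a stand-in for whatever sectional-area formula the paper applies; sensor positions, heights and belt speed are hypothetical.

```python
import numpy as np

def material_flow(sensor_positions_m, material_heights_m, belt_speed_m_s):
    """Approximate volumetric flow: cross-sectional area (trapezoid rule) times belt speed."""
    area = np.trapz(material_heights_m, x=sensor_positions_m)   # m^2
    return area * belt_speed_m_s                                # m^3/s

positions = [0.0, 0.2, 0.4, 0.6, 0.8]        # measurement locations across the belt, m
heights = [0.00, 0.12, 0.20, 0.10, 0.00]     # coal height above the belt at each point, m
print(f"{material_flow(positions, heights, belt_speed_m_s=2.5):.3f} m^3/s")
```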

  17. New approach for simulating groundwater flow in discrete fracture network

    NASA Astrophysics Data System (ADS)

    Fang, H.; Zhu, J.

    2017-12-01

    In this study, we develop a new approach to calculate groundwater flowrate and hydraulic head distribution in a two-dimensional discrete fracture network (DFN) where both laminar and turbulent flows co-exist in individual fractures. The cubic law is used to calculate hydraulic head distribution and flow behaviors in fractures where flow is laminar, while Forchheimer's law is used to quantify turbulent flow behaviors. The Reynolds number is used to distinguish flow characteristics in individual fractures. The combination of linear and non-linear equations is solved iteratively to determine flowrates in all fractures and hydraulic heads at all intersections. We examine the potential errors in both flowrate and hydraulic head that arise from assuming a single flow law throughout the network. Applying the cubic law in all fractures regardless of actual flow conditions overestimates the flowrate when turbulent flow may exist, while applying Forchheimer's law indiscriminately underestimates the flowrate when laminar flow exists in the network. The contrast of apertures of large and small fractures in the DFN has a significant impact on the potential errors of using only the cubic law or Forchheimer's law. Both the cubic law and Forchheimer's law simulate similar hydraulic head distributions, as the main difference between the two approaches lies in the predicted flowrates. Fracture irregularity does not significantly affect the potential errors from using only the cubic law or Forchheimer's law if the network configuration remains similar. Relative density of fractures does not significantly affect the relative performance of the cubic law and Forchheimer's law.
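
    A per-fracture sketch of the switching idea is shown below: the cubic law gives the flowrate directly, and when the resulting Reynolds number exceeds a threshold the flowrate is instead taken from the positive root of a Forchheimer-type quadratic. The coefficients, threshold and Reynolds-number definition are illustrative placeholders, not the calibrated quantities of the study.

```python
import numpy as np

MU, RHO = 1.0e-3, 1000.0        # water viscosity (Pa s) and density (kg/m^3)

def fracture_flow(aperture, width, length, dp, re_crit=10.0, beta=1.0e5):
    """Flowrate in one fracture: cubic law if laminar, Forchheimer-type law otherwise.

    beta (non-Darcy coefficient) and re_crit are illustrative placeholders.
    """
    grad = dp / length
    a = 12.0 * MU / (width * aperture**3)          # linear (cubic-law) coefficient
    q = grad / a                                   # cubic-law flowrate, m^3/s
    re = RHO * q / (width * MU)                    # a simple fracture Reynolds number
    if re > re_crit:                               # turbulent: solve a*q + b*q^2 = grad for q
        b = beta * RHO / (width**2 * aperture**2)
        q = (-a + np.sqrt(a**2 + 4.0 * b * grad)) / (2.0 * b)
    return q

print(fracture_flow(aperture=1e-3, width=0.1, length=1.0, dp=5e3))
```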

  18. Computational analysis and preliminary redesign of the nozzle contour of the Langley hypersonic CF4 tunnel

    NASA Technical Reports Server (NTRS)

    Thompson, R. A.; Sutton, Kenneth

    1987-01-01

    A computational analysis, modification, and preliminary redesign study was performed on the nozzle contour of the Langley Hypersonic CF4 Tunnel. This study showed that the existing nozzle was contoured incorrectly for the design operating condition, and this error was shown to produce the measured disturbances in the exit flow field. A modified contour was designed for the current nozzle downstream of the maximum turning point that would provide a uniform exit flow. New nozzle contours were also designed for an exit Mach number and Reynolds number combination which matches that attainable in the Langley 20-Inch Mach 6 Tunnel. Two nozzle contours were designed: one having the same exit radius but a larger mass flow rate than that of the existing CF4 Tunnel, and the other having the same mass flow rate but a smaller exit radius than that of the existing CF4 Tunnel.

  19. Estimation of Reynolds number for flows around cylinders with lattice Boltzmann methods and artificial neural networks.

    PubMed

    Carrillo, Mauricio; Que, Ulices; González, José A

    2016-12-01

    The present work investigates the application of artificial neural networks (ANNs) to estimate the Reynolds (Re) number for flows around a cylinder. The data required to train the ANN was generated with our own implementation of a lattice Boltzmann method (LBM) code performing simulations of a two-dimensional flow around a cylinder. As results of the simulations, we obtain the velocity field (v⃗) and the vorticity (∇ × v⃗) of the fluid for 120 different values of Re measured at different distances from the obstacle and use them to teach the ANN to predict the Re. The results predicted by the networks show good accuracy with errors of less than 4% in all the studied cases. One of the possible applications of this method is the development of an efficient tool to characterize a blocked flowing pipe.
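
    A rough sketch of the regression stage is given below: a small feed-forward network maps a handful of flow-derived features to Re. The synthetic features merely stand in for the velocity and vorticity samples that the LBM simulations would provide, and the network size and training settings are arbitrary.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(5)
re_values = rng.uniform(10, 200, 120)        # 120 Reynolds numbers, as in the study

# Stand-in features: in the study these would be velocity/vorticity samples from the LBM runs
features = np.column_stack([np.log(re_values), 1.0 / re_values, np.sqrt(re_values)])
features += 0.01 * rng.standard_normal(features.shape)

ann = MLPRegressor(hidden_layer_sizes=(16, 16), max_iter=5000, random_state=0)
ann.fit(features[:100], re_values[:100])     # train on 100 cases, hold out 20
pred = ann.predict(features[100:])
rel_err = np.abs(pred - re_values[100:]) / re_values[100:]
print(f"mean relative error on held-out cases: {100 * rel_err.mean():.1f}%")
```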

  20. Variability of computational fluid dynamics solutions for pressure and flow in a giant aneurysm: the ASME 2012 Summer Bioengineering Conference CFD Challenge.

    PubMed

    Steinman, David A; Hoi, Yiemeng; Fahy, Paul; Morris, Liam; Walsh, Michael T; Aristokleous, Nicolas; Anayiotos, Andreas S; Papaharilaou, Yannis; Arzani, Amirhossein; Shadden, Shawn C; Berg, Philipp; Janiga, Gábor; Bols, Joris; Segers, Patrick; Bressloff, Neil W; Cibis, Merih; Gijsen, Frank H; Cito, Salvatore; Pallarés, Jordi; Browne, Leonard D; Costelloe, Jennifer A; Lynch, Adrian G; Degroote, Joris; Vierendeels, Jan; Fu, Wenyu; Qiao, Aike; Hodis, Simona; Kallmes, David F; Kalsi, Hardeep; Long, Quan; Kheyfets, Vitaly O; Finol, Ender A; Kono, Kenichi; Malek, Adel M; Lauric, Alexandra; Menon, Prahlad G; Pekkan, Kerem; Esmaily Moghadam, Mahdi; Marsden, Alison L; Oshima, Marie; Katagiri, Kengo; Peiffer, Véronique; Mohamied, Yumnah; Sherwin, Spencer J; Schaller, Jens; Goubergrits, Leonid; Usera, Gabriel; Mendina, Mariana; Valen-Sendstad, Kristian; Habets, Damiaan F; Xiang, Jianping; Meng, Hui; Yu, Yue; Karniadakis, George E; Shaffer, Nicholas; Loth, Francis

    2013-02-01

    Stimulated by a recent controversy regarding pressure drops predicted in a giant aneurysm with a proximal stenosis, the present study sought to assess variability in the prediction of pressures and flow by a wide variety of research groups. In phase I, lumen geometry, flow rates, and fluid properties were specified, leaving each research group to choose their solver, discretization, and solution strategies. Variability was assessed by having each group interpolate their results onto a standardized mesh and centerline. For phase II, a physical model of the geometry was constructed, from which pressure and flow rates were measured. Groups repeated their simulations using a geometry reconstructed from a micro-computed tomography (CT) scan of the physical model with the measured flow rates and fluid properties. Phase I results from 25 groups demonstrated remarkable consistency in the pressure patterns, with the majority predicting peak systolic pressure drops within 8% of each other. Aneurysm sac flow patterns were more variable with only a few groups reporting peak systolic flow instabilities owing to their use of high temporal resolutions. Variability for phase II was comparable, and the median predicted pressure drops were within a few millimeters of mercury of the measured values but only after accounting for submillimeter errors in the reconstruction of the life-sized flow model from micro-CT. In summary, pressure can be predicted with consistency by CFD across a wide range of solvers and solution strategies, but this may not hold true for specific flow patterns or derived quantities. Future challenges are needed and should focus on hemodynamic quantities thought to be of clinical interest.
